--- abstract: 'Binary Chandrasekhar-mass white dwarfs accreting mass from non-degenerate stellar companions through the single-degenerate channel have reigned for decades as the leading explanation of Type Ia supernovae. Yet a comprehensive theoretical picture of the expected properties of the canonical near-Chandrasekhar-mass white dwarf model has not yet emerged. A simmering phase within the convective core of the white dwarf leads to the ignition of one or more flame bubbles scattered across the core. Consequently, near-Chandrasekhar-mass single-degenerate SNe Ia are inherently stochastic, and are expected to lead to a range of outcomes, from subluminous SN 2002cx-like events to overluminous SN 1991T-like events. However, all prior simulations of the single-degenerate channel carried through the detonation phase have set the ignition points as free parameters. In this work, for the first time, we place ignition points as predicted by [*ab initio*]{} models of the convective phase leading up to ignition, and follow through the detonation phase in fully three-dimensional simulations. Single-degenerates in this framework are characteristically overluminous. Using a statistical approach, we determine the $^{56}$Ni mass distribution arising from stochastic ignition. While there is a total spread of $\gtrsim 0.2 M_{\odot}$ for detonating models, the distribution is strongly left-skewed, with a narrow standard deviation of $\simeq 0.03 M_{\odot}$. Conversely, if single-degenerates are not overluminous but primarily yield normal or failed events, then the models require fine-tuning of the ignition parameters, or otherwise require revised physics or progenitor models. We discuss implications of our findings for the modeling of single-degenerate SNe Ia.' 
author: - Chris Byrohl - Robert Fisher - Dean Townsley bibliography: - 'converted\_to\_latex.bib' title: 'The Intrinsic Stochasticity of the $^{56}$Ni Distribution of Single-Degenerate Type Ia Supernovae' --- Introduction ============ Numerous classic works identified white dwarfs accreting to near the Chandrasekhar mass $M_{\rm ch}$ in binary systems as candidate progenitors of Type Ia supernovae (SNe Ia) – e.g. @arnett69, @Whelan_Iben_1973, and @nomotoetal84. This classic picture was long thought to provide an explanation for the uniformity of brightnesses observed in SNe Ia [@phillips93]. The nature of the dominant production channel for SNe Ia has long been unclear [@BranchSearchProgenitorsType1995] and more recently, the classic picture of near $M_{\rm ch}$ progenitors has been substantially revised, with single-degenerates now widely believed to be rare in nature. The single-degenerate channel has been shown to be inconsistent with a range of constraints, including the delay-time distribution, the absence of hydrogen, and the absence of companions [@maozetal14]. Single-degenerates are also inconsistent with observational and theoretical rate predictions [@maozmannucci12]. However, recent observations have provided strong evidence that Chandrasekhar-mass white dwarf SNe Ia do occur in at least some systems in nature. Hard X-ray spectra of the 3C 397 supernova remnant (SNR) are consistent with electron captures which arise during nuclear burning at high densities typical of Chandrasekhar-mass white dwarfs [@yamaguchietal14; @yamaguchietal15]. Additional X-ray and infrared observations of the Kepler SNR suggest that it was an overluminous single-degenerate supernova [@katsudaetal15]. 
Furthermore, the pre-maximum light shock signature detected in both the normal SN Ia 2012cg [@marionetal16] and the subluminous SN Ia iPTF14atg [@caoetal15] is similar to theoretical predictions of the shock interaction with the companion star [@kasen10], although these observations have also been contested [@kromeretal16; @shappeeetal18]. A large body of theoretical and computational work has explored possible mechanisms for single-degenerate SNe Ia [@maozmannucci12]. Many single-degenerate explosion mechanisms begin with a deflagration in the convective core of a near-$M_{\rm Ch}$ WD [@nomotoetal84]. From this common starting point, authors have explored the possibility of pure deflagrations [@ropkeetal07b; @jordanetal12a; @kromeretal13], deflagration-to-detonation transitions (DDTs) (@khokhlov91 [@ropkeetal07a; @Seitenzahl3Dddt2013; @maloneetal14; @Martinez-Rodriguez2017; @daveetal17] and many more), and gravitationally-confined detonations (GCDs) [@plewaetal04; @ropkeetal07a; @townsleyetal07; @jordanetal08; @meakinetal09; @seitenzahletal16]. The viability of the proposed explosion mechanisms hinges crucially on the nature of the flame ignition during the convective phase. In particular, the GCD mechanism relies upon an offset ignition to buoyantly drive the flame bubble through breakout. Because the vigor of the GCD mechanism relies upon keeping the WD intact until the ash collides at a point opposite the breakout point, its viability is diminished as the ignitions become more centrally concentrated and multi-point. In contrast, a pure deflagration model produces good agreement with observations of the subclass of SNe Iax [@kromeretal13], but requires a vigorous deflagration phase with several simultaneous near-central ignitions. Both the pure deflagration and the GCD mechanism require that the flame surface does not undergo a transition to a detonation prior to breakout, as the DDT model does. 
Moreover, there exists the possibility that a detonation does not arise during the initial ash collision subsequent to breakout, and that the WD remains gravitationally bound, leading to a subsequent contraction and a detonation through the pulsationally-assisted GCD (PGCD) mechanism [@garciasenzbravo05; @jordanetal12a]. Because the ignition of a flame bubble in the convective core of the white dwarf is inherently stochastic, outcomes ranging from subluminous through overluminous SNe Ia are expected to arise in Chandrasekhar-mass SNe Ia. The ignition arises within a highly turbulent (Reynolds number Re $\sim 10^{15}$) convective flow [@isernetal17], with the detailed outcome critically dependent upon the high-end tail of the temperature distribution. For many years, the distribution of ignition points was poorly constrained by theory and simulation [@garciasenzwoosley95; @woosleyetal04]. Early studies suggested multi-point ignitions as a viable scenario, a picture revised only recently as it became possible to begin to simulate these crucial last minutes of the simmering phase in full 3D simulations. For example, @zingaleetal11 and @nonakaetal11 performed a numerical study for a WD with a central density of $2.2\times 10^{9}$ g cm$^{-3}$ and a central temperature of $6.25\times 10^{8}$ K to determine the probability distribution of hot spots triggering the deflagration phase. @zingaleetal11 demonstrated that most ignitions for the progenitor considered occur at a single point at radial offsets below 100 km from the center, and most likely at about 50 km. Consequently, these [*ab initio*]{} simulations point towards a low deflagration energy resulting from a single small, buoyancy-driven bubble ignition, in contrast to prior simulations, which often invoked multiple-bubble ignitions. 
It has been known for some time that such low deflagration energies generally lead to large amounts of $^{56}$Ni. @ropkeetal07a ran a series of simulations with off-centered ignitions, demonstrating an anti-correlation between deflagration yield and ignition offset. However, their initial offsets do not include ignitions below 50 km, thus neglecting roughly half of the ignitions expected from the results of @nonakaetal11. @hillebrandtetal07 propose that off-centered, lopsided explosions, such as those following the deflagration phase simulated in @ropkeetal07a, might explain overluminous SN Ia events. Recent theoretical work explored the physics of stochastic ignition close to the WD's center in depth using semi-analytic methods, and demonstrated that single-bubble ignitions are generally buoyancy-dominated, leading to a weak deflagration phase [@fisherjumper15]. Consequently, as @fisherjumper15 argued, single-bubble ignitions tend to lead to the production of a relatively large amount of $^{56}$Ni and hence an overluminous SN Ia. This theoretical work was soon given observational support when spectral modeling of the nebular phase of SNe Ia revealed the canonical bright event SN 1991T had an inferred ejecta mass of 1.4 $M_{\odot}$ [@childressetal15]. Most recently, @jiangetal18 have examined the early-phase light curves of 40 SNe Ia in the optical, UV, and NUV, and demonstrated that all six luminous 91T- and 99aa-like events in their sample are associated with an early excess consistent with a $^{56}$Ni-abundant outer layer, as expected in the GCD scenario. Subsequent three-dimensional simulations of a buoyantly-driven single-bubble ignition confirmed a large amount of $^{56}$Ni consistent with SN 1991T [@seitenzahletal16]. However, because the stable IGEs tend to be buoyantly-driven in the GCD model, @seitenzahletal16 found that the observed stable IGEs at low velocities in their model could only be reproduced along a line of sight centered around the detonation region. 
While there are systematic differences in how the LEAFS code used by @seitenzahletal16 treats subgrid-scale turbulent nuclear burning in comparison to FLASH (see e.g. @jordanetal08), it is possible that the bulk of this inconsistency could be rectified by a DDT as opposed to a GCD model. In particular, @fisherjumper15 noted that buoyantly-driven ignitions will lead to a large amount of $^{56}$Ni and an overluminous SN Ia in both the DDT and GCD models. This recent observational and theoretical progress motivates the current study, in which we explore the inherent stochasticity of near-Chandrasekhar-mass white dwarfs in the single-degenerate channel, from ignition through detonation. In Section \[sec:methodology\], we briefly summarize the simulation setup and the assumed initial hot spot distribution. In Section \[sec:results\], we describe the WD's evolution from ignition to its possible detonation, depending on the ignition's offset from the center of mass, and link our findings to the initial hot spot distribution. In Section \[sec:discussion\], we discuss possible uncertainties in our modeling before summarizing our findings in Section \[sec:conclusions\].

Methodology {#sec:methodology}
===========

Our simulations were performed with the 3D Eulerian adaptive mesh refinement (AMR) code FLASH 4.3 [@Fryxell_2000], solving the hydrodynamic equations with the directionally split piecewise-parabolic method (PPM). We use a tabular Helmholtz equation of state taking into account radiation, nuclei, electrons, positrons, and corrections for Coulomb effects, which remains valid in the electron-degenerate relativistic regime [@Timmes_2000]. Flame physics is modeled by an advection-diffusion-reaction equation. Nuclear energy generation is incorporated using a simplified treatment of the flame energetics [@townsleyetal07; @townsleyetal09; @townsleyetal16]. Self-gravity is accounted for by a multipole solver [@Couch_2013] up to order $l=6$ with isolated boundary conditions. 
The white dwarf progenitor model assumes a mass of $1.38\ M_{\odot}$ and a uniform 50/50 carbon/oxygen (C/O) composition. See Section \[sec:discussion\] for a discussion of the impact of non-zero stellar progenitor metallicity. The white dwarf has a central temperature stratification, including an adiabatic core with central density $2.2 \times 10^9$ g cm$^{-3}$ and temperature $7 \times 10^8$ K, pressure-matched onto an isothermal envelope with temperature $10^7$ K [@jacksonetal10; @kruegeretal12]. Furthermore, the central density of our WD progenitor is a standard value commonly considered in the literature, because higher-central-density WD progenitors produce anomalously high abundances of neutron-rich isotopes, including $^{48}$Ca, $^{54}$Cr, and $^{66}$Zn [@meyeretal96; @woosley97; @nomotoetal97; @brachwitzetal00; @daveetal17; @morietal18]. A very low density region surrounding the white dwarf, sometimes referred to in the literature as “fluff,” is required by Eulerian grid-based simulations, which cannot treat empty space without some matter density. The fluff is chosen to have an initial density of $10^{-3}$ g cm$^{-3}$ and temperature of $3 \times 10^7$ K, and is dynamically unimportant for the duration of the models presented here. Since the deflagration energy release and the nucleosynthetic yield of $^{56}$Ni hinge critically on the bubble initial conditions, we investigate the earliest phases of the bubble evolution in Section \[sec:earlytime\]. The turbulent cascade behaves fundamentally differently in 2D and 3D, and influences our choices in determining the spatial dimensionality of the simulations presented here. In particular, in 2D, the turbulent cascade is inverse, proceeding from smaller to larger scales [@kraichnan67]. In contrast, in 3D, the turbulent cascade proceeds directly, from larger to smaller scales, with the energy dissipated at the smallest scales by viscosity. 
In the early-time simulations, the bubble remains laminar, and can be simulated in 2D. The fundamental distinctions between turbulence in 2D and 3D have major ramifications for studying longer timescales, on which the flame becomes fully turbulent, since physically-motivated flame-turbulence interaction subgrid models can [*only*]{} be realized in 3D. Consequently, all longer-time simulations, in which the bubble enters a turbulent state, have been run in 3D in Cartesian geometry with a turbulence-flame interaction model to capture enhanced burning on subgrid scales. All 3D simulations were performed both with and without the turbulence-flame interaction (TFI) model (described below), thereby spanning the range of possible outcomes for the flame propagation resulting from unresolved turbulence. Test 2D simulations in cylindrical coordinates led to unphysical behavior in the turbulent phase, including spurious surface protuberances burning in the radial direction and thus significantly altering the simulation outcomes in comparison to 3D models. Artificial outcomes were particularly significant for runs with ignition points close to the center of mass of the white dwarf, where unphysical burning in the radial direction has the largest impact. Our Cartesian domain extends from $-6.5536\times 10^{5}$ km to $+6.5536\times 10^{5}$ km in each direction, with maximal refinement down to $\Delta=4$ km. We employ several refinement criteria, which are designed to follow the nuclear burning of the models at high resolution, while also minimizing the resolution in the very low density regions outside the white dwarf itself. Our simulations seek to maintain the highest resolution in the burning region behind the flame surface, and employ a standard density-gradient criterion that refines when the density gradient parameter exceeds $0.1$ and derefines when it falls below $0.0375$. 
Further refinement criteria seek to derefine in the fluff and in regions outside of active burning, derefining one level if the energy generation rate is lower than $5 \times 10^{17}$ erg g$^{-1}$ s$^{-1}$, and completely to level one if the density is below $10^3$ g cm$^{-3}$. Except for their resolution and threshold, these criteria are the same as in @townsleyetal09. Furthermore, because the ejected ash continues to expand over time, the computational cost of following the ejected ash grows without bound. Consequently, we impose an additional derefinement outside a radius of $4000$ km to $\Delta=128$ km, which only impacts the ejected mass. We increased this derefinement radius to $6000$ km for offsets $r_0\lesssim 20$ km, where the pre-expansion can reach similar radii. The single flame bubble's initial size is limited by the hydrodynamic resolution of our simulations. At a resolution of $\Delta=4$ km for the flame front, we assume an initial spherical shape with radius $R_0=16$ km. In order for this to be a reasonable assumption, a self-consistent evolution since the appearance of the hot spot should yield a negligible velocity profile and a self-similar evolution preserving sphericality, as discussed in @VladimirovaModelflamesBoussinesq2007. The consistency of assuming a spherical ignition point can be assessed with simple physical arguments. The flame polishing scale $\lambda_\mathrm{fp}=4\pi S_l^2/(Ag)$, below which perturbations on the flame surface are polished out [@timmeswoosley92], implies that even if the initial ignition were non-spherical, it would become spherical soon afterwards. An explicit numerical test confirming this was performed by [@maloneetal14]. Later perturbations to the sphericality can arise from a turbulent background flow and the buoyant rise. The background flow is small compared to the laminar flame speed of $\sim 100$ km/s, so sphericality should initially be sustained. 
The impact of the buoyant rise on sphericality is closely linked to the question of a negligible initial velocity field, whose amplitude increases as the bubble starts to rise. We assume the velocity field becomes relevant when the velocity acquired from the gravitational acceleration $g$ reaches the order of the laminar flame speed; the corresponding length scale is the stretching scale, approximately $l_\mathrm{fl}=2S_l^2/(Ag)$ [@maloneetal14], which primarily depends on the offset near the white dwarf's center. Here, $A$ is the Atwood number of the fuel and ash densities. Note that this criterion is stricter by a factor of $2\pi$ than the criterion for sphericality due to perturbations: the flame bubble is expected to stretch radially before wrinkles in the flame front cease to be polished out. As an alternative to the above estimator for $l_{\rm fl}$, we integrate the flame's evolution based on [@fisherjumper15] to determine when the bubble's velocity reaches the laminar flame speed, as shown in Table \[tab:2Druns\]. Only small radial bubble offsets $r_0$ from the center, which we are particularly interested in, fulfill this condition. The stretching scale varies as $r_0^{-1}$, so the condition allows large bubbles at low offsets. Since @fisherjumper15 argue that there is a critical offset at which the deflagration will burn through the core, vastly changing the overall deflagration yield and thus the possible detonation, a completely self-consistent evolution would be desirable at these offsets. However, we are effectively limited by the required computational resources and resolution. As an alternative to such a self-consistent treatment, we evolve 2D models for the linear phase, where deviations from 3D outcomes should be negligible and which we can resolve sufficiently well. 
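For orientation, the two length scales above can be evaluated numerically. The sketch below is not the paper's actual calculation: the laminar flame speed, the Atwood number, and the uniform-density approximation for the local gravity are all assumed values chosen for illustration.

```python
import math

G = 6.674e-8    # gravitational constant, cgs
RHO_C = 2.2e9   # central density, g cm^-3 (from the progenitor model)
S_L = 1.0e7     # assumed laminar flame speed, cm s^-1 (~100 km/s)
ATWOOD = 0.14   # assumed Atwood number of fuel/ash (hypothetical value)

def gravity(r_cm):
    """Local gravity near the WD center, approximating uniform density."""
    return (4.0 * math.pi / 3.0) * G * RHO_C * r_cm

def flame_polishing_scale(r_cm):
    """lambda_fp = 4*pi*S_l^2/(A*g): perturbations below this are polished out."""
    return 4.0 * math.pi * S_L**2 / (ATWOOD * gravity(r_cm))

def stretching_scale(r_cm):
    """l_fl = 2*S_l^2/(A*g): scale at which buoyant stretching sets in."""
    return 2.0 * S_L**2 / (ATWOOD * gravity(r_cm))

# Both scales fall off as 1/r0, so small offsets admit larger bubbles.
for r0_km in (4, 10, 20, 50):
    r0_cm = r0_km * 1.0e5
    print(r0_km, flame_polishing_scale(r0_cm) / 1e5, stretching_scale(r0_cm) / 1e5)
```

Note that the analytic ratio $\lambda_{\rm fp}/l_{\rm fl}$ is fixed at $2\pi$, the strictness factor mentioned above; the $l_{\rm fl}$ values in Table \[tab:2Druns\] instead come from integrating the flame evolution.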
We incorporate the turbulence-flame interaction model presented in @jackson_power-law_2014, implementing a specific model of power-law wrinkling based on that proposed by @charlette_power-law_2002. The reaction front is modeled by a reaction-diffusion front which propagates with a speed based upon the estimated physical features of the wrinkled physical flame, whose width is, for most of the interior of the WD, many orders of magnitude smaller than the computational grid scale. Due to the interaction of turbulence with the flame, the location of the reaction front, as coarsened to a filter scale $\Delta$ consisting of a few grid cells, is approximated to propagate at a turbulent flame speed $s_t = \Xi s_l$, where $s_l$ is the physical laminar flame speed, and $\Xi$ is called the wrinkling factor. The wrinkling factor is given by $$\Xi = \left(1+\frac{\Delta}{\eta_c}\right)^{1/3},$$ where $\eta_c$ is the cutoff scale for wrinkling, and is dependent upon local properties of both the turbulence and the physical flame. In this model, $\eta_c$ is the inverse of the mean curvature of the flame surface, and is determined by assuming equilibrium between subgrid flame surface creation due to wrinkling by the turbulence and flame surface destruction by flame surface propagation and diffusion. This turbulence-flame interaction model leads to a turbulent flame speed $s_t$ that is approximately equal to the characteristic speed of turbulent fluctuations on the filter scale, $u'_\Delta$, at intermediate densities, $10^8$-$10^9$ g cm$^{-3}$, as can be seen in Figure 4 of @jackson_power-law_2014. The turbulent flame speed falls off to the laminar flame speed at lower densities, where the flame is too thick and slow to support wrinkling. At high densities, where the flame is effectively polished by the high laminar speeds, the turbulent flame speed also approaches the laminar flame speed. 
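The wrinkling closure above is simple enough to sketch directly. In the sketch below, the cutoff scale $\eta_c$ is a free input, whereas in the actual model it is computed from the local turbulence and flame properties:

```python
def wrinkling_factor(filter_scale, eta_c):
    """Xi = (1 + Delta/eta_c)^(1/3): subgrid enhancement of the flame surface."""
    return (1.0 + filter_scale / eta_c) ** (1.0 / 3.0)

def turbulent_flame_speed(s_laminar, filter_scale, eta_c):
    """s_t = Xi * s_l; reduces to s_l when eta_c >> Delta (unwrinkled flame)."""
    return wrinkling_factor(filter_scale, eta_c) * s_laminar
```

In both the low-density limit (a thick, slow flame) and the high-density limit (strong polishing), $\eta_c$ grows relative to $\Delta$, so $\Xi \to 1$ and $s_t \to s_l$, matching the limiting behavior described above.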
Performing the calculation of the cutoff scale for wrinkling, $\eta_c$, requires a measurement of the turbulence on the filter scale, $u'_\Delta$, and makes the physical assumption that the subgrid turbulence is homogeneous, isotropic, and follows Kolmogorov's theory on the filter scale. As shown by @zingaleetal05, buoyancy-driven turbulence becomes increasingly homogeneous and isotropic on small scales, implying the last assumption is valid provided the filter scale is sufficiently small. ![Radial hot spot distribution and fit function for the raw data used in [@nonakaetal11].[]{data-label="fig:hotspotPDF"}](hotspotPDF.pdf){width="1.0\columnwidth"} From @nonakaetal11 we obtain the probability density $P(r)$ of hot spots forming at a certain distance from the center of mass. Using the raw data and the same methodology, we create a histogram of this distribution and fit it; see Figure \[fig:hotspotPDF\]. Shown is the probability density function per unit length. Under the assumption that the probability density per volume changes only mildly near the center of mass, this implies a $P(r)dr\propto r^2dr$ scaling at low offsets, due to the shrinking volume available for hot spots to occur. While the fit does not exactly satisfy this scaling, we obtain a reasonable fit using a $\beta$-distribution, with an expectation value of $\langle r_0\rangle=48$ km and a probability of 2.2% for hot spots forming at $r_0<16$ km, the critical ignition radius determined by @fisherjumper15. We simulate the outcomes of varying ignition offsets for the given progenitor according to this probability distribution, choosing a representative range of initial offsets with an initial bubble radius of $R_0=16$ km, shown in Table \[tab:3Druns\]. We utilize both 2D and 3D simulations. The size of the initial bubble is naturally limited by the simulation resolution. 
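A scaled $\beta$-distribution of this kind is easy to evaluate numerically. The sketch below uses hypothetical shape parameters (chosen only so the mean lands near 48 km; the fit parameters themselves are not quoted here), so it will not reproduce the 2.2% figure exactly:

```python
import math

# Hypothetical beta-distribution parameters on [0, L_SCALE] km; the paper's
# actual fit parameters are not quoted in the text.
A_SHAPE, B_SHAPE, L_SCALE = 3.0, 9.5, 200.0

def beta_pdf(r_km):
    """PDF of a beta(a, b) distribution scaled to offsets r in [0, L_SCALE] km."""
    if not 0.0 < r_km < L_SCALE:
        return 0.0
    x = r_km / L_SCALE
    norm = math.gamma(A_SHAPE + B_SHAPE) / (math.gamma(A_SHAPE) * math.gamma(B_SHAPE))
    return norm * x ** (A_SHAPE - 1.0) * (1.0 - x) ** (B_SHAPE - 1.0) / L_SCALE

def prob_below(r_km, n=10000):
    """P(r0 < r_km) by trapezoidal quadrature of the PDF."""
    h = r_km / n
    return h * sum(0.5 * (beta_pdf(i * h) + beta_pdf((i + 1) * h)) for i in range(n))

mean = A_SHAPE / (A_SHAPE + B_SHAPE) * L_SCALE  # = 48 km for these parameters
p16 = prob_below(16.0)                          # P(r0 < 16 km) for this sketch
```

Note that $a=3$ corresponds exactly to the $P(r)\,dr \propto r^2\,dr$ small-offset scaling discussed above; the actual fit deviates slightly from this.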
As demonstrated analytically in [@fisherjumper15], the flame bubble's dynamics change vastly at low initial offsets. We employ 2D simulations to investigate the initial stages of the bubble dynamics at very high resolution. Moving to 2D is a reasonable strategy in this case, as we are only interested in the initial, linear phase. In 2D, we employ a maximal resolution of $0.25$ km and an initial bubble radius of $2$ km, at initial offsets ranging from $0-50$ km, as listed in Table \[tab:2Druns\]. The 3D simulations are evolved until they can undergo a detonation as a GCD. The precise conditions under which a DDT may arise are still a matter of active investigation, though recent three-dimensional simulations may shed further light on this issue [@poludnenkoetal11; @fisheretal18]. We adopt conservative criteria for detonation initiation based upon studies of the Zel'dovich gradient mechanism [@Seitenzahl_2009], which demonstrate that the critical length scale above temperatures of $\simeq 2 \times 10^9$ K at a density of $10^7$ g cm$^{-3}$ becomes of order 1 km; when these conditions are met, a detonation is deemed likely. Further, in this paper, we evolve all 3D models within the context of the GCD scenario. As discussed in the introduction, current [*ab initio*]{} calculations point towards offset single-point ignitions, which favor both the GCD and DDT scenarios over pure deflagrations. Because the GCD model involves further evolution post-bubble breakout, it generally predicts a greater deflagration energy release than the DDT model, for an otherwise identical WD progenitor and flame bubble ignition model. Consequently, consideration of the GCD model yields a lower limit for the mass of $^{56}$Ni, due to a lower central density $\rho_c$ at the time of detonation in comparison to DDT models. 
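The conservative detonation trigger described above can be expressed as a simple predicate. The sketch below hard-codes the thresholds quoted in the text and is only an illustration, not the simulations' actual detonation module; in particular, the $\sim$1 km critical length strictly applies at the fiducial density:

```python
# Thresholds quoted in the text; the ~1 km critical length applies at the
# fiducial density, so this predicate is only a rough illustration.
T_CRIT = 2.0e9     # K
RHO_FID = 1.0e7    # g cm^-3
L_CRIT_KM = 1.0    # critical length scale, km

def detonation_likely(temperature_K, density_cgs, hot_region_size_km):
    """Deem a detonation likely when fuel at or above the fiducial density
    sustains temperatures above T_CRIT over a region of order L_CRIT_KM."""
    return (temperature_K >= T_CRIT
            and density_cgs >= RHO_FID
            and hot_region_size_km >= L_CRIT_KM)
```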
Artificial detonations can occur due to temperature oscillations arising as numerical artifacts from the degenerate stellar equation of state coupled with hydrodynamics close to discontinuities [@ZingalePiecewiseParabolicMethod2015]. These oscillations are particularly striking during the flame bubble's buoyant rise. In order to prevent detonations arising from these artifacts, we restrict detonations to occur in the southern hemisphere ($z<0$ km).

  Offset (km)   $\lambda_{\rm fp}$ (km)   $l_{\rm fl}$ (km)
  ------------- ------------------------- -------------------
  4             $353.6$                   $40.9$
  10            $141.4$                   $27.2$
  20            $70.7$                    $17.6$
  50            $28.3$                    $8.2$

  : \[tab:2Druns\] Performed 2D runs with a maximal resolution of $\Delta=0.25$ km and an initial bubble radius of $2$ km.

  Offset (km)   $t_{\rm det}$ (s)                   M$_{\rm Ni56}$ (M$_\odot$)
  ------------- ----------------------------------- ----------------------------
  0             failed$^{\rm a}$/failed$^{\rm a}$   $0.56$/$0.35$
  16            $3.70$/$2.92$                       $1.08$/$1.05$
  20            $3.34$/$3.27$                       $1.12$/$1.06$
  32            $2.61$/$2.54$                       $1.14$/$1.14$
  40            $3.06$/$2.42$                       $1.09$/$1.13$
  50            $2.48$/$2.31$                       $1.14$/$1.20$
  100           $2.26$/$2.12$                       $1.21$/$1.20$
  125           $2.21$/$2.08$                       $1.22$/$1.20$

  : \[tab:3Druns\] Performed 3D runs with a maximal resolution of $\Delta=4$ km. Values are given as with TFI/without TFI. $^{\rm a}$Model fails to detonate.

Results {#sec:results}
=======

We first discuss the early phase of bubble evolution. The early linear phase of evolution is similar in both 2D and 3D, so we investigate the early linear evolution in high-resolution 2D models and compare these against semi-analytic predictions. We then move on to examine the subsequent nonlinear evolution through breakout and detonation in full 3D, which we also evolve starting with the linear phase, but at lower resolution.

Early Linear Evolution in 2D {#sec:earlytime}
----------------------------

![Deflagration phase during the first $0.6$ s for a 2D model with $10$ km offset, $2$ km initial radius, and a resolution of $0.25$ km. The dashed line shows $z=0$ km. 
On the coordinate axis, $r$ denotes $\sqrt{x^2+y^2}$.[]{data-label="fig:slices_linphase"}](slices_linphase.pdf){width="1.0\columnwidth"} ![Bubble evolutionary tracks shown for both 2D hydrodynamic simulations and the analytic solution in [@fisherjumper15]. The plot shows the bubble radius $R$ versus offset radius $r$. The evolution for different initial offsets $r_0$ is shown as solid curves for the simulations and as dashed curves for the analytic solution. The dots represent time steps of $0.1$ s, starting at $0.0$ s. States above the dotted line have burned through the white dwarf's center of mass.[]{data-label="fig:fj15_compare"}](fj15_compare.pdf){width="1.0\columnwidth"} ![Position of the southern flame front $r_s$ as a function of time $t$ after ignition. Line styles and colors are chosen as in Figure \[fig:fj15\_compare\].[]{data-label="fig:fj15_compare_southernfront"}](fj15_compare_southernfront.pdf){width="1.0\columnwidth"} Figure \[fig:slices\_linphase\] shows slices of the flame bubble's evolution in its laminar phase for $r_0=10$ km. The burned material grows spherically as long as the buoyant velocity is small and the bubble's size stays below the flame polishing scale. As the bubble grows, the accelerations of material at the northern and southern flame fronts start to differ, and the bubble becomes elongated along the direction of the initial offset until a plume forms at the northern front. Interestingly, the southern flame front's laminar speed appears to be countered by the background flow from the buoyant rise at the northern front. In Figure \[fig:fj15\_compare\], we show the evolution of the flame bubble's radius $R(r)$ as a function of the bubble offset $r(t)$. For the simulation data, the volume-equivalent spherical radius ($R=\sqrt[3]{3V/4\pi}$) deduced from the burned volume $V$ is shown. 
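The volume-equivalent radius used for the simulation curves follows directly from the burned volume; as a minimal sketch:

```python
import math

def volume_equivalent_radius(burned_volume):
    """R = (3V / (4*pi))^(1/3): radius of the sphere whose volume equals the
    burned volume V."""
    return (3.0 * burned_volume / (4.0 * math.pi)) ** (1.0 / 3.0)
```

For a perfectly spherical bubble this recovers the actual radius; once the bubble elongates, $R$ is a single-number summary of the burned volume.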
We compare our results from the initial phase of linear growth with the analytic model presented in [@fisherjumper15] and find them to be in good agreement for the first few tenths of a second, particularly for larger initial offsets. The analytic description starts to fail as the velocity from the buoyant rise becomes inhomogeneous across the bubble, effectively stretching the bubble due to a lower speed at the southern flame front. Figure \[fig:fj15\_compare\_southernfront\] shows the position of the southern flame front for the analytic and numerical evolution, which start to differ as the analytic solution does not incorporate an inhomogeneous velocity/acceleration field. The resulting elongation gives rise to a stem left behind the rising plume at the northern front. Even when the southern flame front crosses the center of mass, it will not buoyantly rise towards the opposite pole, but is confined close to the center of mass on the relevant time-scale due to the background flow caused by the buoyant rise of the ash in the northern hemisphere.

Non-linear Evolution and Detonation
-----------------------------------

With the formation of a rising plume, the evolution becomes non-linear and depends on the imposed flame model as presented in Section \[sec:methodology\]. To capture the flame's turbulent rise, we evolve 3D models from ignition to detonation for the parameters listed in Table \[tab:3Druns\]. As we find the 3D runs to remain mostly symmetric, we show slices in the $z$-$x$ plane restricted to $x\geq 0$ km and $y=0$ km. Figures \[fig:slices\_TFI\_r100\] and \[fig:slices\_TFI\_r20\] show the evolution of the white dwarf for offsets of $100$ km and $20$ km, respectively. Figure \[fig:slices\_noTFI\_r20\] also shows the evolution of a $20$ km offset model, but without the enhanced burning that the prior two models use. 
Because the evolutionary timescales for each run depend on the initial conditions chosen, the slices for each run are chosen with respect to the state of the flame, and not in absolute time. In particular, in each plot, the first frame shows the breakout of the flame at the star's surface. The next frame depicts the post-breakout flame crossing the $z = 0$ equator on the star's surface. The last frame shows the model just prior to detonation across from the point of breakout. While low offsets also correspond to a slightly larger distance to the WD's surface, the evolution of the models shown demonstrates the delayed breakout compared to larger offsets, due to the smaller buoyant force near the center of mass; the breakout time increases by roughly $0.3$ s for $r_0=20$ km over $r_0=100$ km. Smaller initial offsets lead to a slightly increased plume size in both the radial and tangential directions with respect to the center of mass. After breakout, the flame front travels around the white dwarf and eventually reaches the point opposing the point of breakout. Envelope material is pushed ahead of this flame front into the opposing point. At offsets larger than $40$ km, such as the $100$ km run shown, the ram pressure building up suffices to trigger a detonation before the ash reaches the opposing point. For offsets smaller than roughly $40$ km, such as the runs with an initial offset of $20$ km, the ram pressure is insufficient to trigger a detonation upon the flame reaching the opposite pole, and a detonation occurs only after a subsequent partial recontraction of the white dwarf, which delays the detonation. For some offsets lower than the $20$ km shown, the white dwarf might not detonate upon recontraction either. The effect of the enhanced burning model appears only moderate for the slices shown at offset $r_0=20$ km. 
The evolutionary phases represented by the slices coincide between flame models, although without enhanced burning a larger fraction of the material ejected from the white dwarf appears to be burned.

![Time series for the run with enhanced burning at an offset of $100$ km in three stages: breakout, equator crossing, and prior to detonation. Slices are shown in the positive quadrant of the x-z plane with $y=0$ km and $r=\sqrt{x^2+y^2}$. The colormap indicates the amount of burned material $\phi_{fa}$. The solid line shows the density contour for $\rho=10^7$ g cm$^{-3}$.[]{data-label="fig:slices_TFI_r100"}](slicecomp_3DTFI_r100.pdf){width="1.0\columnwidth"}

![Time series for the run with enhanced burning at an offset of $20$ km, analogous to Figure \[fig:slices\_TFI\_r100\].[]{data-label="fig:slices_TFI_r20"}](slicecomp_3DTFI_r20.pdf){width="1.0\columnwidth"}

![Time series for the run without enhanced burning at an offset of $20$ km, analogous to Figure \[fig:slices\_TFI\_r100\].[]{data-label="fig:slices_noTFI_r20"}](slicecomp_3DnoTFI_r20.pdf){width="1.0\columnwidth"}

Figure \[fig:Ni56\_t3Dcombo\] shows the estimated $^{56}$Ni yield over time relative to the time of ignition. The $^{56}$Ni yield in each model is obtained from the electron mass fraction $Y_e$, assuming that the neutronization of IGE proceeds in equal parts by mass of $^{54}$Fe and $^{58}$Ni for all $Y_e$, which holds to within 2% in tabulated yields from previous models [@meakinetal09; @townsleyetal09]. The rapid increase of M$_{\rm Ni56}$ at $t\gtrsim 2.0$ s indicates the onset of detonation, except for offset $r_0=0$ km, where strong growth sets in at $t=1.0$ s due to turbulent deflagration. We classify the progenitor's evolution into three different classes: failed, GCD, and PGCD. The PGCD scenario introduced by [@jordanetal12a] shows a strong recontraction phase ($t_{\rm det,PGCD}\gtrsim 2\, t_{\rm det,GCD}$) due to a significantly increased deflagration yield from a many-bubble ignition setup.
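The $Y_e$-based estimate of the $^{56}$Ni mass described above can be sketched in a few lines. This is a minimal illustration rather than the production analysis: the per-cell IGE masses and $Y_e$ values below are mock data, and only the equal-parts-by-mass $^{54}$Fe/$^{58}$Ni assumption is taken from the text.

```python
import numpy as np

# Y_e of an equal-parts-by-mass 54Fe/58Ni mix (Z/A = 26/54 and 28/58)
YE_NEUT = 0.5 * (26.0 / 54.0 + 28.0 / 58.0)

def ni56_mass_fraction(ye):
    """Fraction of the IGE mass in 56Ni, assuming the neutronized material
    is an equal-parts-by-mass 54Fe/58Ni mix and the remainder is 56Ni
    (which has Y_e = 28/56 = 0.5)."""
    x_neut = (0.5 - np.asarray(ye)) / (0.5 - YE_NEUT)
    return np.clip(1.0 - x_neut, 0.0, 1.0)

# Mock per-cell IGE masses (g) and electron fractions (hypothetical values):
m_ige = np.array([1.0e31, 2.0e31, 5.0e30])
ye = np.array([0.500, 0.495, 0.485])
m_ni56 = float(np.sum(m_ige * ni56_mass_fraction(ye)))
```

The linear interpolation in $Y_e$ between the pure-$^{56}$Ni value ($0.5$) and the $^{54}$Fe/$^{58}$Ni mix is what makes the 2% accuracy claim testable against tabulated yields.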
Given the smaller deflagration yields in our simulations due to the single ignition hot spot, PGCDs here show only a mild recontraction (as indicated, e.g., by the evolution of the central density), making the transition between PGCDs and GCDs gradual: sometimes there is no detonation upon buildup of ram pressure, but only when the fuel-ash mixture reaches the southern pole, even if no clear recontraction is present. We therefore do not impose a binary criterion between these scenarios. As Figure \[fig:Ni56\_R\] and Table \[tab:3Druns\] show, the $^{56}$Ni yield appears to converge towards roughly $1.21$ M$_\odot$ at large offsets $r_0\gtrsim 100$ km, with the progenitors undergoing the GCD scenario. As anticipated by [@fisherjumper15], the $^{56}$Ni yield decreases with lower initial offset, as visualized in Figure \[fig:Ni56\_R\]. For runs with offsets $\lesssim 40$ km, a transition towards PGCD-like scenarios takes place. The transition is not monotonic but has a stochastic component, depending on whether the ram pressure from the initial deflagration suffices for an imminent detonation. For example, in our simulations a $32$ km offset suffices for a GCD, despite the onset of a PGCD already at $r_0=40$ km. At some point we expect the PGCD scenario to fail as well; however, we lack the spatial resolution at very small radial offsets to determine the location of this transition. Thus, we are left with the artificial case of central ignition, for which the $^{56}$Ni yield from deflagration increases to $0.56$ M$_\odot$/$0.35$ M$_\odot$ (TFI/no TFI), depending on the flame model. However, there is no detonation, so the total yield drops to these values compared with higher initial offsets. Due to the numerical expense of resolving the energy-generating regions at maximal resolution with AMR, we had to derefine to a maximum resolution of $\Delta=8$ km at $t=2.00/1.82$ s for the no TFI/TFI scenarios with $r_0=0$ km.
Based on our parameter sampling, the transition from PGCD to failed must occur at $0\textrm{ km}<r_0<16\textrm{ km}$ for the chosen progenitor. At large offsets, the yields with and without the enhanced burning model differ only marginally. However, at lower offsets the enhanced burning adds significantly to the $^{56}$Ni yield, particularly for the failed events, where differences reach up to $60\%$, and for PGCD events, where they are of order up to $10\%$.

Likelihood of $^{56}$Ni Yields
------------------------------

We next compute the probability distribution of M$_{\rm Ni56}$ outcomes for the presented GCD SD channel models. The transformation from the hot spot probability distribution $P(r_0)$ to the probability distribution $P(M_{\rm Ni56})$ of $^{56}$Ni outcomes is given by $$\label{eq:transformation} P (M_{\rm Ni56}) = \sum_{r_0 \in g^{-1} (M_{\rm Ni56})} \frac{P (r_0)}{\left|g' (r_0)\right|}.$$ Here $g (r_0) \equiv M_{\rm Ni56}(r_0)$ is the amount of $^{56}$Ni produced as a function of offset radius $r_0$, and $P(r_0)$ is the hot spot distribution found in [@zingaleetal11], shown in Figure \[fig:hotspotPDF\]. This relationship may be derived from Bayes' theorem with minimal assumptions. We start with $$\begin{aligned} \label{eq:bayestheorem} P(M_{\rm Ni56}|r_0)P(r_0)=P(r_0|M_{\rm Ni56})P(M_{\rm Ni56}),\end{aligned}$$ and assume, as a simplification, that the $^{56}$Ni yield is solely determined by the offset position, neglecting possible uncertainties from the velocity flow and early bifurcations arising in the turbulent phase. One then finds $$\begin{aligned} P(M_{\rm Ni56}|r_0)=\delta(g(r_0)-M_{\rm Ni56}), \nonumber\\ P(r_0|M_{\rm Ni56})=\delta(r_0-g^{-1}(M_{\rm Ni56})), \nonumber\end{aligned}$$ where $\delta (x)$ is the Dirac delta distribution.
Finally, using the identity $$\begin{aligned} \delta(f(x))=\sum_i\frac{\delta(x-x_i)}{\left|f^{'}(x_i)\right|},\end{aligned}$$ where we sum over the roots $x_i$ of $f(x)$, we can rewrite Equation \[eq:bayestheorem\] as Equation \[eq:transformation\]. Even for a very similar stellar structure, we expect bifurcations arising from the turbulent nature of the flame bubble's buoyant rise to affect the final $^{56}$Ni yield. In our computations, we see a similar phenomenon from slight offset changes and perturbations of the initial flame bubble. Therefore, the yields we obtain need not follow a monotonic relationship, as only one possible realization is drawn at a given offset. Given the numerical expense, we can neither evaluate multiple runs at the same offset with slightly modified stellar structure or flame bubble, nor afford to run more models at different offsets. However, if deemed significant, additional parameters, such as a varied background velocity field, could easily be incorporated by marginalizing over them. With our limited data sample we nevertheless try to gain insight into the resulting spread of $^{56}$Ni yields given the stochastic nature of the initial ignition offset. To do so, we impose a strictly monotonic fit function for M$_{\rm Ni56}$($r_0$) with an asymptotic yield at high offsets $r_0$, for which we use $$\begin{aligned} y(r_0) = \frac{y_{\rm max}+\Delta y}{2}+\frac{y_{\rm max}-\Delta y}{2}\cdot \tanh\left(s\cdot(r_0-r_s) \right),\end{aligned}$$ where $y_{\rm max}$ is the asymptotic yield at high offsets, $\Delta y$ the asymptotic yield of the lower branch (so that $y_{\rm max}-\Delta y$ is the spread between the two branches), $r_s$ the position of the turning point, and $s$ characterizes the sensitivity of the $^{56}$Ni yield to the initial offset $r_0$. We fix the asymptotic yield to the approximate value $y_{\rm max}$ found earlier. The resulting distributions $P(M_{\rm Ni56})$ for our progenitor model with and without enhanced burning are shown in Figure \[fig:P\_Ni56\].
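As a concrete illustration of the change of variables in Equation \[eq:transformation\] combined with the tanh fit above, the sketch below propagates a hot-spot offset distribution through a monotonic $M_{\rm Ni56}(r_0)$ relation. The fit parameters and the Maxwellian-like stand-in for $P(r_0)$ are illustrative choices, not the fitted values of this work.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

# Illustrative fit parameters (units: M_sun, M_sun, 1/km, km) -- stand-ins,
# not the fitted values of this work.
y_max, dy, s, r_s = 1.21, 1.0, 0.05, 30.0

def g(r0):  # monotonic tanh fit for M_Ni56(r0)
    return 0.5 * (y_max + dy) + 0.5 * (y_max - dy) * np.tanh(s * (r0 - r_s))

def g_prime(r0):
    return 0.5 * (y_max - dy) * s / np.cosh(s * (r0 - r_s)) ** 2

def p_r0_raw(r0, r_peak=50.0):  # mock hot-spot offset PDF (Maxwellian-like)
    return r0 ** 2 * np.exp(-((r0 / r_peak) ** 2))

r0 = np.linspace(1.0, 200.0, 200001)
p = p_r0_raw(r0) / trapz(p_r0_raw(r0), r0)      # normalized P(r0)
m = g(r0)                                       # M_Ni56 along the fit
p_m = p / np.abs(g_prime(r0))                   # P(M) = P(r0)/|g'(r0)|

mean_m = trapz(p * m, r0)                       # mean 56Ni yield
std_m = np.sqrt(trapz(p * (m - mean_m) ** 2, r0))
```

Because $g$ is strictly monotonic, the sum over roots in Equation \[eq:transformation\] collapses to a single term, and the Jacobian factor $1/|g'(r_0)|$ alone reshapes $P(r_0)$ into $P(M_{\rm Ni56})$; normalization is preserved by construction.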
We show the probability distributions for events with offsets larger than $16$ km, accounting for $97.8$% of the ignitions. The distribution shows a slightly larger spread for the non-TFI models due to the lower $^{56}$Ni yield at low ignition offsets. Nevertheless, we find that the majority of ignitions result in a very narrow range of $^{56}$Ni yields. For the given hot spot distribution $P(r_0)$ and the outcomes M$_{\rm Ni56}(r_0)$ of the simulated progenitor, we obtain a stochastic spread of outcomes $P(M_{\rm Ni56})$ that strongly favors overluminous events with a $^{56}$Ni yield of $\sim 1.2$ M$_\odot$ and a standard deviation of $\sigma\sim 0.03$ M$_\odot$. However, the total spread in outcomes of detonating models, $\delta=\max(M_{\rm Ni56}(r_0))-\min(M_{\rm Ni56}(r_0))$, due to the stochasticity of where hot spots form, is significantly larger. We are limited by the hydrodynamic resolution, but find that $\delta \gtrsim 0.2$ M$_\odot$. We find $\sigma\ll\delta$ for both the TFI and no TFI models because the $^{56}$Ni yield is already close to the asymptotic value of $1.21$ M$_\odot$ at radii $r_0\sim 50$ km, the most likely point of ignition for the assumed hot spot distribution. For a hot spot distribution peaking closer to the WD's center of mass, the standard deviation could be significantly higher. The exact shape of the left tail of the distribution is highly uncertain, as it depends on the chosen fit function given our sparse sampling. Similarly, we expect modeling uncertainties, e.g. due to the lack of a velocity field, to propagate most severely into the $^{56}$Ni yield and the resulting probability distribution at low offset radii, where the buoyant evolution can be strongly enhanced or delayed. Other stochastic parameters, such as the state of the WD's velocity field, were not considered here but would have to be marginalized over to obtain the probability distribution in a more elaborate study.
Discussion {#sec:discussion}
==========

We have shown how the ignition offset probability distribution directly links to a range of SNe Ia outcomes, parameterized by the $^{56}$Ni yield. This range of SNe Ia outcomes is intrinsically connected to the turbulent convective velocity field in the near-$M_{\rm Ch}$ WD progenitor, which causes the ignition of the SD channel to be inherently stochastic and unpredictable. The physics of the SD channel is complex and subject to numerous modeling uncertainties: the pre-WD stellar evolution, accretion from the companion, possibly impacting the WD initial composition and structure, the physics of the simmering phase leading up to ignition, and the physics of turbulent nuclear burning and detonation. In the following, we discuss this range of modeling uncertainties and to what extent each of these effects may impact our conclusions.

![Nickel-56 yield M$_{\rm Ni56}$ over time $t$ after ignition for the 3D simulations at selected initial offsets $r_0$ with and without TFI.[]{data-label="fig:Ni56_t3Dcombo"}](Ni56_t3Dcombo.pdf){width="1.0\columnwidth"}

![Nickel-56 yields at different offsets in 3D with and without enhanced burning, including the $\tanh$-fit. Shaded regions mark offsets resulting in GCD/PGCD/failed scenarios according to their label. Additionally, hatched regions mark transitions between scenarios due to computational limitations and classification ambiguities (see text).[]{data-label="fig:Ni56_R"}](Ni56_R_labelled.pdf){width="1.0\columnwidth"}

![Probability density function for Nickel-56 yields based on the simulations and the hot spot probability density function.[]{data-label="fig:P_Ni56"}](P_Ni56_fit_agnostic_pres.pdf){width="1.0\columnwidth"}

Another crucial piece of physics underlying both the simmering phase and the nuclear burning within the SNe Ia is the rate of $^{12}$C + $^{12}$C fusion.
Recent experiments using the Trojan horse method have measured this reaction rate for the first time at center-of-mass energies in the range 0.8 - 2.5 MeV, demonstrating an enhancement in the cross sections by as much as a factor of 25 in a key temperature range relevant to SNe Ia [@tuminoetal18]. While this work has been contested by other authors [@mukhamedzhanovetpang18] (and subsequently rebutted – @tuminoetal18b ), and must ultimately await additional confirmation, it is important to recognize the possible impact which uncertainties in this key reaction rate may have upon SNe Ia modeling. A higher reaction rate would increase the flame speed and might particularly change the M$_{\rm Ni56}$ outcomes at low offsets, where the buoyant evolution is most sensitive to changes of our fiducial model. In this work, we have incorporated the statistical distribution of ignition points drawn from actual three-dimensional simulations of the convective simmering phase of near-$M_{\rm Ch}$ WDs leading up to ignition. While this approach has clear advantages over the majority of prior work, which typically adopted ignition points in an arbitrary fashion, it is nonetheless limited by the fact that only one high-quality three-dimensional simmering-phase simulation of a single WD progenitor has been completed to date. That simulation has been performed at increasing resolution, and the distribution of hot spot offsets appears to be converged [@zingaleetal11; @nonakaetal11; @maloneetal14]. However, it is conceivable that the distribution of hot spots could be more centrally condensed in WD progenitors with higher central density. The mechanism underlying the initiation of detonation plays an important role in SNe Ia theory, and much effort has focused upon whether the detonation mechanism in a near-$M_{\rm Ch}$ SD scenario is a DDT or GCD [@ropkeetal07a; @seitenzahletal16; @daveetal17].
For example, because the DDT detonates prior to bubble breakout, the stratification of the $^{56}$Ni and IGEs is generally more centrally condensed, in broader agreement with observations of high-M$_{\rm Ni56}$, overluminous SNe Ia like SN 1991T [@seitenzahletal16]. In the current work, we have focused upon the GCD mechanism in our simulations in inferring the intrinsic variation of the $^{56}$Ni production resulting from stochastic ignition. However, had we instead adopted a DDT criterion for detonation initiation, the $^{56}$Ni distribution would be even more heavily left-skewed. This is because, given identically the same WD progenitor and ignition point, the DDT detonates prior to breakout, and consequently always results in a less pre-expanded WD progenitor than a GCD [@daveetal17]. As a result, the conclusion that the stochastic variance in $^{56}$Ni yields is small, and the mean $^{56}$Ni yield is large, is not qualitatively modified under the DDT scenario. In this work, we have begun with a quiescent WD, although the ignition arises in the WD interior, which is itself convective and, as a consequence of the transport of angular momentum from the accretion stream, may itself be rotating. Indeed, recent work has shown that the effect of rotation may be significant enough to weaken the convergence in the detonation region of a classical GCD [@garciasenzetal16], although a PGCD might still be possible. Furthermore, at low ignition offsets, the magnitude of the initial convective velocity field may have an impact on the early flame bubble's evolution, and thus the $^{56}$Ni yield. As we start our simulations with zero velocity, this adds an additional uncertainty to the resulting M$_{\rm Ni56}$ distribution. On the one hand, there is turbulence on small scales, distorting the flame front early on.
Expected velocities for this are small ($\sim 10$ km/s) compared with the laminar flame speed ($\sim 100$ km/s), so that the flame bubble's sphericity is mostly unaffected until broken by its buoyant rise. Minor shifts in a possible failed-to-detonated transition radius might be expected. On the other hand, the ignition point may occur within a larger-scale convective flow. If such hot spots form in convectively outward-moving regions, as found by [@nonakaetal11], this will further decrease the probability of ignitions that burn through the WD's center. [@maloneetal14] ran a series of numerical simulations similar to our setup for the deflagration phase, but additionally included a comparison between a setup with a self-consistent convective velocity field and one without any velocity field. In these simulations, the authors find the influence of the initial flow field to increase as the initial ignition point is placed closer to the center of mass, particularly for an exactly centered ignition. Stellar composition influences the final nucleosynthetic yield of an SD SN Ia through a variety of effects. The CNO metals of the WD stellar progenitor ultimately yield $^{22}$Ne during He burning. @umedaetal99 suggested that a variation in the carbon abundance within the progenitor WD in the single-degenerate channel would impact the production of $^{56}$Ni. In particular, @umedaetal99 conjectured that WDs with a richer C/O ratio would lead to a more turbulent flame, an earlier transition from deflagration to detonation at higher densities, and hence a greater production of $^{56}$Ni. @timmesetal03 demonstrated both analytically and numerically that the neutron excess carried by $^{22}$Ne results in a decrease in the M$_{\rm Ni56}$ of the SN Ia event, in direct proportion to the abundance of $^{22}$Ne.
@townsleyetal09 further considered a range of additional compositional effects influencing the final nucleosynthetic yields, including the ignition density, the energy release, the flame speed, the WD structure, and the density at which a possible deflagration-to-detonation transition arises. Their simulations with $^{22}$Ne mass fractions increasing from 0 to 0.02, which were run long enough to determine a final $^{56}$Ni yield, demonstrate that the combination of these effects results in a roughly 10% decrease in M$_{\rm Ni56}$. Similarly, we expect a slight decrease in $M_{\rm Ni56}$ based on complementary work by @jacksonetal10 investigating the impact of the $^{22}$Ne content on the DDT density and the resulting $^{56}$Ni mass. Computational simulations of single-degenerate SNe Ia have subsequently explored the influence of varying the C/O ratio within the progenitor WD in the context of the DDT model [@kruegeretal10; @ohlmannetal14]. These investigations have demonstrated that higher C/O ratios yield more energetic and more luminous SNe Ia. Taken together, this body of work on SD SNe Ia generally supports the view that the stellar progenitor C/O ratio and metallicity play a role in determining the brightness of an SN Ia event. However, at the same time, these models have demonstrated that additional free parameters, including both the number and distribution of ignition points, as well as the DDT transition density, have a combined effect on the explosion energy comparable to that of the C/O ratio and stellar progenitor metallicity. Moreover, based upon this body of work, the combined influence of both a decrease in the C/O ratio and an increase in the stellar progenitor metallicity from the values assumed here (50/50 and 0, respectively) would result in a 10% - 20% decrease in the $^{56}$Ni yields, which would quantitatively impact our predicted M$_{\rm Ni56}$ distribution but not alone yield a distribution more closely resembling normal SNe Ia.
Most simulation models of near-$M_{\rm Ch}$ WDs adopt a central density $\rho_c \simeq 2 \times 10^9$ g cm$^{-3}$, as we have in this paper. Because the electron capture rates are highly sensitive to the density, higher-central-density WDs generally produce greater amounts of stable IGE and a lower $^{56}$Ni yield. Higher-central-density WDs significantly overproduce (relative to solar) a range of neutron-rich isotopes, including $^{50}$Ti, $^{54}$Cr, $^{58}$Fe, and $^{62}$Ni, and as a consequence were generally excluded from consideration as near-$M_{\rm Ch}$ WD progenitors [@meyeretal96; @nomotoetal97; @woosley97; @brachwitzetal00]. However, if SD near-$M_{\rm Ch}$ WDs constitute a small fraction of all SNe Ia, such high-central-density WDs may not be rare occurrences. If the central density of the near-$M_{\rm Ch}$ WD is indeed higher than $\rho_c \simeq 2 \times 10^9$ g cm$^{-3}$, then the flame speed and the consequent deflagration energy release can be greater than considered here. This can in turn lead to greater pre-expansion and a reduced amount of $^{56}$Ni, as shown in 2D simulations [@kruegeretal12; @daveetal17], possibly consistent with a normal or even a failed SN Ia. However, the qualitative outcome of an increased central density can vary, as shown by @SeitenzahlTypeIasupernova2011. In their 3D numerical study of the DDT scenario, the authors show that the central density is only a secondary parameter. However, their study assumed multipoint ignition over a wide range of ignition kernels. When single-point ignitions are adopted, increased electron capture rates at higher central densities lead to higher abundances of neutron-rich iron-peak elements at the expense of $^{56}$Ni [@daveetal17].
Conclusions {#sec:conclusions}
===========

In this paper, we investigated the impact of the initial offset $r_0$ of a single ignition point of a deflagration flame bubble in a fiducial 50/50 C/O WD with a central density of $2.2\times 10^{9}$ g cm$^{-3}$ and an adiabatic temperature profile, leading up to Type Ia supernovae in the GCD scenario. We showed that a transition to failed SNe Ia (i.e. events lacking a GCD) occurs as $r_0$ falls below some offset smaller than $16$ km. Even for those white dwarfs that do detonate, the $^{56}$Ni yield spans a range of outcomes varying by $10$-$20$%, with the yield decreasing as $r_0$ approaches the radius below which no detonation is triggered. Summarizing our key conclusions:

1. Stochastic range of outcomes. For the chosen progenitor this corresponds to a spread of $\delta\gtrsim0.2$ M$_\odot$ for detonating models, even though the M$_{\rm Ni56}$ distribution is strongly left-skewed, so that low M$_{\rm Ni56}$ values are unlikely for the given probability distribution. This range of outcomes is stochastic and will add to other variations arising from the different progenitors' stellar structure and evolution.

2. For non-centered ignitions, all ignitions lead to an overluminous SN Ia. We do not find a viable scenario in which a single-bubble ignition leads to a normal Type Ia for the progenitor used here, which is also commonly referenced in the literature. This disfavors single-degenerate progenitors as a contributing channel to failed and normal Type Ia SNe. If this channel were to contribute to failed and normal Type Ia supernovae, it would require a readjustment and better understanding of the stellar structure and evolution, and of the flame dynamics.

3.
(Quasi-)symmetric deflagrations around the center of mass, as commonly used in numerical studies, are most likely artificial constructs: ignitions very close to the center are rare, as shown by @nonakaetal11, and even if such events occur, a strong asymmetry develops, as a background flow in the direction of the outermost flame front counteracts burning in other directions, as numerically demonstrated here for offsets as small as 4 km. However, future in-depth studies of the likelihood of multi-ignition occurrences and their correlation with the turbulent velocity field might leave room for rare occurrences of symmetric deflagrations.

[**Acknowledgements**]{} The authors thank Mike Zingale for generously providing the simulation data from his group's convective simmering-phase models, which was used in Figure \[fig:hotspotPDF\]. The authors thank Pranav Dave and Rahul Kashyap for insightful conversations. RTF also thanks the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics for visiting support during which a portion of this work was undertaken. CB acknowledges support from the Deutscher Akademischer Austauschdienst (DAAD). RTF acknowledges support from NASA ATP award 80NSSC18K1013. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) Stampede 2 supercomputer at the University of Texas at Austin's Texas Advanced Computing Center through allocation TG-AST100038. XSEDE is supported by National Science Foundation grant number ACI-1548562 [@townsetal14]. We use a modified version of the FLASH code 4.3 [@Fryxell_2000], which was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago, for our simulations, including the SN Ia modeling presented in @townsleyetal16. The authors have made use of Frank Timmes' hot white dwarf progenitor ().
For our analysis, we acknowledge use of the Python programming language [@VanRossum1991] and the Numpy [@VanDerWalt2011], IPython [@Perez2007], and Matplotlib [@Hunter2007] packages. Our analysis and plots strongly benefited from the use of the yt package [@ytproject].
--- abstract: 'We analyze the validity of perturbative renormalization group estimates obtained within the fixed dimension approach of frustrated magnets. We reconsider the resummed five-loop $\beta$-functions obtained within the minimal subtraction scheme without $\varepsilon$-expansion for both frustrated magnets and the well-controlled ferromagnetic systems with a cubic anisotropy. Analyzing the convergence properties of the critical exponents in these two cases we find that the fixed point supposed to control the second order phase transition of frustrated magnets is very likely an unphysical one. This is supported by its non-Gaussian character at the upper critical dimension $d=4$. Our work confirms the weak first order nature of the phase transition occurring at three dimensions and provides elements towards a unified picture of all existing theoretical approaches to frustrated magnets.' address: - '$^{1}$ LPTMC, CNRS-UMR 7600, Université Pierre et Marie Curie, 75252 Paris Cédex 05, France' - '$^{2}$ Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA–79011 Lviv, Ukraine' - '$^{3}$ Institut für Theoretische Physik, Johannes Kepler Universität Linz, A-4040 Linz, Austria' - '$^{4}$ Ivan Franko National University of Lviv, UA–79005 Lviv, Ukraine' author: - 'B. Delamotte$^{1}$, Yu. Holovatch$^{2,3}$, D. Ivaneyko$^{4}$, D. Mouhanna$^{1}$ and M. Tissier$^{1}$' title: Fixed points in frustrated magnets revisited ---

Introduction.
=============

Although undoubtedly successful in describing the critical behavior of $O(N)$-like models, [*perturbative*]{} field theory is still unable to provide a clear, non-controversial understanding of the physics of certain more complex models, among which are the famous Heisenberg or $XY$ frustrated magnets (see [@delamotte03] and references therein).
At the core of the problem is the fact that different kinds of perturbative approaches, performed up to five- or six-loop order, lead to contradictory results: in dimension $d=3$, first order phase transitions are predicted within the $\varepsilon$ (or pseudo-$\varepsilon$) expansion [@antonenko95; @holovatch04; @calabrese03c], whereas a second order transition is found in the fixed-dimension (FD) perturbative approaches performed either in the minimal-subtraction ($\overline{\hbox{MS}}$) scheme [*without*]{} $\varepsilon$-expansion [@calabrese04] or in the massive scheme [@pelissetto01a]. In fact, FD results for frustrated magnets are supported neither by experiments nor by Monte Carlo simulations [@delamotte03; @itakura01; @peles04; @bekhechi06; @quirion06] (see however [@calabrese04], where a scaling behavior is found). They also disagree with the results obtained from the non-perturbative renormalization group (NPRG) approach [@delamotte03; @tissier00], which predicts (weak) first order phase transitions in $d=3$, in agreement with the $\varepsilon$-expansion analysis. In this article we shed light on the discrepancies encountered in perturbative approaches to frustrated magnets by showing that the FD approaches lead to dubious predictions for the critical physics in $d=3$. Our key point relies on the very nature of the FD perturbative approach and is easy to grasp already for the simplest — $O(N)$ — model. In this case, the (non-resummed) renormalization group (RG) $\beta$-function at $L$ loops is a polynomial of order $L+1$ in its coupling constant $u$. Thus, it admits $L+1$ roots $u^*$, $\beta(u^*)=0$, that are either real or complex. Within the $\varepsilon$-expansion, when one solves the fixed point (FP) equation in successive orders in $\varepsilon=4-d$, the only non-trivial FP retained is, by definition, such that $u^*\sim \varepsilon$.
On the contrary, in the FD approaches, when one directly (analytically or numerically) solves the non-linear FP equation at fixed $d$ (fixed $\varepsilon$), no real root can be discarded a priori. As a result, the generic situation is that the number of FPs, as well as their stability, varies with the order $L$: at a given order, there can exist several real and stable FPs, or none, instead of a single one. This artefact of the FD approach is already known and was first noticed in the massive scheme in $d=3$ [@parisi80]. The way to cope with it is also known: resumming the perturbative expansion of $\beta(u)$ (see e.g. [@zinnjustin89]) is supposed both to restore the non-trivial Wilson-Fisher FP and to suppress the non-physical or “spurious” roots. This is indeed what occurs for the $O(N)$ model, for which the FP analysis performed on the resummed $\beta$-function of FD approaches makes it possible to discriminate between physical and “spurious” FPs. However, this ability of the resummation procedures to eradicate spurious solutions of the FD approach has never been questioned and, [*a fortiori*]{}, never evaluated in the context of more complex models, in particular models with several coupling constants. We argue precisely, in this article, that the situation is very different for frustrated magnets and probably for several other models. Indeed, considering the $\beta$-functions derived at five loops in the $\overline{\hbox{MS}}$ scheme and using a standard resummation procedure [@calabrese04], we show that the FP found in $d=3$ without expanding in $\varepsilon$, although it persists after resummation, is in fact spurious.
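The order-dependence of the fixed-dimension FP roots is easy to demonstrate numerically. The sketch below counts the real positive roots of a toy truncated $\beta$-function with factorially growing, sign-alternating coefficients; the coefficients are illustrative inventions, not an actual perturbative series.

```python
import numpy as np
from math import factorial

def real_positive_fps(ascending_coeffs):
    """Real positive roots u* of beta(u) = 0, coefficients in ascending powers."""
    roots = np.roots(ascending_coeffs[::-1])    # np.roots wants descending order
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 1e-9)

# Toy L-loop beta function beta_L(u) = -u + u^2 + sum_{k=3}^{L+1} c_k u^k with
# factorially growing, sign-alternating *invented* coefficients c_k:
n_fps = {}
for L in range(1, 5):
    coeffs = [0.0, -1.0, 1.0] + [(-1) ** k * 0.35 * factorial(k - 2)
                                 for k in range(3, L + 2)]
    n_fps[L] = len(real_positive_fps(np.array(coeffs)))
    print(f"L = {L} loops: {n_fps[L]} real positive fixed point(s)")
```

In this toy example, the non-trivial FP present at one loop disappears at two loops and reappears at three: the number of real positive roots jumps between truncation orders, which is precisely why resummation is required before any FP found at fixed dimension can be trusted.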
Our conclusion is based on several facts: (i) the critical exponents computed at the FP supposed to control the second order behavior of frustrated magnets display poor convergence properties with the order of computation in the controversial cases of XY and Heisenberg spin systems; (ii) when analyzed with the same FD approach, the field theory relevant to ferromagnetic systems with cubic anisotropy displays a similar FP — having no counterpart within the $\varepsilon$-expansion — in contradiction with its well-established critical physics, and the critical exponents computed at this supernumerary FP display the same poor convergence properties with the loop order as in the case of XY and Heisenberg frustrated magnets; (iii) the coordinates $(u_1^*,u_2^*)$ of the attractive FP found in the FD approach of frustrated magnets are multivalued functions of $(d,N)$ — $N$ being the number of spin components — because of the existence of a topological singularity in the mapping between $(d,N)$ and the FP coordinates $(u_1^*(d,N),u_2^*(d,N))$, and this singularity provides strong indications of the existence of pathologies in the RG equations obtained at fixed dimension; (iv) finally, we provide strong arguments showing that the supernumerary FPs found in the frustrated and cubic models survive in the upper critical dimension $d=4$, where they are found to be non-Gaussian, a behavior deeply connected with the existence of the above-mentioned topological singularity. Given the present state of knowledge of $\phi^4$-like theories, which are very likely trivial in $d=4$, this fact confirms the serious doubts about the actual existence of these supernumerary FPs.

Resummation method.
===================

To investigate the five-loop $\beta$-functions derived in the $\overline{\hbox{MS}}$ scheme, we have to resum them.
Before discussing the case of a series in two coupling constants, relevant for frustrated magnets or ferromagnetic models with cubic anisotropy, we recall, for the sake of clarity, the main steps needed to resum a series in one coupling constant $u$, as well as the underlying hypotheses (see [@suslov05] for a review). Let us consider a series $$f(u)=\sum_{n} a_n \ u^n \ \label{series1}$$ where the coefficients $a_n$ are supposed to grow as $n!$. The Borel-Leroy sum associated with $f(u)$ is given by: $$B(u)=\sum_{n} {a_n\over \Gamma[n+b+1]} \ u^n \ \label{borelsum}$$ and is supposed to converge, in the complex plane, inside a circle of radius $1/a$, where $u=-1/a$ is the singularity of $B(u)$ closest to the origin. Then, using this definition as well as $\Gamma[n+b+1]=\int_0^{\infty} t^{n+b}\ e^{-t} dt$, one can rewrite $$f(u)= \sum_{n} {a_n\over \Gamma[n+b+1]} \ u^n \int_0^{\infty} \ dt \ e^{-t}\ t^{n+b}$$ and, interchanging summation and integration, [*define*]{} the Borel transform of $f$ as: $$f_B(u)=\int_0^{\infty} \ dt \ e^{-t}\ t^{b}\ \ B(ut)\ . \ \label{boreltrans}$$ In order to perform the integral in (\[boreltrans\]) over the whole positive real semi-axis, one has to find an analytic continuation of $B(t)$. Several methods can be used, Padé approximants for instance. However, it is generally believed that the use of a conformal mapping is more efficient, since it makes use of the convergence properties of the Borel sum.
Under the assumption that all the singularities of $B(u)$ lie on the negative real axis and that the Borel-Leroy sum is analytic in the whole complex plane except for the cut extending from $-1/a$ to $-\infty$, one can perform the change of variable: $$\omega(u)={\sqrt{1 + a\, u}-1\over \sqrt{1 + a\, u}+1} \hspace{1cm} \Longleftrightarrow \hspace{1cm} u(\omega)={4\over a}{\omega\over(1-\omega)^2} \label{conformal}$$ that maps the complex $u$-plane cut from $u=-1/a$ to $-\infty$ onto the unit circle in the $\omega$-plane, such that the singularities of $B(u)$ lying on the negative axis now lie on the boundary of the circle $|\omega|=1$. The resulting expression $B(u(\omega))$ has a convergent Taylor expansion within the unit circle $|\omega|<1$ and can be rewritten: $$B(u(\omega))=\sum_{n} d_n(a,b) \hspace{0.1cm} \left[\omega(u)\right]^n \label{borel3}$$ where the coefficients $d_n(a,b)$ are computed so that the re-expansion of the r.h.s. of (\[borel3\]) in powers of $u$ coincides with that of (\[series1\]). One obtains through (\[borel3\]) an analytic continuation of $B(u)$ in the whole $u$ cut-plane, so that a resummed expression of the series $f$ can be written: $$f_R(u)=\sum_{n} d_n(a, b) \hspace{-0.1cm} \int_0^{\infty} \hspace{-0.2cm}dt\, \, {e^{-t}\, t^{b}\ \left[\omega(u t)\right]^n}\ . \label{resummation1}$$ In practice it is interesting to generalize expression (\[resummation1\]) by introducing [@kazakov79] the expression $$f_R(u)=\sum_{n} d_n(\alpha,a, b) \hspace{-0.1cm} \int_0^{\infty} \hspace{-0.2cm}dt\, \, {e^{-t}\, t^{b}}\ { \left[\omega(u t)\right]^n \over \left[1-\omega(u t)\right]^{\alpha} }\ \label{resummation2}$$ whose meaning will be explained just below. If an infinite number of terms of the series $f(u)$ were known, expression (\[resummation2\]) would be independent of the parameters $a$, $b$ and $\alpha$. However, when only a finite number of terms are known, $f_R(u)$ acquires a dependence on them.
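The full chain of Eqs.(\[borelsum\])–(\[resummation1\]) — Borel-Leroy transform, conformal mapping, order-by-order determination of the coefficients $d_n$, and final integration — can be illustrated numerically on the standard toy example of the Euler series $\sum_n (-1)^n n!\, u^n$, whose exact Borel sum is $\int_0^\infty e^{-t}/(1+ut)\,dt$. The truncation order and the parameter values below are illustrative choices, not taken from the text:

```python
import numpy as np
from scipy.integrate import quad

# Toy divergent series: f(u) ~ sum_n (-1)^n n! u^n (Euler's series).
# With Leroy parameter b = 0, its Borel-Leroy sum is B(u) = 1/(1+u),
# singular at u = -1/a with a = 1; the exact Borel sum is
# f(u) = int_0^inf e^{-t} / (1 + u t) dt.
N, a, b = 8, 1.0, 0.0                                    # illustrative choices
borel = np.array([(-1.0) ** n for n in range(N + 1)])    # a_n / Gamma(n+b+1)

# Taylor coefficients of sqrt(1 + a u) up to order N.
s = np.zeros(N + 1)
s[0] = 1.0
for k in range(1, N + 1):
    s[k] = s[k - 1] * (0.5 - (k - 1)) / k * a

def mul(p, q):                                # truncated series product
    r = np.zeros(N + 1)
    for i, pi in enumerate(p):
        for j in range(N + 1 - i):
            r[i + j] += pi * q[j]
    return r

# w(u) = (sqrt(1+au)-1)/(sqrt(1+au)+1) as a truncated Taylor series.
num = s.copy(); num[0] -= 1.0
den = s.copy(); den[0] += 1.0
inv_den = np.zeros(N + 1); inv_den[0] = 1.0 / den[0]
for k in range(1, N + 1):
    inv_den[k] = -sum(den[j] * inv_den[k - j] for j in range(1, k + 1)) / den[0]
w = mul(num, inv_den)

# Coefficients d_n fixed order by order: sum_n d_n [w(u)]^n = B(u) + O(u^{N+1}).
W = np.zeros((N + 1, N + 1))                  # W[n] = Taylor coeffs of w(u)^n
cur = np.zeros(N + 1); cur[0] = 1.0
for n in range(N + 1):
    W[n] = cur
    cur = mul(cur, w)
d = np.linalg.solve(W.T, borel)               # triangular system

def f_resummed(u):
    """Evaluate the analogue of Eq. (resummation1) for the toy series."""
    def integrand(t):
        wt = (np.sqrt(1 + a * u * t) - 1) / (np.sqrt(1 + a * u * t) + 1)
        return np.exp(-t) * t ** b * np.polyval(d[::-1], wt)
    return quad(integrand, 0, np.inf)[0]

u0 = 0.3
exact = quad(lambda t: np.exp(-t) / (1 + u0 * t), 0, np.inf)[0]
print(f_resummed(u0), exact)   # the two values agree closely
```

At $u=0.3$ the truncated resummation reproduces the exact Borel sum to high accuracy, while the direct partial sums of the series eventually diverge with the order.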
In principle, the parameters $a$ and $b$ are fixed by the large-order behavior of the series: $$a_{n\to\infty}\sim (-a_0)^n \, n!\, n^{b_0}$$ which leads to $a=a_0$ and $b\simeq b_0+3/2$ [@leguillou80], while $\alpha$ is determined by the strong-coupling behavior of the initial series: $$f(u\to\infty) \sim u^{\alpha_0/2} $$ which can be imposed at any order of the expansion by choosing $\alpha=\alpha_0$. The common assumption is that the above choice of $a$, $b$ and $\alpha$ improves the convergence of the resummation procedure since it encodes exact results. Let us however emphasize that, often, only $a$ is known and that the other parameters, $\alpha$ and $b$, are considered either as free (as for instance in [@calabrese04]) or variational (for instance in [@mudrov98c], where $\alpha$ is determined by optimizing the apparent convergence of the series). In any case, the choice of the values of $a$, $\alpha$ and $b$ must be validated a posteriori by checking that a small change of their values does not yield strong variations of the quantities under study. Such variations would clearly indicate that one has chosen an unstable, parameter-dependent calculation procedure or that one has not computed the quantities under study at a sufficiently high order of perturbation theory to consider them as converged. In the following, we shall employ this “stability criterion” to validate – or invalidate – the results obtained by means of the FD perturbative approach to frustrated magnets. In the context of frustrated magnets, the above-described resummation procedure must be extended to several (here two) coupling constants. In this case, $f$ is a function of two variables $u_1$ and $u_2$ known through its series expansion in powers of $u_1$ and $u_2$. The resummation technique used in [@calabrese04] consists in considering $f$ as a function of $u_1$ and $z=u_2/u_1$: $$f(u_1,z)=\sum_{n} a_n(z) \ u_1^n \ \label{series}$$ and in resumming with respect to the single variable $u_1$ only.
An important hypothesis underlying this procedure is that one can safely resum (\[series\]) with respect to $u_1$ while keeping $z$ fixed, [*i.e.*]{} without resumming with respect to $u_2$. Under this hypothesis the resummed expression associated with $f$ reads: $$f_R(u_1,z)=\sum_{n} d_n(\alpha,a(z),b;z) \hspace{-0.1cm} \int_0^{\infty} \hspace{-0.2cm}dt\, \, {e^{-t}\, t^{b}}{ \left[\omega(u_1 t;z)\right]^n \over \left[1-\omega(u_1 t;z)\right]^{\alpha} } \label{resummation}$$ with: $$\omega(u;z)={\sqrt{1 + a(z)\, u}-1\over \sqrt{1 + a(z)\, u}+1}$$ where, as above, the coefficients $d_n(\alpha,a(z),b;z)$ in (\[resummation\]) are computed so that the re-expansion of the r.h.s. of (\[resummation\]) in powers of $u_1$ coincides with that of (\[series\]). Frustrated magnets. =================== The Hamiltonian relevant for frustrated systems is given by: $$\begin{array}{ll} \displaystyle \hspace{0cm}{\mathcal H}= \int{\rm d^d} x \Big\{\frac{1}{2} \left[(\partial\pmb{$\phi$}_1)^2+ (\partial\pmb{$\phi$}_2)^2 + m^2 (\pmb{$\phi$}_1^2+\pmb{$\phi$}_2^2)\right]+\\ \\ \hspace{2.3cm}\displaystyle \frac{u_1}{4!}\ \left[\pmb{$\phi$}_1^2+ \pmb{$\phi$}_2^2\right]^2 +\frac{u_2}{12}\ \left[(\pmb{$\phi$}_1 \cdot \pmb{$\phi$}_2)^2- \pmb{$\phi$}_1^2\,\pmb{$\phi$}_2^2\right] \Big \} \end{array} \label{landau}$$ where the $\pmb{$\phi$}_i$, $i=1,2$, are $N$-component vector fields and $u_1$ and $u_2$ are the coupling constants, which satisfy $u_1>0$ and $u_2<4 u_1$ — which corresponds to a Hamiltonian bounded from below. For $m^2>0$ the ground state of Hamiltonian (\[landau\]) is given by $\pmb{$\phi$}_1=\pmb{$\phi$}_2=0$ while for $m^2<0$ it is given by a configuration where $\pmb{$\phi$}_1$ and $\pmb{$\phi$}_2$ are orthogonal with the same norm.
The Hamiltonian (\[landau\]) thus describes a symmetry-breaking scheme between a disordered and an ordered phase where the $O(N)$ rotation group is broken down to $O(N-2)$, which is relevant for frustrated magnets (for details see [@delamotte03] for instance). Let us first recall the FP structure of Hamiltonian (\[landau\]) [*around $d=4$*]{} at leading order in $\varepsilon=4-d$ [@garel76; @bailin77; @yosefin85]. For $N$ larger than a critical value $N_c(d)$ depending on the dimension, the RG equations display, apart from the usual Gaussian ($u_1^*=u_2^*=0$) and $O(2N)$ ($u_1^*\ne 0, u_2^*=0$) FPs, two non-trivial ($u_1^*\ne 0$ and $u_2^*\ne 0$) FPs: one, $C_+$, is stable; the other one, $C_-$, is unstable. Above $N_c(d)$, the transition is thus predicted to be of second order. As $N$ is lowered starting from values of $N>N_c(d)$, the two FPs $C_+$ and $C_-$ get closer and finally collapse for $N=N_c(d)$. Below $N_c(d)$, there is no longer a stable FP and the transition is expected to be of first order. The value of $N_c(d)$ for $d=3$ has been computed by several means: in a double expansion in $u_1$, $u_2$ and $\varepsilon=4-d$ up to five loops [@antonenko95; @holovatch04; @calabrese03c], directly in $d=3$ in a weak-coupling expansion within the massive scheme up to six loops [@pelissetto01a], within an NPRG approach [@delamotte03] and, finally, within the $\overline{\hbox{MS}}$ scheme [*without*]{} $\varepsilon$-expansion [@calabrese04]. The predictions obtained within the perturbative approaches performed at fixed $d$ strongly differ from those obtained using the other methods, in particular in $d\le 3$, see below. This is the reason that led us to reconsider this kind of approach. We thus apply the resummation procedure described above, without $\varepsilon$-expansion [@schloms87], to the $\beta_{u_i}$ functions, $i=1,2$, obtained at five loops in the $\overline{\hbox{MS}}$ scheme [@calabrese04].
More precisely, as in [@calabrese04], we resum $(\beta_{u_i}(u_1,z) + \varepsilon u_i)/{u_1}^2$, $i=1,2$, instead of $\beta_{u_i}(u_1,z)$, which, in fact, leads to similar results. For the model (\[landau\]), the region of Borel-summability is given by $2u_1-u_2\ge 0$ (see for instance [@calabrese04] for details), to which corresponds the value $a(z)=1/2$. For $2 u_1-u_2\le 0$ there exists a singularity on the positive real axis, so that the series is no longer Borel-summable. However, as noted in [@calabrese04], as long as $4u_1-u_2\ge 0$, the singularity of the Borel transform closest to the origin is still on the negative axis. Thus, the asymptotic behavior is still correctly taken into account by the conformal mapping and one can, [*a priori*]{}, trust the resummed results. Note finally that $b$ and $\alpha$ are typically varied in the ranges $[6,30]$ and $[-0.5,2]$. One finds, in agreement with [@calabrese04], that there exists a curve (parametrized by $N_c(d)$ or its reciprocal $d_c(N)$) such that for $d<d_c(N)$ a stable FP $C_+$ governs the critical properties of the system. The curve $N_c(d)$ obtained within this scheme is shown in Fig.\[courbes\_ncd\] by a dotted line ($N_c^{\rm FD}$). On the same figure we show the results for $N_c(d)$ obtained within the NPRG approach [@delamotte03], red solid curve ($N_c^{{\rm NPRG}}$), and by the (resummed) $\varepsilon^5$-expansion [@calabrese04], green solid curve ($N_c^\varepsilon$). Two points must be noted. First, the curves $N_c^\varepsilon(d)$ and $N_c^{{\rm NPRG}}(d)$ show a remarkable agreement, given the very different nature of the two corresponding computations. Second, one can see in Fig.\[courbes\_ncd\] the strong discrepancy between the two previous approaches and the perturbative FD approach.
In particular, the S-like shape of the curve $N_c^{\rm FD}(d)$ obtained within the perturbative FD approach is such that the FP $C_+^{\rm FD}$ exists for all $N\geq 2$ in $d=3$, contrary to the other approaches, in which a FP $C_+$ exists only for $N>N_c(d=3)\simeq5$ [@delamotte03]. This situation, put together with the fact, already underlined in the Introduction, that the FD approach a priori displays spurious FPs, leads us to strongly question the validity of the results obtained at FD and, in particular, the existence of genuine FPs in $d=3$ for $N=2,3$. ![Curves $N_c(d)$ obtained within the $\overline{\hbox{MS}}$ scheme with $\varepsilon$-expansion ($N_c^\varepsilon$), without $\varepsilon$-expansion ($N_c^{\rm FD}$) and the NPRG approach ($N_c^{{\rm NPRG}}$). The resummation parameters for the $\overline{\hbox{MS}}$ curve are $a=1/2$, $b=10$ and $\alpha=1$.[]{data-label="courbes_ncd"}](courbesncd.eps){width="0.8\linewidth"} Convergence of the loop expansion. ================================== The frustrated model. --------------------- We first examine the convergence of the loop expansion of the FD approach by studying the sensitivity of the resummed quantities with respect to variations of the resummation parameters $b$ and $\alpha$ as well as the order $L$ of the computation. We focus on the (real part of the) correction-to-scaling exponent $\omega$ at the FP $C_+$, which governs its stability: for Re($\omega)>0$ the FP is stable and for Re($\omega)<0$ it is unstable. Within the FD approach, one finds in $d=3$ for $N<7$ that $C_+$ is a focus, that is, $\omega$ is complex at this FP (the flow spirals around it). In practice, following [@mudrov98c], we optimize $\omega(\alpha,b,L)$ by first choosing $\alpha$ such that $\omega(L+1)-\omega(L)$ is minimal. Then, one determines the parameter $b$ in such a way that $\omega(b)$ is stationary. We have checked that similar convergence properties are obtained for the exponent $\nu$ [@delamottenext].
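The logic of this optimization — comparing a resummed quantity at two successive orders and checking its sensitivity to the parameters — can be made concrete on the standard toy example of the Euler series $\sum_n(-1)^n n!\,u^n$, whose exact Borel sum is $\int_0^\infty e^{-t}/(1+ut)\,dt$. This is a sketch only: for brevity the analytic continuation of the Borel-Leroy transform is done here with a Padé approximant rather than the conformal mapping, and all parameter values are illustrative:

```python
import numpy as np
from math import gamma, factorial
from scipy.integrate import quad

def pade(c, m):
    """[l/m] Pade approximant (l = len(c)-1-m) of the series sum_k c_k x^k."""
    L = len(c) - 1
    l = L - m
    A = np.array([[c[l + i - j] if 0 <= l + i - j <= L else 0.0
                   for j in range(1, m + 1)] for i in range(1, m + 1)])
    rhs = -np.array([c[l + i] for i in range(1, m + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))   # denominator coeffs
    p = [sum(q[j] * c[k - j] for j in range(min(k, m) + 1)) for k in range(l + 1)]
    return np.array(p), q

def f_resummed(u, L, b):
    """Borel-Leroy resummation of the Euler series, truncated at order L."""
    # b = 0 would make the Borel transform exactly rational (degenerate Pade
    # system for this toy), so only b > 0 is used below.
    c = [(-1) ** n * factorial(n) / gamma(n + b + 1) for n in range(L + 1)]
    p, q = pade(c, L // 2)
    def integrand(t):
        x = u * t
        return np.exp(-t) * t ** b * np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
    return quad(integrand, 0, np.inf)[0]

u0 = 0.3
exact = quad(lambda t: np.exp(-t) / (1 + u0 * t), 0, np.inf)[0]
for b in (0.5, 1.0, 2.0):
    diff45 = abs(f_resummed(u0, 5, b) - f_resummed(u0, 4, b))
    err5 = abs(f_resummed(u0, 5, b) - exact)
    print(b, diff45, err5)   # both small and weakly b-dependent: "converged"
```

For this well-behaved toy, the order-four and order-five results nearly coincide and barely depend on $b$ — the signature of convergence that the text finds for $N=9$ but not for $N=3$.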
We start by considering the uncontroversial case of a “large” number of components of the order parameter, typically greater than 7 in $d=3$. Indeed, in this case, [*all*]{} perturbative and non-perturbative approaches agree and find a stable FP characterizing a second-order behavior [@delamotte03]. In Fig.\[frustreN9\], we display $\omega(b)$ in the case $N=9$ for $L=4$ and $L=5$ for typical values of $\alpha$. At a given order, one sees that it is indeed possible to find values of $b$ that make $\omega$ stationary. By performing the same analysis at four- and five-loop orders one observes (i) that the dependence of $\omega$ on $b$ decreases with the order of the expansion, as expected; (ii) a convergence of the results with the order, with, however, large error bars, typically around $5-7\%$. Note that the typical error bars obtained for the Ising model with the same methodology at the same orders are much less than 1$\%$. ![The critical exponent $\omega$ for $N=9$ as a function of $b$ at four (curves on the left) and five (curves on the right) loops for $\alpha=-0.5,\, 0,\, 0.5$ for the frustrated model.[]{data-label="frustreN9"}](omegafrustresN9.eps){width="0.7\linewidth"} We then consider the controversial case of Heisenberg systems ($N=3$). In Fig.\[frustreN3\] we display again $\omega(b)$ for $L=4$ and $L=5$. There, a new phenomenon occurs: while one still finds a stationary value of $\omega(b)$ at four loops, this is no longer the case at five loops. Moreover, considering the values of $\alpha$ and $b$ that minimize the difference $\omega(L=5)-\omega(L=4)$, one observes a bad “convergence”: the error bars on the critical exponents are now of order $30\%$ and thus far larger than in the case $N=9$. An even worse behavior is obtained in the XY case. From these analyses one gets striking indications of drastically different convergence properties for the $N=9$ and $N=3$ cases.
This suggests a qualitative difference between the corresponding FPs. To characterize this difference more precisely, let us now study the cubic model along the same lines. ![The (real part of the) critical exponent $\omega$ for $N=3$ as a function of $b$ at five (upper curves) and four (lower curves) loops for $\alpha=-0.5,\, 0,\, 0.5$ for the frustrated model. Note the change of scale with respect to Fig.\[frustreN9\].[]{data-label="frustreN3"}](omegafrustres-1.eps){width="0.7\linewidth"} The cubic model. ---------------- We now consider the ferromagnetic model with cubic anisotropy whose Hamiltonian is: $$\displaystyle \hspace{0cm}{\mathcal H}= \int{\rm d^d} x \Big\{\frac{1}{2}\left[(\partial\pmb{$\phi$})^2+ m^2 \pmb{$\phi$}^2\right]+ {u\over 4!} \left[\pmb{$\phi$}^2\right]^2 +{v\over 4!}\sum_{i=1}^N \phi_i^4\Big \} \label{cubic}$$ with an $N$-component vector field $\pmb{$\phi$}$. The Hamiltonian (\[cubic\]) is used to study the critical behavior of numerous magnetic and ferroelectric systems with the appropriate order-parameter symmetry (see e.g. [@folk00b]). The $\beta$-functions are known at five-loop order in the $\overline{\hbox{MS}}$ scheme [@kleinert95] and at six-loop order in the massive scheme [@carmona00]. Apart from the Gaussian ($u^*=v^*=0$) and an Ising FP ($u^*=0, v^*\ne 0)$, there exist two more FPs: the $O(N)$-symmetric FP $(u^*\neq0, v^*=0)$ and the mixed one, $M$ $(u^*\neq 0, v^*\neq 0)$. The Ising and Gaussian FPs are both unstable for all values of $N$. The $O(N)$ FP is stable and $M$ is unstable, with $ v^*< 0$, for $N<\tilde{N_c}$, and the opposite holds for $N>\tilde{N_c}$. The critical value $\tilde{N_c}$ has been found to be slightly less than 3: for instance $\tilde{N_c}\sim 2.89(4)$ in [@carmona00] and $\tilde{N_c}\sim 2.862(5)$ in [@folk00b].
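For orientation, this FP structure is already visible at one loop, where everything can be solved in closed form. The sketch below writes the one-loop $\beta$ functions in one common normalization (an assumption — the prefactors differ between references); in it the mixed FP has $v^*\propto(N-4)$, i.e. $\tilde{N_c}=4$ at leading order, which the five- and six-loop computations quoted above lower to $\simeq 2.9$:

```python
import sympy as sp

# One-loop beta functions of the cubic model in one common normalization
# (assumed conventions; prefactors vary between references):
u, v, N, eps = sp.symbols('u v N epsilon')
beta_u = -eps*u + (N + 8)*u**2 + 6*u*v
beta_v = -eps*v + 12*u*v + 9*v**2

fps = sp.solve([beta_u, beta_v], [u, v], dict=True)

# Four FPs: Gaussian, Ising, O(N)-symmetric and mixed.
mixed = [fp for fp in fps if fp[u] != 0 and fp[v] != 0][0]
print(sp.simplify(mixed[u]), sp.factor(mixed[v]))
# i.e. u* = eps/(3N) and v* = eps (N-4)/(9N): v* changes sign at N = 4.
```

The sign change of $v^*$ at $N=4$ is the one-loop version of the exchange of stability between the $O(N)$ and mixed FPs described in the text.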
Let us now analyze the FP structure of the model (\[cubic\]) within the $\overline{\hbox{MS}}$ scheme without $\varepsilon$-expansion by applying the conformal-mapping Borel transform (\[resummation\]) at $d=3$ ($\varepsilon=1$). The parameter $a(z=v/u)$ entering (\[resummation\]) is now given by $a(z)=1+z$ for $z>0$ and $a(z)=1+z/N$ for $z<0$, while the region of Borel-summability is given by the conditions $u+v>0$ and $Nu+v>0$. Within this scheme one surprisingly observes that, in addition to the above-mentioned usual FPs, there exist, in a whole domain of parameters $b$ and $\alpha$, several other FPs that have no counterparts in the $\varepsilon$-expansion. In particular, one of them, which we call $P$ (which is stable and such that $u^*>0, v^*<0$), exists for any value of $N\lesssim 7.5$ and lies in the region of Borel-summability $u+v>0$. The presence of this FP, if taken seriously, would have important physical consequences since it would correspond to a second-order phase transition with a new universality class. However, no such transition has ever been reported. On the contrary, a first-order behavior for all values of $N$ larger than $\tilde{N_c}$ is found within perturbative [@carmona00; @calabrese03d] or non-perturbative [@tissier01b] field-theoretical analyses as well as numerical simulations [@itakura99] in related systems (four-state antiferromagnetic Potts model). Thus, the existence of $P$ has to be considered as an artefact of the FD analysis. Note finally, and interestingly, that $P$ is found to be a [*focus*]{} FP, a striking similarity with the frustrated case. ![The critical exponent $\omega$ as a function of $b$ at five (upper curves) and four (lower curves) loops for $\alpha=1,1.5,1.7$ for the cubic model (N=2).[]{data-label="omegacubic"}](omegacubique.eps){width="0.7\linewidth"} At this stage, it is very instructive to perform the same convergence analysis as the one performed for $C_+$ in the frustrated case.
The Ising, $O(N)$ and mixed $M$ FPs display good convergence properties and we focus in the following on the supernumerary FP $P$. In Fig.\[omegacubic\] we plot $\omega(b)$ at this FP for $L=4,5$ and for three different values of $\alpha$. Interestingly, when comparing Fig.\[frustreN3\] and Fig.\[omegacubic\] we find similar behavior between the cubic case and the frustrated case for $N=3$. Indeed, in Fig.\[omegacubic\] one finds stationary values of $\omega(b)$ at four loops but, at five loops, this is only the case for $\alpha=1.7$. It is also interesting to consider the values of $\alpha$ and $b$ that minimize the difference $\omega(L=5)-\omega(L=4)$. From there, one finds error bars for $\omega$ of order $40\%$, i.e. of the [*same*]{} order of magnitude as the ones found in the $N=3$ frustrated case. Given that the FP $P$ is clearly an artefact of the FD analysis, our study suggests that the lack of convergence of $\omega(b)$ [*characterizes*]{} the behavior at a spurious FP. Thus, coming back to the frustrated case, one is naturally led to the conclusion that the FPs $C_+^{{\rm FD}}$ found in $d=3$ for $N=2,3$ should also be interpreted as spurious FPs, as artefacts of the FD analysis. In order to confirm this statement we now examine another characteristic feature of the set of $C_+^{{\rm FD}}$ FPs considered as functions of $d$ and $N$. The singularity $S$. ==================== We now display a specific feature of the RG flow of frustrated magnets analyzed within the FD approach that provides another strong indication of the problematic character of this approach. Let us consider the coordinates $(u_1^*, u_2^*)$ of the FP $C_+^{{\rm FD}}$ as functions of $d$ and $N$: $u_i^*=u_i^*(d,N)$, $i=1,2$.
These functions are the roots of the $\beta$-functions of the couplings $u_1$ and $u_2$ obtained in the FD approach and resummed according to the scheme sketched above, Eq.(\[resummation\]): $$\beta_{u_1}(u_1^*,u_2^*)=\beta_{u_2}(u_1^*,u_2^*)=0 \ .$$ The resummed $\beta$-functions are smooth functions of $d$ and $N$, showing no particular feature for $2<d\le4$ and $2\le N<\infty$. However, as we now show, the functions $u_i^*=u_i^*(d,N)$, $i=1,2$, exhibit a non-trivial behavior as $d$ and $N$ are varied continuously around the point labelled $S$ in Fig.\[courbes\_ncd\]. Let us first give, in Fig.\[focuslocus\], the precise definition of the curve $N_c(d)$ in the FD approach. This curve is made of two parts: the first one, labelled (I), corresponds to the part of the curve above $S$, while the second one, labelled (II), corresponds to the part below $S$. (I) is defined as the line in the $(d,N)$ plane above which there exist two non-trivial [*locus*]{} FPs (that is, having real exponents $\omega$), one stable, $C_+^{{\rm FD}}$, and one unstable, $C_-^{{\rm FD}}$, and below which there exists none. We recall the mechanism of disappearance of these FPs: when, [*at fixed dimension $d$*]{}, $N$ is decreased from large values down to $N=N_c(d)$, the two FPs $C_+^{{\rm FD}}$ and $C_-^{{\rm FD}}$ get closer and closer to each other and finally collapse right on (I). For $N$ below $N_c(d)$, the coordinates $u_1^*$ and $ u_2^*$ become complex and there is no longer any non-trivial FP with real coordinates. Thus, the part (I) of the curve $N_c(d)$ corresponds to the region where the speed of the RG flow between $C_+^{{\rm FD}}$ and $C_-^{{\rm FD}}$ vanishes. The same behavior is observed in the other ($\varepsilon$-expansion and NPRG) approaches and the numerical values of $N_c(d)$ in the corresponding part of the curve are very close when calculated by the different approaches (c.f. Fig.\[courbes\_ncd\]).
![Curve $N_c(d)$ within the $\overline{\hbox{MS}}$ scheme without $\varepsilon$-expansion. The part (I) of the curve $N_c^{{\rm FD}}$ corresponds to a boundary between a region where, at a given dimension $d$, there exists a stable locus FP at large $N$ and no FP at small $N$. The part (II) of the curve $N_c^{{\rm FD}}$ corresponds to a boundary between a region where, at fixed $N$, there exists an attractive focus FP for $d<d(N)$ and a repulsive focus for $d>d(N)$. Finally the line (F) is defined as follows: above (F), $C_+^{{\rm FD}}$ is a locus FP ($\omega_1$ and $\omega_2$ are both real) and below (F), it is a focus ($\omega_1$ and $\omega_2=\omega_1^*$ are complex). []{data-label="focuslocus"}](focuslocus.eps){width="0.7\linewidth"} As usual, the speed of the RG flow around a FP can be obtained by linearizing the flow equations at this point. It is thus governed by the two eigenvalues of the “stability” matrix $$M_{ij}={\frac{\partial\beta_{u_i}(u_1,u_2)}{\partial u_j}}{\Bigg\vert_{u_1^*, u_2^*}}$$ that provide the speeds of the RG flow in its two eigendirections at the FP considered. These eigenvalues define the two critical exponents $\omega_1$ and $\omega_2$ governing the corrections to scaling. The equation of the part (I) of $N_c(d)$ is thus given by: $$\omega_1=\omega_1\bigg(u_1^*(d,N), u_2^*(d,N)\bigg)=0$$ where $\omega_1$ is the eigenvalue of $M$ corresponding to the eigendirection of the flow joining $C_+^{{\rm FD}}$ to $C_-^{{\rm FD}}$. Now, when moving [*on*]{} the line $N_c(d)$ towards $S$, one finds that $\omega_2$ decreases and eventually vanishes. One thus defines the point $S$ by: $$\omega_2(u_1^*(S), u_2^*(S))=0\ .$$ Thus, right at $S$, $\omega_1=\omega_2=0$ and, with the choice of parameters $a=1/2, b=10, \alpha=1$, one finds that at $S$, $d=3.24$, $N=7$ and $(u_1^*(S), u_2^*(S))=(0.3,0.7)$. Thus, $S$ is just outside — but not far from — the region of Borel-summability.
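The distinction between locus and focus FPs encoded in the eigenvalues of $M$ can be sketched on a purely illustrative two-coupling flow (the $\beta$ functions below are a toy example, not those of the frustrated model): one finds the FP numerically and classifies it from the eigenvalues of the stability matrix.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy two-coupling flow with a single FP at the origin (illustrative only):
#   beta_1 = u1 - u2 + u1*(u1**2 + u2**2)
#   beta_2 = u1 + u2 + u2*(u1**2 + u2**2)
def betas(x):
    u1, u2 = x
    r = u1**2 + u2**2
    return [u1 - u2 + u1 * r, u1 + u2 + u2 * r]

fp = fsolve(betas, x0=[0.3, 0.2])           # locate the FP numerically

# Stability matrix M_ij = d(beta_i)/d(u_j) at the FP, by central differences:
h = 1e-6
M = np.zeros((2, 2))
for j in range(2):
    xp, xm = fp.copy(), fp.copy()
    xp[j] += h
    xm[j] -= h
    M[:, j] = (np.array(betas(xp)) - np.array(betas(xm))) / (2 * h)

omega = np.linalg.eigvals(M)                # here omega = 1 +/- i
print(fp, omega)
# A complex-conjugate pair -> a *focus* (the flow spirals around the FP);
# real eigenvalues would correspond to a *locus* (node or saddle) FP.
```

With the sign conventions of the text, Re($\omega)>0$ at this toy FP, so it is a stable focus, the situation found for $C_+^{{\rm FD}}$ below the line (F).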
Note also that, since its coordinates satisfy $4u_1^*- u_2^*>0$, $S$ is still in the region where the resummed results are trustworthy. In fact, the point $S$ is also a special point in the sense that it is the endpoint of another line, labelled $(F)$ in Fig.\[focuslocus\], which is such that below $(F)$ the two exponents $\omega_i$ acquire an imaginary part and are complex conjugates: $\omega_1=\omega_2^*$. This means that, below $(F)$, the FPs become focuses, either attractive or repulsive. This is in particular the case of the FP $C_+^{{\rm FD}}$ found in [@calabrese04] in $d=3$. From now on we concentrate on the FP $C_+^{{\rm FD}}$ since the coordinates of $C_-^{{\rm FD}}$ rapidly grow and go outside the region of Borel summability. Note that the part of (F) shown in Fig.\[focuslocus\] lies inside the region of Borel summability and thus, within the FD approach, is supposed to be well under control. We now define the part (II) of the curve $N_c(d)$ as the line separating the region where $C_+^{{\rm FD}}$ is an [*attractive*]{} focus FP (for $d<d_c(N)$) and the region where it is a [*repulsive*]{} focus FP (for $d>d_c(N)$) [^1]. Thus, by definition, (II) is the line on which the real part of the $\omega_i$ vanishes: $${\rm Re}(\omega_1)={\rm Re}(\omega_2)=0$$ but it is [*no longer*]{} a line on which $C_+^{{\rm FD}}$ collapses with another FP and disappears. It is remarkable that within the FD approach there exists a line (the part (II) of the curve $N_c(d)$) that has [*no counterpart*]{} in the other approaches. Contrary to what occurs above, the coordinates $u_1^*$ and $u_2^*$ of $C_+^{{\rm FD}}$ on the line (II) are, for a large part of (II), far outside the region of Borel summability. It is thus not possible to compute accurately the location of (II). We, however, emphasize that only the existence of (II) is necessary for the validity of our arguments, not its precise determination.
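As announced in the Introduction, a point such as $S$ where two FPs merge acts as a branch point that makes the FP coordinates multivalued functions of the parameters. This can be illustrated on a minimal toy model (a single coupling with $\beta(u;c)=u^2-c$, purely illustrative): the two roots $u^*=\pm\sqrt{c}$ merge at $c=0$, and Newton continuation of one root along a closed path in $c$ encircling the merging point exchanges the two branches, whereas a path that does not encircle it brings the root back to itself:

```python
import numpy as np

# Toy FP equation beta(u; c) = u**2 - c = 0: the branches u* = +/- sqrt(c)
# merge at c = 0, the analogue of a point where two FPs collapse.
def follow(c_path, u0):
    """Continue a root of u**2 - c = 0 along a path of c values."""
    u = complex(u0)
    for c in c_path[1:]:
        for _ in range(30):                  # Newton steps, seeded by the
            u -= (u * u - c) / (2.0 * u)     # root at the previous c
    return u

theta = np.linspace(0.0, 2.0 * np.pi, 2000)
path_A = np.exp(1j * theta)                  # closed path encircling c = 0
path_B = 2.0 + np.exp(1j * theta)            # closed path avoiding c = 0

print(follow(path_A, 1.0))                   # ~ -1: the two branches exchanged
print(follow(path_B, np.sqrt(3.0)))          # ~ +sqrt(3): the root returns
```

This is exactly the behavior found below for the coordinates of $C_+^{{\rm FD}}$ along paths that do or do not encircle $S$.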
As for the existence of this part (II) of the curve $N_c(d)$, it is an unavoidable consequence of the existence of $S$, which is supposed to be under control [*within the perturbative FD approach*]{}. Thus, either $S$ really exists and part (II) of $N_c(d)$ also exists — with a shape that could be somewhat different from the one drawn in Fig.\[focuslocus\] — or it does not exist and neither does $S$. In this second case, this would mean that the [*whole*]{} resummation scheme is questionable, at least for sufficiently low $d$ and $N$ (typically $d<3.2$ and $N<7$). We argue in the following that this is very probably what occurs. Let us now show, with Fig.\[paths\], that there exists a very non-trivial property of the RG flow which is a consequence of the existence of $S$. The idea is to follow continuously the coordinates of the FP $C_+^{{\rm FD}}$ along a path encircling $S$, path A in Fig.\[paths\] for instance. We start, for instance, at $(d=3, N=5)$, go to $(d=3.4, N=5)$, then to $(d=3.4, N=9)$, then to $(d=3, N=9)$ and finally go back to $(d=3, N=5)$. The surprising fact is that after a trip along such a closed path, the coordinates $(u_1^*, u_2^*)$ of $C_+^{{\rm FD}}$ do not go back to their original values. This is specific to $S$, since along any closed path that does not encircle this point — path B in Fig.\[paths\] for instance — the coordinates $(u_1^*, u_2^*)$ of $C_+^{{\rm FD}}$ always go back to their original values. Let us emphasize here that such a path can well cross part (I) of the curve $N_c(d)$, as path B does in Fig.\[paths\]. In this case, $C_+^{{\rm FD}}$ has complex coordinates on the part of the path which is below (I), but as the path crosses (I) again the coordinates become real again and finally go back to their original values. All this makes $S$ a topological singularity of the functions $u_1^*(d,N)$ and $ u_2^*(d,N)$. ![Two different paths in the $(N,d)$ plane.
Path A encloses $S$ and is such that the coordinates of the FP $C_+^{{\rm FD}}$ do not go back to their original values after a trip along this path, at variance with path B. []{data-label="paths"}](path.eps){width="0.7\linewidth"} Let us remark at this stage that, before drawing any conclusion from the existence of a topological singularity, one faces a very unusual property of the mapping from the plane $(N,d)$ to the FP coupling-constant space $(u_1^*, u_2^*)$, which makes the functions $u_1^*(N,d)$ and $u_2^*(N,d)$ multivalued. While one cannot a priori discard the possibility of such a behavior of the functions $u_1^*(N,d)$ and $u_2^*(N,d)$, it is tempting to attribute it to the coexistence, at a given $(d,N)$, of FPs that are identical to those found within the $\varepsilon=4-d$ expansion and FPs that are artefacts of the FD approach. Let us now draw the full consequences of the striking behavior described above. Fixed point at the upper critical dimension. ============================================ We now present our last argument in favor of the spurious character of the FPs obtained within the FD approach for small $d$ and $N$. It is based, to a large extent, on the existence of the singularity $S$ in the $(u_1^*,u_2^*)$ plane, which leads to a striking property of the field theory describing frustrated magnets. Let us first recall some basic features of the description of the long-distance physics of a lattice system by a continuous field theory. The most important ingredient in this construction is the choice of a low-energy effective Hamiltonian in terms of the order parameter $\phi$. This choice implies the selection of a finite number of terms — the $\phi^2$ and $\phi^4$ terms for second-order phase transitions — among the infinite number of operators obtained in the Hubbard-Stratonovich derivation from the microscopic Hamiltonian.
This selection of the most relevant terms fully relies on the existence of an upper critical dimension where the theory is [*perturbatively*]{} infrared free, [*i.e.*]{} controlled by the Gaussian FP. Indeed, it is only under this condition that power counting makes sense, since it is based on the engineering dimension of the field, which neglects fluctuations. Perturbation theory can then be used since, by definition, it consists in an expansion around the Gaussian theory. Also, the perturbative results obtained in this way are reliable as long as the (infrared attractive) FP found this way (i) is connected to the Gaussian FP by the RG flow, (ii) lies in the Borel-summability region, and (iii) is not too far from the Gaussian FP, so that calculations performed at $L$ loops lead to converged results. Let us indicate that, because of point (i) above, [*if*]{} the theory is trivial in $d=4$, it is very probable that any non-trivial FP identified in $d=3$, once followed continuously from $d=3$ to $d=4$, becomes Gaussian in this dimension. This is in particular what underlies the validity of the $\varepsilon=4-d$ expansion. Let us now briefly discuss the question of the triviality of scalar theories in $d=4$ and the ensuing consequences for their perturbative analysis in any dimension below 4. We first emphasize that there is no rigorous proof of the triviality of scalar theories in $d=4$. However, (i) there is a large body of evidence of triviality, at least for the $O(N)$ models and in particular for the Ising model, and (ii) it is very likely that even if the scalar models were non-trivial in $d=4$, perturbation theory would not be able to reach the corresponding FP [@callaway88]. Thus, the most natural hypothesis is that the $O(N)\times O(2)$ theory is also trivial in $d=4$. An even weaker hypothesis, which we make and use in the following, is that this is the case when it is analyzed perturbatively.
All previous considerations and assumptions lead us to conclude that, very probably, any physical FP found [*perturbatively*]{} in any dimension must be Gaussian when followed continuously to $d=4$. Let us notice that a non-perturbative approach, performed by some of the present authors [@delamotte03] on the $O(N)\times O(2)$ theory, did not lead to any non-trivial FP in $d=4$, which supports our triviality hypothesis in $d=4$. Therefore, according to our hypotheses, a practical way to check whether a FP found at a given dimension, $d=3$ for instance, is a genuine FP or is just an artefact of perturbation theory is to follow it by continuity up to $d=4$ [@holovatch04; @dudka04]. If the FP survives as a non-Gaussian FP at this dimension, we consider it as spurious. It is important to realize that our criterion does not exclude FPs in $d=3$ whose coordinates become complex between $d=3$ and $d=4$, as long as they vanish in $d=4$. This is in particular the case for all FPs associated with the paths at fixed $N$ in the $(d,N)$ plane that cross (I) above $S$. Let us apply this criterion to analyze the FPs $P$ and $C_+^{{\rm FD}}$ that appear in the FD analysis of the cubic and frustrated models. We present our results in Fig.\[frustre3\], where we have displayed the coordinates $u^*$ and $u_1^*$ associated with these FPs as functions of $d$ at fixed $N$. Manifestly, they both survive everywhere above $d=3$ and are [*not*]{} Gaussian in $d=4$. In the cubic case, this is true for all values of $N$ for which $P$ exists. In the frustrated case, this is true for $N$ typically below 7. According to our criterion, $P$ is, as expected, always found to be spurious while $C_+^{{\rm FD}}$ is spurious only for $N<7$. We thus conclude, in the frustrated case, that the FPs found in $d=3$ for $N=2,3$ are spurious.
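The criterion can be stated in a one-coupling caricature (the $\beta$ functions below are purely illustrative): the non-trivial root of $\beta(u)=-(4-d)\,u+u^2$ merges with the Gaussian FP as $d\to4$ and passes the test, while the root of the deformed function $\beta(u)=-(4-d)\,u+u^2-c$ with $c>0$ — mimicking an artefact that does not factor out $u$ — survives as a non-Gaussian FP in $d=4$ and is flagged as spurious:

```python
import numpy as np

def fp_genuine(d):
    # Non-trivial root of beta(u) = -(4-d)*u + u**2: u* = 4 - d.
    return 4.0 - d

def fp_artefact(d, c=0.25):
    # Positive root of beta(u) = -(4-d)*u + u**2 - c (illustrative artefact).
    eps = 4.0 - d
    return 0.5 * (eps + np.sqrt(eps**2 + 4.0 * c))

for d in (3.0, 3.5, 4.0):
    print(d, fp_genuine(d), fp_artefact(d))
# fp_genuine -> 0 as d -> 4 (Gaussian: passes the criterion);
# fp_artefact(4.0) = 0.5   (non-Gaussian in d = 4: flagged as spurious).
```

Following the root continuously in $d$, rather than recomputing it independently in each dimension, is what allows the criterion to tolerate coordinates that transiently become complex on the way to $d=4$.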
![The $u^*$ coordinate of the FP $P$ of the cubic model ($N=2$, upper curve) and the $u_1^*$ coordinate of the FP $C_+^{{\rm FD}}$ of the frustrated model ($N=3$, lower curve) as functions of $d$.[]{data-label="frustre3"}](ucubfrust.eps){width="0.6\linewidth"}

Let us emphasize that when $d$ approaches 4 the coordinates of the FPs $P$ and $C_+^{{\rm FD}}$ for $N<7$ become large and no longer belong to the region of Borel summability, so that they cannot be determined accurately, as is the case for part (II) of the curve $N_c(d)$. One could thus naively conclude that it is not possible to safely decide whether or not these FPs are Gaussian in $d=4$. This is actually not the case, for at least two reasons. First, if these FPs [*were*]{} Gaussian in $d=4$, their coordinates just below this dimension would be extremely small, and their existence as Gaussian FPs in $d=4$ could be safely established within perturbation theory even without any resummation procedure. Since for $d$ just below 4 no such FPs close to the Gaussian one are found in perturbation theory, their non-Gaussian character in $d=4$ is beyond doubt. Second, in the frustrated case, the non-Gaussian character of the $C_+^{{\rm FD}}$ FP is a clear consequence of the existence of (II) which, itself, relies on the existence of the singularity $S$. Indeed, following a FP $C_+^{{\rm FD}}$ along a path starting in $d=3$, going to $d=4$ and crossing $N_c^{{\rm FD}}(d)$ [*above*]{} $S$, the coordinates of $C_+^{{\rm FD}}$ become complex at $d=d_c(N)$ and both the real and imaginary parts of $u^*_1$ and $u^*_2$ go to zero at $d=4$, where it is thus a Gaussian FP. If, on the contrary, the path crosses $N_c^{{\rm FD}}(d)$ [*below*]{} $S$, $C_+^{{\rm FD}}$ changes from a stable focus to an unstable one at $d=d_c(N)$ and does not go to the Gaussian FP in $d=4$. The singularity $S$ is therefore responsible for the non-Gaussian character of $C_+^{{\rm FD}}$ in $d=4$.
Thus, even if the coordinates of $C_+^{{\rm FD}}$ for $N<7$ become large for $d$ close to 4, its non-Gaussian character in this dimension is beyond doubt. Note finally that it is tempting to follow this FP $C_+^{{\rm FD}}$ [*above*]{} $d=4$, where there exist rigorous proofs of the triviality of the scalar $\phi^4$ field theory [@aizenmann81]. Indeed, as suggested by Fig.\[frustre3\], the FP $C_+^{{\rm FD}}$ apparently survives at a finite distance above $d=4$. However, this last fact must be taken with great caution, since $C_+^{{\rm FD}}$ is then deep outside the region of Borel summability and one no longer has any control over where the FP really lies.

Conclusion.
===========

It appears from our study that the FPs $C_+^{{\rm FD}}$ identified in the FD approach are very likely spurious. The transition in frustrated magnets should thus be of (possibly weak) first order, in agreement with the NPRG and $\varepsilon$-expansion approaches. It remains to explain the failure of the resummation procedure used in the FD approach, Eq.(\[resummation\]). As already emphasized, this procedure relies on the hypothesis that resumming with respect to $u_1$, while keeping a polynomial structure in $u_2$, is sufficient. Alternatively, a resummation of the series with respect to the [*two*]{} coupling constants could be required to obtain reliable results (see for instance [@alvarez00] for the randomly diluted Ising model). Postponing these considerations to a future publication [@delamottenext], we assume that the use of Eq.(\[resummation\]) as such is justified. Then, a possible origin of the failure of the resummation procedure could be that the series considered are not known at large enough order to reach the asymptotic behavior. In this case there would be no reason to fix the parameter $a$ at its asymptotic value $a=1/2$, and one would have to vary it, together with $b$ and $\alpha$, to optimize the results [@mudrov98c].
We display in Fig.\[nc\] the curves $N_c^{{\rm FD}}(d)$ for different values of $a$. The part corresponding to large values of $N_c$, typically $N_c\gtrsim 7$, is almost insensitive to variations of $a$, whereas this is clearly not the case for smaller values of $N_c$. In particular, for sufficiently large values of $a$, typically $a\geq1.5$, the S-like part is pushed below $d=3$, so that $N_c^\varepsilon(d)$, $N_c^{{\rm FD}}(d)$ and $N_c^{{\rm NPRG}}(d)$ are then compatible everywhere for $3<d<4$. Let us also notice in this respect that, for $a=1.3$, the shape of the curve $N_c^{{\rm FD}}(d)$ is even compatible with the results obtained in the massive scheme in $d=3$, in which one finds FPs for all values of $N$ except in the range $5.7(3)<N<6.4(4)$ [@calabrese03b]. This suggests that the two FD methods (the massive scheme in $d=3$ and $\overline{\hbox{MS}}$ without $\varepsilon$-expansion) are in fact compatible but suffer from the same convergence problems. Thus, under our hypothesis, all qualitative differences between the different approaches disappear, and the problem would boil down to a question of the order of computation.

![Three curves $N_c^{{\rm FD}}(d)$ for different values of the parameter $a$ (from right to left $a=0.5, 1.3, 1.5$) and the curve $N_c^{\varepsilon}(d)$. The parameters $b$ and $\alpha$ are $b=10$ and $\alpha=1$. The parts of the curves below the black dots correspond to a regime of Borel non-summability. []{data-label="nc"}](courbesa.eps){width="0.8\linewidth"}

Finally, note that our present considerations surely pertain to the case of frustrated magnets in $d=2$ [@calabrese02]. Indeed, we have checked that the FP found in $d=2$ is continuously related to the FP $C_+^{{\rm FD}}$ in $d=3$, which makes its existence doubtful. Our conclusions could also apply in other situations where FPs that have no counterpart in the $\varepsilon$-expansion are found, as is the case, for instance, in QCD at finite temperature [@basile04].
We wish to thank P. Azaria, P. Calabrese, R. Folk, R. Guida and J. Zinn-Justin for useful discussions. The work of Yu.H. was supported in part by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung, Project P19583. We acknowledge the CNRS-NAS Franco-Ukrainian bilateral exchange program.

References. {#references. .unnumbered}
===========

Delamotte B, Mouhanna D and Tissier M 2004 [*Phys. Rev. B*]{} [**69**]{} 134413
Antonenko S A, Sokolov A I and Varnashev K B 1995 [*Phys. Lett. A*]{} [**208**]{} 161
Holovatch [Yu]{}, Ivaneyko D and Delamotte D 2004 [*J. Phys. A*]{} [**37**]{} 3569
Calabrese P and Parruccini P 2004 [*Nucl. Phys. B*]{} [**679**]{} 568
Calabrese P, Parruccini P, Pelissetto A and Vicari E 2004 [*Phys. Rev. B*]{} [**70**]{} 174439
Pelissetto A, Rossi P and Vicari E 2001 [*Phys. Rev. B*]{} [**63**]{} 140414(R)
Itakura M 2003 [*J. Phys. Soc. Jap.*]{} [**72**]{} 74
Peles A, Southern B W, Delamotte B, Mouhanna D and Tissier M 2004 [*Phys. Rev. B*]{} [**69**]{} 220408(R)
Bekhechi S, Southern B, Peles A and Mouhanna D 2006 [*Phys. Rev. E*]{} [**74**]{} 016109
Quirion G, Han X, Plumer M L and Poirier M 2006 [*Phys. Rev. Lett.*]{} [**97**]{} 077202
Tissier M, Delamotte B and Mouhanna D 2000 [*Phys. Rev. Lett.*]{} [**84**]{} 5208
Parisi G 1980 [*J. Stat. Phys.*]{} [**23**]{} 49
Zinn-Justin J 1989 [*Quantum Field Theory and Critical Phenomena*]{} 3rd ed (New York: Oxford University Press)
Suslov I M 2005 [*J. Exp. Theor. Phys.*]{} [**100**]{} 1188
Kazakov D I, Tarasov O V and Shirkov D V 1979 [*Theor. Math. Phys.*]{} [**38**]{} 15
Le Guillou J C and Zinn-Justin J 1980 [*Phys. Rev. B*]{} [**21**]{} 3976
Mudrov A I and Varnashev K B 1998 [*Phys. Rev. E*]{} [**58**]{} 5371
Garel T and Pfeuty P 1976 [*J. Phys. C: Solid St. Phys.*]{} [**9**]{} L245
Bailin D, Love A, and Moore M A [*J. Phys. C: Solid State Phys.*]{} [**10**]{} 1159
Yosefin M and Domany E 1985 [*Phys. Rev.
B*]{} [**32**]{} 1778
Schloms R and Dohm V 1987 [*Europhys. Lett.*]{} [**3**]{} 413
2006 [*unpublished*]{}
2000 [*Phys. Rev. B*]{} [**62**]{} 12195 err. ibid. [**63**]{}, 189901 (2001)
Kleinert H and Schulte-Frohlinde V 1995 [*Phys. Lett. B*]{} [**342**]{} 284
Carmona J M, Pelissetto A and Vicari E 2000 [*Phys. Rev. B*]{} [**61**]{} 15136
Calabrese P, Pelissetto A and Vicari E 2003 [*Phys. Rev. B*]{} [**67**]{} 024418
Tissier M, Mouhanna D, Vidal J and Delamotte B 2002 [*Phys. Rev. B*]{} [**65**]{} 140402
Itakura M 1999 [*Phys. Rev. B*]{} [**60**]{} 6558
Callaway D J E 1988 [*Phys. Rep.*]{} [**167**]{} 241
Dudka M, [[Yu]{} Holovatch]{} and Yavors’kii T 2004 [*J. Phys. A*]{} [**37**]{} 10727
Aizenmann M 1981 [*Phys. Rev. Lett.*]{} [**47**]{} 1
Alvarez G, Martín-Mayor V and Ruiz-Lorenzo J J 2000 [*J. Phys. A*]{} [**33**]{} 841
Calabrese P, Parruccini P and Sokolov A I 2003 [*Phys. Rev. B*]{} [**68**]{} 094415
Calabrese P, Parruccini P and Sokolov A I 2002 [*Phys. Rev. B*]{} [**66**]{} 180403(R)
Basile F, Pelissetto A and Vicari E 2005 [*JHEP*]{} [**0502**]{} 044

[^1]: The change of stability of $C_+^{{\rm FD}}$ occurs as follows. For $d<d_c(N)$, $N<7.5$ and sufficiently close to (II), $C_+^{{\rm FD}}$ is a focus and is attractive inside a basin of attraction whose boundary is a limit cycle of the RG flow. This limit cycle is repulsive: inside it the RG flow converges to $C_+^{{\rm FD}}$; outside it, the RG flow diverges. When, at fixed $N$, the dimension $d$ is increased, this limit cycle shrinks and becomes a point on (II). When $d$ is increased further, the limit cycle re-appears, but now it is attractive and $C_+^{{\rm FD}}$ becomes a repulsive focus FP.
---
abstract: 'We have performed a systematic analysis of the dynamics of different galaxy populations in galaxy groups from the 2dFGRS. For this purpose we have combined all the groups into a single system, where velocities $v$ and radii $r$ are expressed adimensionally. We have used several methods to compare the distributions of relative velocities of galaxies with respect to the group centre for samples selected according to their spectral type (as defined by Madgwick et al., 2002), $b_j$ band luminosity and $B\!-\!R$ colour index. We have found strong segregation effects: spectral type I objects show a statistically narrower velocity distribution than that of galaxies with a substantial star formation activity (types II-IV). A similar behavior is observed for galaxies with colour index $B\!-\!R\!>\!1$ compared to galaxies with $B\!-\!R\!<\!1$. Bright ($M_{b}<-19$) and faint ($M_{b}>-19$) galaxies show the same kind of segregation, although it becomes insignificant once the sample is restricted to a given spectral type. These effects are particularly important in the central region ($R_p<0.5\;R_{vir}$) and do not have a strong dependence on the mass of the parent group. These trends show a strong correlation between the dynamics of galaxies in groups and the star formation rate, reflected both by spectral type and by colour index.'
author:
- |
    M. Lares, D. G. Lambas, A. G. Sánchez\
    Grupo de Investigaciones en Astronomía Teórica y Experimental (IATE), Observatorio Astronómico de Córdoba, UNC, Argentina.\
    Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina.\
title: Dynamical segregation of galaxies in groups and clusters
---

methods: statistical – galaxies: clusters: general – galaxies: kinematics and dynamics – galaxies: evolution

Introduction
============

Galaxy properties can be affected by several mechanisms in groups or clusters.
The fact that different galaxies can be modified to different extents could give rise to observable segregation effects. By studying these effects, we may obtain valuable information on the way in which these mechanisms act on galaxies and drive their evolution. The morphology–density relation [@oelmer; @dress80; @andreon] is the best-known segregation effect: early-type galaxies are more concentrated in denser regions, and lie closer to the centres of clusters, than late-type galaxies. More recently, the clustering properties of galaxies have been found to depend on the characteristics of spectral features [@julian; @mardom; @biviano; @madg03c] and on luminosity [@benoist; @norberg01; @norberg02; @stein; @adami; @girardi]. Several mechanisms have been proposed to explain galaxy transformations. Their relevance differs considerably according to the environmental conditions [@balogh], so their importance depends on the mass of the clusters, and perhaps on the history of galaxy clustering [@gnedin.b]. Some effects are more effective in dense regions such as rich clusters, whereas in groups of galaxies other mechanisms play the most important role. Ram pressure [@gunn-gott] can inhibit star formation by exhausting the gas present in galaxies that move fast through the intergalactic medium of rich clusters. Similarly, galaxy harassment [@moore] can produce significant changes in the star formation rate of a galaxy. These effects are not expected to be important in poor clusters or groups, where the velocity dispersion is lower; instead, effects such as mergers or tidal interactions can be dominant in these environments [@gnedin.a]. Besides affecting galaxy properties, such as star formation, luminosity and colour index, some of the physical processes listed above may also produce changes in the dynamics of a galaxy with respect to the cluster centre.
In turn, the efficiency of some of these mechanisms in producing significant changes on a galaxy depends on its dynamical behavior. For example, the effects of galaxy interactions are stronger for galaxies moving slowly with respect to the cluster centre. This suggests that the dynamical properties of galaxies in groups and clusters may be related to the star formation efficiency, colours or luminosities of galaxies [@menciff; @moore2]. Segregation effects of galaxy velocities in clusters are predicted theoretically [@menciff; @gnedin.b] and in semi–analytical models [@menci], and have been reported in rich clusters [@sodre]. The relation between the dynamical properties and the luminosity of a galaxy has been observed in rich clusters by, e.g., @whitmore [@adami] and @stein, who find evidence for velocity segregation. In agreement with these findings, theoretical studies [@fyf] and numerical simulations [@yepes] show similar trends. However, it is not clear how to interpret these results. Some authors propose different orbit shapes for galaxies with different morphologies or luminosities. In this scheme, early–type galaxies have quasi–isotropic orbits, while late–type galaxies move on nearly radial orbits [@biviano77; @adami]. However, other models have been proposed that contradict this statement [@amelia]. Theoretical works predict virialized systems with a Maxwellian velocity distribution [@saslaw; @ueda], so it has been proposed that early and late-type galaxies have Gaussian velocity distributions. However, observations in rich clusters do not support this hypothesis (e.g. Colless & Dunn, 1996). The purpose of this paper is to explore a possible difference in the dynamical behavior of galaxies with different spectral types, luminosities and colour indexes. The outline of this paper is as follows. In section 2 we describe the data sample used in our work, and in section 3 the method used in our analysis.
Section 4 presents the results of our search for velocity segregation in spectral type, luminosity and colour. Finally, in section 5 we present a discussion of our results and future perspectives.

The data
========

![Normalized velocity distribution functions of relative velocities for two samples of galaxies selected according to membership of low ($<10^{14}\;M_{\odot}$) and high ($>10^{14}\;M_{\odot}$) virial mass parent clusters. Smoothed curves are Gaussian fits centered at $v=0$.](fig1.eps "fig:"){width="\columnwidth"} \[gaussmass\]

In order to analyse the dynamics of different galaxy populations in galaxy systems, we have carried out a systematic search for segregation effects of galaxies in velocity space. We have used a group catalogue constructed from the final version of the 2dF Galaxy Redshift Survey [@colles], applying the same technique employed by @manuelyz to construct the group catalogue of the 2dFGRS 100K release [@folkes]. This sample comprises 40978 galaxies in 5568 groups. Virial masses, velocity dispersions and virial radii have been determined for the groups in this sample. A principal component analysis technique has been applied to all galaxies in the 2dFGRS by @madg02. This technique extracts the maximum possible spectral information with a minimum set of parameters, and thus offers a quantitative and efficient way to classify galaxies through a spectral index $\eta$. This index has a clear physical interpretation, since it is strongly related to galaxy morphology [@madg03a], and correlates with the equivalent width of the H$\alpha$ line and with the star birthrate parameter [@madg03b]. These spectral parameters measure the strength of absorption features produced by stars and the ISM, and the strength of nebular emission features, making it possible to estimate the relative contributions of different stellar populations.
Negative values of $\eta$ correspond to non-star-forming galaxies, usually early-type galaxies, whereas large values imply star formation features in the integrated spectra, typical of late-type galaxies. @madg02 defined four spectral types based on the shape of the distribution of $\eta$ for galaxies in the 2dFGRS: type I comprises all galaxies with $\eta<-1.4$; type II, galaxies with $-1.4<\eta<1.1$; type III, galaxies with $1.1<\eta<3.1$; and type IV, all galaxies with $\eta>3.1$. The 2dFGRS also contains photometric information in the APM $b_j$ band and in the SuperCOSMOS $b$ and $r$ bands. This is a useful tool to study the dependence of a possible velocity segregation on galaxy luminosity and colour. Absolute magnitudes are denoted $M_b$, $B$ and $R$. Distance-dependent quantities are calculated using a Hubble parameter $H=100\;{\rm km\,s^{-1}\,Mpc^{-1}}$.

Analysis
========

\[tabla\]

Table 1: Results of the velocity-distribution comparisons: sample definitions, restrictions, numbers of galaxies $N_1$ and $N_2$, KS significance $L_{KS}$ and probabilities $P_{\beta}$ and $P_{\Delta K}$.

| case | samples | restriction | $\log(\mathcal{M}_v/M_{\odot})$ | $r$ | $N_1$ | $N_2$ | $L_{KS}$ | $P_{\beta}$ | $P_{\Delta K}$ |
|------|---------|-------------|---------------------------------|-----|-------|-------|----------|-------------|----------------|
| 1 | $\eta<-1.4$ vs. $\eta>1.1$ | all luminosities | all masses | $<1.2$ | 8985 | 2561 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 2 | $\eta<-1.4$ vs. $\eta>-1.4$ | all luminosities | all masses | $<1.2$ | 8985 | 6066 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 3 | $\eta<-1.4$ vs. $\eta>1.1$ | all luminosities | all masses | $<0.5$ | 7234 | 1803 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 4 | $\eta<-1.4$ vs. $\eta>-1.4$ | all luminosities | all masses | $<0.5$ | 7234 | 4250 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 5 | $\eta<-1.4$ vs. $\eta>1.1$ | $M_b<-19$ | all masses | $<0.5$ | 4086 | 603 | $>99.9\%$ | $0.001$ | $<0.001$ |
| 6 | $\eta<-1.4$ vs. $\eta>1.1$ | $M_b>-19$ | all masses | $<0.5$ | 3148 | 1200 | $>99.9\%$ | $0.001$ | $0.004$ |
| 7 | $\eta<-1.4$ vs. $\eta>1.1$ | all luminosities | $<14$ | $<0.5$ | 2430 | 916 | $99.9\%$ | $0.014$ | $0.001$ |
| 8 | $\eta<-1.4$ vs. $\eta>1.1$ | all luminosities | $>14$ | $<0.5$ | 4804 | 887 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 9 | $M_B<-19$ vs. $M_B>-19$ | all types | all masses | $<0.5$ | 5770 | 5714 | $99.9\%$ | $0.001$ | $<0.001$ |
| 10 | $M_B<-19$ vs. $M_B>-19$ | all types | all masses | $>0.5$ | 1669 | 1898 | $85.0\%$ | $0.104$ | $0.176$ |
| 11 | $M_B<-19$ vs. $M_B>-19$ | $\eta<-1.4$ | all masses | $<0.5$ | 4086 | 3148 | $99.0\%$ | $0.052$ | $0.007$ |
| 12 | $M_B<-19$ vs. $M_B>-19$ | $\eta>-1.4$ | all masses | $<0.5$ | 1684 | 2566 | $91.2\%$ | $0.018$ | $0.136$ |
| 13 | $M_B<-19$ vs. $M_B>-19$ | $\eta>1.1$ | all masses | $<0.5$ | 603 | 1200 | $54.0\%$ | $0.395$ | $0.280$ |
| 14 | $M_B<-19$ vs. $M_B>-19$ | all types | $<14$ | $<0.5$ | 1841 | 2660 | $99.9\%$ | $<0.001$ | $<0.001$ |
| 15 | $M_B<-19$ vs. $M_B>-19$ | all types | $>14$ | $<0.5$ | 3929 | 3054 | $94.1\%$ | $0.016$ | $0.019$ |
| 16 | $CI<1$ vs. $CI>1$ | all types, luminosities | all masses | $<0.5$ | 7291 | 4180 | $>99.9\%$ | $<0.001$ | $<0.001$ |
| 17 | $CI<1$ vs. $CI>1$ | all types, luminosities | all masses | $>0.5$ | 1904 | 1663 | $94.0\%$ | $0.276$ | $0.373$ |
| 18 | $CI<1.2$ vs. $CI>1.2$ | $\eta<-1.4$ | all masses | $<0.5$ | 3583 | 3651 | $90.5\%$ | $0.158$ | $0.239$ |
| 19 | $CI<0.7$ vs. $CI>0.7$ | $\eta>1.1$ | all masses | $<0.5$ | 805 | 998 | $66.2\%$ | $0.369$ | $0.320$ |
| 20 | $CI<1$ vs. $CI>1$ | $\eta>1.1$ | all masses | $<0.5$ | 136 | 1666 | $45.3\%$ | $0.281$ | $0.197$ |
| 21 | $CI<1$ vs. $CI>1$ | $\eta<-1.4$ | all masses | $<0.5$ | 6361 | 866 | $99.8\%$ | $0.018$ | $<0.000$ |
| 22 | $CI<1$ vs. $CI>1$ | $M_b<-19$ | all masses | $<0.5$ | 4121 | 1639 | $>99.9\%$ | $0.002$ | $<0.001$ |
| 23 | $CI<1$ vs. $CI>1$ | $M_b>-19$ | all masses | $<0.5$ | 3170 | 2541 | $>99.9\%$ | $0.037$ | $<0.004$ |
| 24 | $CI<1$ vs. $CI>1$ | all types, luminosities | $<14$ | $<0.5$ | 2341 | 2154 | $>99.9\%$ | $0.013$ | $<0.001$ |
| 25 | $CI<1$ vs. $CI>1$ | all types, luminosities | $>14$ | $<0.5$ | 4950 | 2026 | $>99.9\%$ | $0.001$ | $<0.001$ |

Ensemble group
--------------

In order to make a suitable analysis of the data, we have combined all the groups into a single system.
In this ensemble, velocities $v$ and radii $r$ are expressed adimensionally. The line-of-sight velocity of each object, $\Delta V$, relative to the group average velocity, is scaled by the velocity dispersion $\sigma$ of the host group. In a similar fashion, the galaxy projected distance to the group centre, $R_p$, is scaled by the group virial radius $R_v$. This procedure allows for a simple and improved statistical treatment of the data and, assuming isotropy of the groups with respect to their centres, maintains the spatial and dynamical properties of the ensemble group. Similar procedures have been implemented, e.g., by @adami [@stein] and @amelia. The virial radius is estimated from the projected distances of members to the group centre [@manuelyz]. Uncertainties in this value, as well as in the determination of the centre of the system (derived from the unweighted mean of group member positions), can be larger for groups with few galaxies. Accordingly, we have restricted all our analysis to groups with more than 10 members to reduce these uncertainties. In section \[uncertain\] we analyse the reliability of our results with respect to this choice. In order to explore possible differences in the velocity distribution of galaxies between groups and clusters, we have divided the total sample of galaxies according to membership into low ($<10^{14}\;M_{\odot}$) and high ($>10^{14}\;M_{\odot}$) virial mass systems. In Fig. 1 we show the resulting velocity distribution functions $f(v)$, where it can be appreciated that both sets present remarkably similar distributions. This fact allows us to apply the same treatment to all groups irrespective of their mass. We notice, however, that the shape of this distribution is influenced by uncertainties in the galaxy redshift determinations of the 2dFGRS; the rms uncertainty is approximately 85 km/s [@colles]. In section \[uncertain\] we explore the effect of these uncertainties on our analysis.
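The stacking procedure above can be sketched as follows. This is a minimal illustration on mock data; the dictionary keys (`"V"`, `"Rp"`, `"Rvir"`) are hypothetical field names chosen for the example, not the catalogue's actual column names:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_groups(groups):
    """Combine groups into a single adimensional ensemble:
    v = (V_los - <V>_group) / sigma_group,  r = R_p / R_vir.
    `groups` is a list of dicts with (hypothetical) keys 'V', 'Rp', 'Rvir'."""
    v_all, r_all = [], []
    for g in groups:
        V = np.asarray(g["V"], dtype=float)
        dV = V - V.mean()          # line-of-sight velocity relative to the group mean
        sigma = V.std(ddof=1)      # group velocity dispersion
        v_all.append(dV / sigma)
        r_all.append(np.asarray(g["Rp"], dtype=float) / g["Rvir"])
    return np.concatenate(v_all), np.concatenate(r_all)

# Two mock groups with very different dispersions and sizes map onto
# comparable adimensional coordinates after the scaling.
g1 = {"V": rng.normal(12000.0, 300.0, 40), "Rp": rng.uniform(0.0, 0.8, 40), "Rvir": 0.8}
g2 = {"V": rng.normal(45000.0, 900.0, 40), "Rp": rng.uniform(0.0, 1.5, 40), "Rvir": 1.5}
v, r = stack_groups([g1, g2])
print(round(float(v.std()), 3), round(float(r.max()), 3))  # both of order unity
```

After this step the two mock groups are statistically interchangeable, which is what justifies treating the ensemble as a single system.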
Analysis of the ensemble group {#ensamble}
------------------------------

Our analysis is based on the comparison of the normalized velocity distributions of galaxies in different samples, selected by spectral type ($\eta$), colour index (CI) or luminosity (quantified by $M_b$). In order to test for a difference in the dynamical behavior of two given samples of galaxies, we have adopted several procedures to compare the velocity distribution functions $f(v)$. The first is the Kolmogorov–Smirnov (KS) test [@press-numrec], which allows one to disprove, at a given level of significance, the null hypothesis that two distributions were drawn from the same population distribution function. Second, we have calculated the difference $\Delta K$ between the values of the kurtosis of the two distribution functions. This parameter provides an objective characterisation of the velocity distributions, since it is related to the relative fraction of galaxies with low velocity ($v<1$) in each subsample. Third, we have binned the velocities in $|v|$, using 5 bins; the uncertainties in each bin have been determined using the bootstrap resampling technique. We define a parameter $\beta$ as the difference between the first bins of each distribution, with its uncertainty estimated by propagating the individual errors of each bin. Finally, we have constructed 1000 new samples drawn from the original data but reassigning spectral types, colour indexes or luminosities; the distributions of these parameters mimic those of the original observed sample. For each of these random samples we have determined $\beta$ and $\Delta K$. The resulting distributions of these parameters provide an estimate of the significance of the values observed in the real data, since they can be used to calculate the probabilities $P_{\beta}$ and $P_{\Delta K}$ of obtaining, in the random samples, values of $\beta$ and $\Delta K$ greater than the observed ones.
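The statistics described above (the KS comparison, a low-velocity-fraction parameter in the spirit of $\beta$, and the label-reshuffling significance estimate) can be sketched on mock data as follows. This is a simplified, numpy-only illustration, not the paper's actual pipeline; in particular $\beta$ is computed here directly as the difference of $|v|<1$ fractions rather than from binned, error-propagated histograms:

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis(x):
    """Sample excess kurtosis, used for the Delta-K statistic."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 4) / np.mean((x - m) ** 2) ** 2 - 3.0

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max ECDF distance)."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

def beta_statistic(a, b):
    """Difference of the fractions of low-velocity (|v| < 1) members."""
    return np.mean(np.abs(a) < 1.0) - np.mean(np.abs(b) < 1.0)

def permutation_pvalue(a, b, stat, n_perm=500):
    """Probability of exceeding the observed statistic when the labels
    (spectral type, colour, ...) are randomly reassigned."""
    observed = stat(a, b)
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if stat(perm[: a.size], perm[a.size :]) >= observed:
            hits += 1
    return hits / n_perm

# Mock ensemble samples: a narrow "type I" and a broader "type III-IV"
# normalized velocity distribution.
early = rng.normal(0.0, 0.8, 800)
late = rng.normal(0.0, 1.1, 400)
print("KS D   =", round(ks_statistic(early, late), 3))
print("beta   =", round(beta_statistic(early, late), 3))
print("DeltaK =", round(kurtosis(early) - kurtosis(late), 3))
print("P_beta =", permutation_pvalue(early, late, beta_statistic))
```

With such a clear difference in the mock dispersions, the reshuffled samples essentially never reach the observed $\beta$, mirroring the small $P_{\beta}$ values reported in Table 1.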
The resulting probabilities $P_{\beta}$, $P_{\Delta K}$ and the KS significance are shown in Table 1. As previously mentioned, we have resampled sets of data using the bootstrap technique in order to estimate the reliability of the results. Explicitly, for each pair of samples we have calculated the variance of $\beta$ and $\Delta K$ over the corresponding bootstrapped samples. This variance provides a reliable estimate of the uncertainty of the given parameter. The large size of the data set has allowed us to explore the results for different subsamples of the data, corresponding to different galaxy and group properties such as spectral type, luminosity, colour index, group centric distance and parent group virial mass. To achieve this goal we have applied the tests described above to the different subsamples.

Results
=======

![Normalized binned velocity distributions for early ($\eta<-1.4$) and late-type ($\eta>1.1$) galaxies within $R_p=0.5\;R_v$. Error bars have been calculated using the bootstrap resampling method. This plot corresponds to case $3$ in Table 1, which presents the strongest segregation.](fig2.eps "fig:"){width="\columnwidth"} \[STS01\]

![](fig3.eps) \[errs\]

Spectral type segregation {#STS}
-------------------------

![Normalized velocity distributions for bright ($M_b<-19$, solid line) and faint ($M_b>-19$, dashed line) galaxies within $r<0.5$ (case 9). The inner plot shows the results obtained for a sample with $r>0.5$ (case 10).](fig4.eps "fig:"){width="\columnwidth"} \[LUM01\]

We have analysed the distribution functions of the relative velocities for subsamples of different spectral type index, with no further restriction on galaxy luminosity or colour index. Figure 2 shows the velocity distributions $f(v)$ for type I and type III–IV galaxies, where the presence of a strong segregation can be clearly appreciated. The relative line-of-sight velocities of type I galaxies are statistically smaller than those of galaxies with a substantial star formation activity.
To compute the statistical significance of the difference between these two distributions, we have applied the three tests described in the previous section. According to the KS test, we can disprove, at a significance level greater than 99.9%, the hypothesis that the two distributions are drawn from the same parent distribution. The distributions of the parameters $\beta$ and $\Delta K$ obtained for 1000 random realizations are shown in Fig. 3; the arrows indicate the values obtained for the real data. According to these results, velocity segregation by spectral type is statistically very significant, with fractions of $69.5\%$ of early-type galaxies and $60.7\%$ of late-type galaxies within the group mean velocity dispersion ($v<1$), relative to the total number of galaxies in each subsample. These quantities may depend on the uncertainties in the determination of velocities; however, as stated in section 4.5, this fact does not affect the observed trends. Table 1 summarizes the results for spectral type segregation obtained by restricting the samples to a given galaxy luminosity and group centric distance, as well as to different parent group virial masses. If we restrict our analysis to the inner region of the groups ($r<0.5$), the segregation intensity is stronger, but it is similar for systems of different mass. These facts will be addressed in more detail in section \[globaldep\].

Luminosity segregation
----------------------

![Normalized velocity distributions for red ($B\!-\!R\!>\!1$, solid line) and blue ($B\!-\!R\!<\!1$, dashed line) galaxies. The inner plot shows the results obtained for a sample restricted to type I galaxies.](fig5.eps "fig:"){width="\columnwidth"} \[CLI01\]

We have searched for a possible velocity segregation by luminosity, applying the same methods as in section \[STS\]. We find a significant difference between the normalized relative velocity distributions of bright and faint galaxies, as can be appreciated in Fig. 4.
The luminosity cut adopted to define the two subsamples, $M_{b}=-19$, gives a similar number of galaxies in each of them. We have also considered the central ($r<0.5$) region of the groups, where it can be seen that the velocity segregation by luminosity is stronger (see Table 1). This trend is similar to the observed behavior of the segregation by spectral type, which is stronger in denser environments (see section \[STS\]), and will be discussed in more detail in section \[globaldep\]. Given the significant velocity segregation by spectral type, we have searched for luminosity segregation in subsamples restricted to type I and type III-IV galaxies. We find that the luminosity segregation signal is of lower significance in these cases, indicating that luminosity is not a primary parameter in defining the dynamics of galaxies in groups. This suggests that early spectral type galaxies, on average more luminous than late types, could account for the observed dependence on luminosity.

Colour segregation
------------------

Using the same procedure, we have also explored the relation between galaxy dynamics and colour index. We have considered the $B\!-\!R$ colour index provided in the 2dFGRS final data release for the galaxies in the group sample. Given the narrow range of redshifts in the group sample ($0.02\,\lesssim\,z\,\lesssim\,0.20$), a unique threshold is suitable to define a sample of red galaxies. The normalized velocity distributions for red ($B\!-\!R\!>\!1$) and blue ($B\!-\!R\!<\!1$) galaxies are shown in Fig. 5, where a significant segregation of velocity according to galaxy colour index can be clearly appreciated. Given the correlation between $B\!-\!R$ colour index and spectral index $\eta$, we have restricted our analysis to galaxies with low present star formation (type I), and to strongly star-forming galaxies (types III-IV). The results are shown in Table 1, and for type I objects in the small box of Fig. 5.
As can be appreciated, velocity segregation has a lower level of significance in the latter case.

Dependence of segregation on global properties {#globaldep}
----------------------------------------------

![Dependence of the kurtosis differences $\Delta K$ on $R_p/R_{vir}$. The dashed region displays the rms of the distribution of $\Delta K$ obtained for the random samples described in section \[ensamble\]. Error bands are calculated using the bootstrap resampling technique.](fig6.eps "fig:"){width="\columnwidth"} \[STS.rad\]

![The same as Fig. 6, but for the dependence of $\Delta K$ on the virial mass of the parent group. The dashed region displays the rms of the distribution of $\Delta K$ obtained for the random samples. Error bands are calculated using the bootstrap resampling technique.](fig7.eps "fig:"){width="\columnwidth"} \[STS+CI.mass\]

In the previous sections we have analysed the different dynamics of galaxies according to their intrinsic properties, namely spectral type, luminosity and colour index. Our results suggest that the segregation effects are stronger in denser environments, although there is no indication of a strong dependence on the parent group mass. In this section we explore in more detail the dependence of our results on galaxy group centric distance and parent group virial mass. In Fig. 6 we show the dependence of the kurtosis differences $\Delta K$ on $r$. The dashed region displays the rms of the distribution of $\Delta K$ obtained for the random samples described in section \[ensamble\]. It can be clearly appreciated that velocity segregation in the central regions ($r<0.5$) is particularly important for spectral type and colour index, but smoothly decreases at larger group centric distances. In a similar fashion, we have analysed the dependence of velocity segregation on parent group virial mass. Our results for galaxies with $r<0.5$ are shown in Fig. 7.
It can be appreciated that there is no strong dependence of the segregation effects on the mass of the parent group. This is an important fact suggesting that the mechanisms that generate the observed difference in the dynamics, according to the galaxy star formation activity, are efficient on a wide range of masses. Analysis of uncertainties {#uncertain} ------------------------- The uncertainty in redshift determinations in the 2dFGRS amounts to an rms of $85\,{\rm km\,s^{-1}}$ [@colles]. This is quite a large figure, so its effects on our statistical analysis deserve particular attention. To account for this uncertainty we have convolved each line of sight velocity measurement with a Gaussian with dispersion $\epsilon=85\,{\rm km\,s^{-1}}$. The resulting smoothed histograms are suitable to compute the parameters characterizing the differences of the distribution of velocities of two given samples of galaxies taking into account line of sight velocity errors. As an example of this analysis, in Fig. 8 we show the results for case 3, where it can be appreciated that the binned and the smoothed distributions show the same behavior. Moreover, we have computed the parameters $\beta$ and $\Delta K$ for the cases shown in Table 1, finding similar results which show the stability of our analysis against redshift measurement errors. As a test of the stability of our results against the number of group members, we have also analysed a sub-sample of groups restricted to have at least 20 members. We obtain similar and even more prominent segregation effects for the same galaxy properties analysed previously. Also, in order to test the effects of possible erroneous determination of the centre of the groups in our results, we have repeated the analysis for re-centered groups. These new centres were calculated using only galaxies within 1.2 times the virial radius and 2.5 times the velocity dispersion, which would provide a better estimate of group centres for elongated or clumpy systems.
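The velocity smoothing applied at the beginning of this subsection amounts to replacing each measured line of sight velocity by a Gaussian kernel of dispersion $\epsilon=85\,{\rm km\,s^{-1}}$ and summing the kernels. A minimal sketch, with an invented toy group and velocities kept in ${\rm km\,s^{-1}}$ rather than in normalized units (the grid and group dispersion are illustrative choices):

```python
import numpy as np

def smoothed_distribution(velocities, grid, eps=85.0):
    """Sum a unit-area Gaussian of dispersion eps (km/s) centred on
    each line-of-sight velocity; returns the normalized density."""
    v = np.asarray(velocities, dtype=float)[:, None]
    kernels = np.exp(-0.5 * ((grid[None, :] - v) / eps) ** 2)
    return kernels.sum(axis=0) / (len(v) * eps * np.sqrt(2.0 * np.pi))

# Toy group: 200 line-of-sight velocities with dispersion 400 km/s.
rng = np.random.default_rng(0)
v_los = rng.normal(0.0, 400.0, size=200)
grid = np.linspace(-2000.0, 2000.0, 401)
density = smoothed_distribution(v_los, grid)
```

The smoothed curve integrates to unity and can be compared directly with the binned histogram, as in Fig. 8.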
The results for the re-centered groups are again similar, showing that our conclusions are not strongly dependent on the group centre definition. Discussion ========== ![Normalized velocity distributions convolved with a Gaussian of width $85\;{\rm km\,s^{-1}}$. The inset box shows the binned case for comparison.](fig8.eps "fig:"){width="\columnwidth"} \[smooth\] We find a statistically significant difference of the distributions of group centric line of sight velocities, normalized to the group mean velocity dispersion, for samples of galaxies selected by spectral type, luminosity or colour index. Given the large data set analysed, we have been able to investigate the dependence of this velocity segregation on group properties and galaxy–group centric distance. Spectral type I objects, corresponding to passively star forming galaxies, show a statistically narrower velocity distribution than that of galaxies with substantial star formation activity (types III-IV). Similarly, samples of galaxies with greater colour index ($B\!-\!R\!>\!1$) have a larger fraction of small velocities ($v<1$) compared to galaxies with $B\!-\!R\!<\!1$. These two trends show a strong correlation between galaxy dynamics in groups and star formation, reflected both by spectral type and by colour index. The velocity distribution of luminous galaxies (typically brighter than $M_b=-19$) also shows a larger fraction of small velocities, although we notice that once the galaxies are restricted to a given spectral type, the segregation is less significant. Thus, luminosity is not likely to be a primary parameter determining galaxy dynamics in groups. Our results suggest that the observed luminosity segregation might be related to the fact that the slowest objects, of early spectral type, are on average more luminous than star forming galaxies. There are several mechanisms that may produce dynamical segregations of galaxies in groups and clusters.
Ram pressure can effectively remove the existing gas in the galaxies and transform star forming objects into passively star forming ones. This mechanism affects most strongly those galaxies with large velocities with respect to the intra-cluster medium, and so it should produce a dynamical segregation with trends opposite to the observed one. Moreover, since our analysis concerns groups and small clusters of galaxies, ram pressure is not expected to be significant. Our results indicate that the velocity segregation effects are nearly independent of group virial mass. This fact also suggests that ram pressure is not important, since its effects are stronger in more massive systems, which have higher velocity dispersions. Thus, it is unlikely that ram pressure may explain the observed correlations. Mergers, on the other hand, are effective in generating spheroidal objects with a low star formation rate. Galaxy encounters are expected to lower the cluster-centric velocities of the galaxies with respect to their values prior to the merger event, so they can act effectively in generating the observed trends. In a similar fashion, tidal interactions may effectively remove a substantial amount of gas from the disks of galaxies, and are thus also effective in truncating star formation. Early type objects generated through this mechanism would be biased to smaller velocities, since interactions are expected to be more effective in slow encounters. Furthermore, these are generally brighter and redder objects. It has been suggested that morphological transformation of galaxies takes place in systems with densities above a threshold larger than the density of groups and poor clusters of galaxies [@moore; @gray]. Our results indicate that dynamical segregation of passively star forming galaxies is a generic feature of systems of galaxies, irrespective of global properties.
However, the fact that segregation effectively occurs in the inner regions of groups indicates that density might be an important parameter in determining the observed effects. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), the Secretaría de Ciencia y Técnica (UNC) and the Agencia Córdoba Ciencia. [99]{} Adami C., Biviano A., Mazure A., 1997, A&A, 331, 439 Andreon S., 1996, A&A, 314, 763 Balogh M., Eke V., Miller C., Lewis I., Bower R., Couch W., Nichol R., Bland-Hawthorn J., et al., 2004, MNRAS, 348, 1355 Benoist C., Maurogordato S., da Costa L. N., Cappi A., Schaeffer R., 1996, ApJ, 472, 452 Biviano A., Katgert P., Mazure A., Moles M., den Hartog R., Perea J., Focardi P., 1997, A&A, 321, 84 Biviano A., Katgert P., Thomas T., Adami C., 2002, A&A, 387, 8 Colless M. M. et al., 2001, MNRAS, 328, 1039 Colless M., Dunn A. M., 1996, ApJ, 458, 435 Dressler A., 1980, ApJ, 236, 351 Domínguez M., Zandivarez A., Martínez H. J., Merchán M. E., Muriel H., Lambas D. G., 2002, MNRAS, 335, 825 Folkes S., et al., 1999, MNRAS, 308, 459 Fusco-Femiano R., Menci N., 1998, ApJ, 498, 95 Girardi M., Rigoni E., Mardirossian F., Mezzetti M., 2003, A&A, 406, 403 Gnedin O., 2003, ApJ, 582, 141 Gnedin O., 2003, ApJ, 589, 752 Gray M. E., Wolf C., Meisenheimer K., Taylor A., Dye S., Borch A., Kleinheinrich M., 2004, MNRAS, 347, L73 Gunn J. E., Gott J. R., 1972, ApJ, 176, 1 Madgwick D. S. et al., 2002, MNRAS, 333, 133 Madgwick D. S., 2003, MNRAS, 338, 197 Madgwick D. S., Somerville R., Lahav O., Ellis R., 2003, MNRAS, 343, 871 Madgwick D. S. et al., 2003, MNRAS, 344, 847 Martínez H. J., Zandivarez A., Domínguez M., Merchán M. E., Lambas D. G., 2002, MNRAS, 333, L31 Menci N., Cavaliere A., Fontana A., Giallongo E., Poli F., 2002, ApJ, 575, 18 Menci N., Fusco-Femiano R., 1996, ApJ, 472, 46 Merchán M.
E., Zandivarez A., 2002, MNRAS, 335, 216 Moore B., Katz N., Lake G., Dressler A., Oemler A., 1996, Nature, 379, 613 Moore B., Lake G., Katz N., 1998, ApJ, 495, 139 Norberg P., et al., 2001, MNRAS, 328, 64 Norberg P., et al., 2002, MNRAS, 332, 827 Oemler A. Jr., 1974, ApJ, 194, 1 Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1986, Numerical Recipes in Fortran 77. Cambridge University Press, Cambridge. Ramírez A. C., de Souza R. E., 1998, ApJ, 496, 693 Saslaw W. C., Chitre S. M., Itoh M., Inagaki S., 1990, ApJ, 365, 419 Sodré L., Capelato H., Steiner J., Mazure A., 1989, AJ, 97, 1279 Stein P., 1997, A&A, 317, 670 Ueda H., Itoh M., Suto Y., 1993, ApJ, 45, 7 Whitmore B., Gilmore D., Jones C., 1993, ApJ, 407, 489 Yepes G., Domínguez–Tenreiro R., del Pozo–Sanz R., 1991, ApJ, 373, 336
--- abstract: 'In this article, we study orbifold constructions associated with the Leech lattice vertex operator algebra. As an application, we prove that the structure of a strongly regular holomorphic vertex operator algebra of central charge $24$ is uniquely determined by its weight one Lie algebra if the Lie algebra has the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$ by using the reverse orbifold construction. Our result also provides alternative constructions of these vertex operator algebras (except for the case $A_{6,7}$) from the Leech lattice vertex operator algebra.' address: - 'Institute of Mathematics, Academia Sinica, Taipei 10617, Taiwan and National Center for Theoretical Sciences of Taiwan.' - 'Graduate School of Information Sciences, Tohoku University, Sendai 980-8579, Japan' author: - Ching Hung Lam - Hiroki Shimakura title: On orbifold constructions associated with the Leech lattice vertex operator algebra --- Introduction ============ In this article, we continue our program on the classification of holomorphic vertex operator algebras (VOAs) of central charge $24$ based on the $71$ possible weight one Lie algebra structures in Schellekens’ list ([@Sc93; @EMS]). This program can be divided into two parts: the existence part and the uniqueness part. Recently, the existence part, that is, constructions of $71$ holomorphic VOAs of central charge $24$, has been established ([@FLM; @Bo; @DGM; @Lam; @LS12; @Mi3; @SS; @LS16; @LS16b; @EMS; @LLin]). The remaining question is to prove that the holomorphic VOA structure is uniquely determined by its weight one Lie algebra if the central charge is $24$. Up to now, the uniqueness has been established for $57$ cases in [@DM; @LS15; @LS; @LLin; @KLL; @EMS2]. In [@LS], a general method for proving the uniqueness of a holomorphic VOA $V$ with $V_1\neq 0$ has been proposed (see Section 2.6 for details).
Roughly speaking, the main idea is to “reverse” the original orbifold construction and to reduce the uniqueness problem of holomorphic VOAs to the uniqueness of some conjugacy classes of the automorphism groups of some “known” holomorphic VOAs, such as lattice VOAs. In particular, explicit knowledge about the full automorphism groups of the “known” VOAs plays an important role in this method. In this article, we will establish the uniqueness for more cases. Our approach based on the Leech lattice VOA is motivated by the case $A_{6,7}$. In [@LS16b], a holomorphic VOA $V$ of central charge $24$ with $V_1=A_{6,7}$ was constructed by applying a ${\mathbb{Z}}_7$-orbifold construction to the Leech lattice VOA and a non-standard lift of an order $7$ isometry of the Leech lattice. The case $A_{6,7}$ is somewhat special because it is the only case in Schellekens’ list that contains an affine VOA of level $7$. If we apply an orbifold construction to a holomorphic VOA $V$ of central charge $24$ with $V_1=A_{6,7}$ and a suitable automorphism of finite order, it seems that the weight one Lie algebra of the resulting VOA is either abelian or isomorphic to $A_{6,7}$ (cf. [@LS16 Proposition 5.5]). Therefore, in some sense, the construction in [@LS16b] from the Leech lattice VOA is the only way for obtaining a holomorphic VOA with the weight one Lie algebra $A_{6,7}$ by using orbifold constructions. In order to apply the method in [@LS] and to prove the uniqueness for the case $A_{6,7}$, we will try to “reverse” the above orbifold construction associated with the Leech lattice VOA. Namely, we should define an automorphism $\sigma$ of a holomorphic VOA $V$ with $V_1=A_{6,7}$ so that the Leech lattice VOA is obtained by applying the orbifold construction to $V$ and $\sigma$. Indeed, we will define $\sigma$ as an order $7$ inner automorphism such that the restriction to the weight one Lie algebra $V_1$ is regular, that is, the fixed-point subalgebra is abelian.
Since the weight one Lie algebra of the Leech lattice VOA is abelian, it seems that such an automorphism of $V$ is the only possible choice. We can easily confirm the necessary conditions on $\sigma$ for the ${\mathbb{Z}}_7$-orbifold construction. In addition, we will show that the orbifold construction associated with $V$ and $\sigma$ actually gives the Leech lattice VOA; by Schellekens’ list, it is enough to prove that the weight one space of the resulting VOA has dimension $24$. This will be verified by using the dimension formulae on $V_1$ ([@Mon; @Mo; @EMS2]). In order to apply the formulae, we prove that for $1\le i\le 6$, the conformal weight of the irreducible $\sigma^i$-twisted $V$-module is at least $1$ by using a combinatorial argument similar to that in [@LS]. The remaining task is to prove the uniqueness of the conjugacy class of the automorphism $\varphi$ in the automorphism group ${\mathrm{Aut}\,}V_\Lambda$ of the Leech lattice VOA $V_\Lambda$ under the assumption that the orbifold construction associated with $V_\Lambda$ and $\varphi$ gives the original holomorphic VOA $V$. By [@DN], ${\mathrm{Aut}\,}V_\Lambda$ is an extension of the isometry group $O(\Lambda)$ of the Leech lattice $\Lambda$ by an abelian group $({\mathbb{C}}^\times)^{24}$. Hence we have $\varphi=\sigma\phi_g$ for some inner automorphism $\sigma\in ({\mathbb{C}}^\times)^{24}$ and a standard lift $\phi_g$ of an isometry $g$ of $\Lambda$. The assumption on $\varphi$ gives some constraints, such as dimensions or weights for $(V_\Lambda^\varphi)_1$-modules, on the weight one subspaces of the fixed-point subalgebra and the irreducible $\varphi$-twisted $V_\Lambda$-module. These constraints turn out to be sufficient to determine the conjugacy class of $g$ in $O(\Lambda)$ uniquely. In addition, we verify that, under the constraints above, $\sigma\in({\mathbb{C}}^\times)^{24}$ is unique up to conjugation by the centralizer $C_{O(\Lambda)}(g)$ of $g$ in $O(\Lambda)$.
Thus $\varphi$ belongs to the unique conjugacy class in ${\mathrm{Aut}\,}V_\Lambda$. By the structure of ${\mathrm{Aut}\,}V_\Lambda$, it is easier to handle this group than the automorphism groups of the other Niemeier lattice VOAs, which is an advantage of using the Leech lattice VOA. Indeed, the uniqueness of conjugacy classes of this group can be verified by calculations on the Leech lattice and its isometry group, the Conway group. The technical details on finite order automorphisms of (semi)simple Lie algebras in [@Kac] (cf. [@LS; @EMS2]) are not necessary in our argument. Instead, we use some known facts about the Conway group and the Leech lattice (cf. [@Wi83; @HL90; @HM16]). In addition to the case $A_{6,7}$, we also establish the uniqueness for six other cases in the same manner: $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ and $D_{6,5}A_{1,1}^2$. Our main result is as follows. \[Thm:main\] The structure of a strongly regular holomorphic vertex operator algebra of central charge $24$ is uniquely determined by its weight one Lie algebra if the Lie algebra has the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$. Our result also implies that holomorphic VOAs whose weight one Lie algebras have the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ and $D_{6,5}A_{1,1}^2$ can be constructed from the Leech lattice VOA by orbifold constructions. We remark that the uniqueness for the cases $A_{3,4}^3A_{1,2}$ and $A_{4,5}^2$ is proved in [@EMS2] by using the orbifold construction from the Niemeier lattices with root lattice $D_4^6$ and $A_4^6$, respectively. Hence there are still $9$ Lie algebras, including the $V_1=0$ case, for which the corresponding uniqueness result has not been established yet (see Remark \[R:Nils\] for explicit types).
The organization of the article is as follows: In Section 2, we review some preliminary results about integral lattices, Lie algebras and VOAs. In Section 3, we review some facts about conjugacy classes of the Conway group and sublattices of the Leech lattice; these facts are verified by the computer algebra system MAGMA ([@MAGMA]). In Section 4, we review some basic properties of lattice VOAs, their automorphism groups and irreducible twisted modules. We also discuss (standard) lifts of isometries of a lattice in the automorphism group of the lattice VOA. In Section 5, we prove the uniqueness of certain conjugacy classes of the automorphism group of the Leech lattice VOA under some assumptions on the fixed-point subspaces and irreducible twisted modules. The proofs are based on the results in Section 3. In Section 6, we prove the main theorem by using the reverse orbifold construction (Section 2.6) and the results in Section 5. [**Notations**]{}

- $(\cdot|\cdot)$: the positive-definite symmetric bilinear form of a lattice, or the normalized Killing form so that $(\alpha|\alpha)=2$ for any long root $\alpha$.
- $\langle\cdot|\cdot\rangle$: the normalized symmetric invariant bilinear form on a VOA $V$ so that $\langle { \mathds{1}}|{ \mathds{1}}\rangle=-1$, equivalently, $\langle a|b\rangle{ \mathds{1}}=a_{(1)}b$ for $a,b\in V_1$.
- $L^g$, $L_g$: for an isometry $g$ of $L$, $L^g=\{v\in L\mid g(v)=v\}$ and $L_g=\{v\in L\mid (v|L^g)=0\}$.
- $L_\mathfrak{g}(k,0)$: the simple affine VOA associated with a simple Lie algebra $\mathfrak{g}$ at level $k$.
- $L_{\mathfrak{g}}(k,\lambda)$: the irreducible $L_{\mathfrak{g}}(k,0)$-module with the highest weight $\lambda$.
- $\Lambda$: the Leech lattice.
- $\Lambda(g,p,q)$: $\{x+P_0^g(\Lambda)\in p\Lambda^g/P_0^g(\Lambda)\mid (x+P_0^g(\Lambda))(q)\neq\emptyset\}$ for $g\in O(\Lambda)$ and $p,q\in{\mathbb{Q}}$, where $(x+P_0^g(\Lambda))(q)=\{y\in x+P_0^g(\Lambda)\mid (y|y)=q\}$.
- $M^{(u)}$: the $\sigma_u$-twisted $V$-module constructed from a $V$-module $M$ by Li’s $\Delta$-operator.
- $\mu$: the canonical surjective map from ${\mathrm{Aut}\,}V_L$ to $O(L)$ when $L$ has no roots.
- $\mathcal{M}(S)$: the square matrix indexed by a set $S\subset{\mathbb{R}}^m$ with the entry $\frac{2(x|y)}{(x|x)}$, $x,y\in S$.
- $O(L)$: the isometry group of a lattice $L$.
- $\phi_g$: a standard lift of an isometry $g$ of $L$ to $O(\hat{L})$.
- $P_0^g$: the orthogonal projection from ${\mathbb{R}}\otimes_{\mathbb{Z}}L$ to ${\mathbb{R}}\otimes_{\mathbb{Z}}L^g$.
- $\Pi(M)$: the set of ${\mathfrak{h}}$-weights of a module $M$ for a reductive Lie algebra and a Cartan subalgebra ${\mathfrak{h}}$.
- $\sigma_u$: the inner automorphism $\exp(-2\pi\sqrt{-1}u_{(0)})$ of a VOA $V$ associated with $u\in V_1$.
- $V^{\sigma}$: the set of fixed-points of an automorphism $\sigma$ of a VOA $V$.
- $V[\sigma]$: the irreducible $\sigma$-twisted module for a holomorphic VOA $V$.
- $\tilde{V}_\sigma$: the VOA obtained by the orbifold construction associated with $V$ and $\sigma$.
- $X_{n,k}$: (the type of) a simple Lie algebra whose type is $X_n$ and level is $k$.

Preliminary =========== In this section, we will review basics about integral lattices, Lie algebras and VOAs. Even lattices {#S:lattice} ------------- Let $(\cdot|\cdot)$ be a positive-definite symmetric bilinear form on ${\mathbb{R}}^m$. A subset $L$ of ${\mathbb{R}}^m$ is called a *lattice* of rank $m$ if $L$ has a basis $e_1,e_2,\dots,e_m$ of ${\mathbb{R}}^m$ satisfying $L=\bigoplus_{i=1}^m{\mathbb{Z}}e_i$.
Let $L^*$ denote the dual lattice of a lattice $L$ of rank $m$, that is, $$L^*=\{v\in {\mathbb{R}}^m\mid ( v| L)\subset{\mathbb{Z}}\}.$$ A lattice $L$ is said to be *even* if $( v|v)\in2{\mathbb{Z}}$ for all $v\in L$, and is said to be *unimodular* if $L=L^*$. Note that any even lattice $L$ is integral, i.e., $(v|w)\in{\mathbb{Z}}$ for all $v,w\in L$. For $v\in {\mathbb{R}}^m$, we call $(v|v)$ the (squared) *norm* of $v$ and often denote it by $|v|^2$. \[Lem:lattice1\] Let $L$ be a lattice and let $M$ be a direct summand sublattice of $L$ as an abelian group. Let $N=\{v\in L\mid (v|M)=0\}$. Then for any $v\in M^*$, there exists $x\in L^*$ such that $v-x\in N^*$. In particular, $M^*$ is equal to the image of $L^*$ under the orthogonal projection from ${\mathbb{R}}\otimes_{\mathbb{Z}}L$ to ${\mathbb{R}}\otimes_{\mathbb{Z}}M$. Let $v\in M^*$. Since $M$ is a direct summand of $L$, there exists $x\in L^*$ such that $(v|\alpha)=(x|\alpha)$ for all $\alpha\in M$. Hence $(v-x|M)=0$. It follows from $(v|N)=0$ and $(x|N)\subset{\mathbb{Z}}$ that $v-x\in N^*$. Let $L$ be a lattice. A group automorphism $g$ of $L$ is called an *isometry* of $L$ if $(g(v)|g(w))=(v|w)$ for all $v,w\in L$. Let $O(L)$ denote the isometry group of $L$. For $g\in O(L)$, let $L^g$ denote the fixed-point set of $g$, that is, $L^g=\{v\in L\mid g(v)=v\}$. Clearly $L^g$ is a sublattice of $L$, and it is a direct summand of $L$ as an abelian group. Let $P_0^g$ denote the orthogonal projection from ${\mathbb{R}}\otimes_{\mathbb{Z}}L$ to ${\mathbb{R}}\otimes_{\mathbb{Z}}L^g$, i.e.,$$P_0^{g}=\frac{1}{n}\sum_{i=0}^{n-1}g^i,\label{Eq:OP}$$ where $n$ is the order of $g$. The following lemma is immediate from Lemma \[Lem:lattice1\]. \[L:P0\] Let $L$ be an even unimodular lattice and $g\in O(L)$. Then $P_0^g(L)=(L^g)^*$. 
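The averaging formula (\[Eq:OP\]) can be illustrated numerically. A minimal sketch with an order $4$ isometry of ${\mathbb{Z}}^3$ (an illustrative lattice only; ${\mathbb{Z}}^3$ is unimodular but not even, so this checks the projection formula rather than Lemma \[L:P0\] itself):

```python
import numpy as np

# Order-4 isometry of Z^3: rotate the first two coordinates by 90
# degrees and fix the third.
g = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]])
n = 4
assert np.array_equal(np.linalg.matrix_power(g, n), np.eye(3, dtype=int))

# P_0^g = (1/n) sum_i g^i, the orthogonal projection onto R (x) L^g.
P0 = sum(np.linalg.matrix_power(g, i) for i in range(n)) / n

# Here L^g = Z e_3, so P0 projects onto the third coordinate axis.
assert np.allclose(P0, np.diag([0.0, 0.0, 1.0]))
# P0 is idempotent and symmetric, as an orthogonal projection must be.
assert np.allclose(P0 @ P0, P0) and np.allclose(P0, P0.T)
```

The rotation block averages to zero because the powers of a primitive fourth root of unity sum to zero, leaving exactly the fixed direction.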
Regular automorphisms of simple Lie algebras {#regularauto} -------------------------------------------- Let $\mathfrak{g}$ be a simple finite-dimensional Lie algebra over the complex field ${\mathbb{C}}$. Take a Cartan subalgebra ${\mathfrak{h}}$. Let $(\cdot|\cdot)$ be the normalized Killing form on ${\mathfrak{g}}$ such that $(\alpha|\alpha)=2$ for any long root $\alpha$. Here we identify ${\mathfrak{h}}$ with ${\mathfrak{h}}^*$ via $(\cdot|\cdot)$. For a root $\beta$, the vector $\beta^\vee=\frac{2\beta}{(\beta|\beta)}$ is called a coroot. Fix the simple roots $\alpha_1,\dots,\alpha_m$. The fundamental weights $\Lambda_1,\dots,\Lambda_m$ (resp. the fundamental coweights $\Lambda_1^\vee,\dots,\Lambda_m^\vee$) are defined by $(\alpha_i^\vee|\Lambda_j)=\delta_{i,j}$ (resp. $(\alpha_i|\Lambda_j^\vee)=\delta_{i,j}$) for all $i,j$. A finite order automorphism of a (semi)simple finite-dimensional Lie algebra (over ${\mathbb{C}}$) is said to be *regular* if the fixed-point Lie subalgebra is abelian. \[Lem:Kacfpa\] The minimum order of a regular automorphism of a simple finite-dimensional Lie algebra is the Coxeter number $h$. Moreover, $$\exp(2\pi\sqrt{-1}{\mathrm{ad}}(\frac{1}{h}\tilde\rho))\label{Eq:reg}$$ is the unique regular automorphism of order $h$, up to conjugation, where $\tilde\rho=\sum_{i=1}^m\Lambda_i^\vee$. Next, we will consider a matrix associated with a finite set in ${\mathbb{R}}^m$: \[D:M\] Let $S$ be a finite set in ${\mathbb{R}}^m\setminus\{0\}$. We define $\mathcal{M}(S)$ to be the square matrix indexed by $S$ with the entry $\frac{2(x|y)}{(x|x)}$, $x,y\in S$. For finite sets $S,S'\subset{\mathbb{R}}^m\setminus\{0\}$, the matrices $\mathcal{M}(S)$ and $\mathcal{M}(S')$ are *equivalent* if $|S|=|S'|$ and there exists a permutation matrix $P$ such that $\mathcal{M}(S)=P^{-1}\mathcal{M}(S')P$. For example, if $S$ is the set of simple roots of a root system (resp. affine root system), then $\mathcal{M}(S)$ is a Cartan matrix (resp.
generalized Cartan matrix). For a ${\mathfrak{g}}$-module $M$, an element $\gamma\in{\mathfrak{h}}$ is called an ${\mathfrak{h}}$-weight of $M$, or simply a weight, if $\{x\in M\mid {\mathrm{ad}}(a)x=(a|\gamma)x,\ (a\in{\mathfrak{h}})\}\neq\{0\}$. Note that ${\mathfrak{h}}$-weights are also defined for semisimple Lie algebras in the same manner. Let $\Pi(M)$ denote the set of ${\mathfrak{h}}$-weights of $M$. If $\dim M<\infty$, then $\Pi(M)$ is a finite set in the ${\mathbb{Q}}$-vector space spanned by roots. This allows us to consider the matrix $\mathcal{M}(\Pi(M))$ if $0\notin \Pi(M)$ (see Definition \[D:M\]). Let $\sigma$ be a regular automorphism of ${\mathfrak{g}}$ of order $n$. Assume that $n$ is equal to the Coxeter number $h$ of ${\mathfrak{g}}$. Then, $\sigma$ is given by (\[Eq:reg\]), up to conjugation, and ${\mathfrak{h}}$ is a Cartan subalgebra of ${\mathfrak{g}}^\sigma$. For $i\in{\mathbb{Z}}$, set ${\mathfrak{g}}_{(i)}=\{x\in{\mathfrak{g}}\mid \sigma(x)=\exp((i/n)2\pi\sqrt{-1})x\}$. Then ${\mathfrak{g}}_{(0)}={\mathfrak{g}}^\sigma$ and ${\mathfrak{g}}_{(i)}$ is a finite-dimensional ${\mathfrak{g}}^\sigma$-module. \[L:CM\] Assume that $i$ is relatively prime to $n$. Then, the matrix $\mathcal{M}(\Pi({\mathfrak{g}}_{(i)}))$ is equivalent to the generalized Cartan matrix of the affine root system of ${\mathfrak{g}}$. By the assumption on $i$, there exists $j\in{\mathbb{Z}}$ such that $ij\equiv1\pmod n$. Then $\sigma^j$ is also a regular automorphism of order $n$. By Lemma \[Lem:Kacfpa\], $\sigma^j$ is conjugate to $\sigma$. Hence, replacing $\sigma$ by $\sigma^j$, we may assume that $i=1$. By (\[Eq:reg\]), $\Pi({\mathfrak{g}}_{(1)})$ is the set of roots $\alpha$ of ${\mathfrak{g}}$ such that $(\tilde{\rho}|\alpha)\in 1+h{\mathbb{Z}}$. Hence, $\Pi({\mathfrak{g}}_{(1)})$ consists of the simple roots and the negated highest root, and we obtain the result.
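Definition \[D:M\] and Lemma \[L:CM\] can be checked in the smallest interesting case: for the simple roots of $A_2$ together with the negated highest root, $\mathcal{M}(S)$ is the generalized Cartan matrix of affine $A_2$. A minimal numeric sketch using the standard coordinates of the $A_2$ roots in ${\mathbb{R}}^3$:

```python
import numpy as np

# Simple roots of A_2 in the standard coordinates of R^3,
# and the highest root theta = alpha_1 + alpha_2.
a1 = np.array([1.0, -1.0, 0.0])
a2 = np.array([0.0, 1.0, -1.0])
theta = a1 + a2

def M(S):
    """The matrix M(S) with entries 2(x|y)/(x|x)."""
    return np.array([[2.0 * (x @ y) / (x @ x) for y in S] for x in S])

# For the simple roots alone, M(S) is the Cartan matrix of A_2 ...
assert np.allclose(M([a1, a2]), [[2, -1], [-1, 2]])
# ... and adjoining -theta gives the generalized Cartan matrix of
# the affine root system of A_2.
assert np.allclose(M([a1, a2, -theta]),
                   [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
```

Each row of the affine matrix sums to zero, reflecting the linear relation among the simple roots and the negated highest root.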
Holomorphic vertex operator algebras and weight one Lie algebras ---------------------------------------------------------------- A *vertex operator algebra* (VOA) $(V,Y,{ \mathds{1}},\omega)$ is a ${\mathbb{Z}}$-graded vector space $V=\bigoplus_{m\in{\mathbb{Z}}}V_m$ over the complex field ${\mathbb{C}}$ equipped with a linear map $$Y(a,z)=\sum_{i\in{\mathbb{Z}}}a_{(i)}z^{-i-1}\in ({\rm End}\ V)[[z,z^{-1}]],\quad a\in V,$$ the *vacuum vector* ${ \mathds{1}}\in V_0$ and the *conformal vector* $\omega\in V_2$ satisfying certain axioms ([@Bo; @FLM]). For $a\in V$ and $i\in{\mathbb{Z}}$, we call the operator $a_{(i)}$ the *$i$-th mode* of $a$. Note that the operators $L(m)=\omega_{(m+1)}$, $m\in {\mathbb{Z}}$, satisfy the Virasoro relation: $$[L{(m)},L{(n)}]=(m-n)L{(m+n)}+\frac{1}{12}(m^3-m)\delta_{m+n,0}c\ {\rm id}_V,$$ where $c\in{\mathbb{C}}$ is called the *central charge* of $V$, and $L(0)$ acts as multiplication by the scalar $m$ on $V_m$. A linear automorphism $g$ of a VOA $V$ is called a (VOA) *automorphism* of $V$ if $$g\omega=\omega\quad {\rm and}\quad gY(v,z)=Y(gv,z)g\quad \text{ for all } v\in V.$$ The group of all (VOA) automorphisms of $V$ will be denoted by ${\mathrm{Aut}\,}V$. A *vertex operator subalgebra* (or a *subVOA*) is a graded subspace of $V$ which has the structure of a VOA such that its operations and grading agree with the restrictions of those of $V$, and which shares the vacuum vector of $V$. For an automorphism $g$ of a VOA $V$, let $V^g$ denote the fixed-point set of $g$. Note that $V^g$ is a subVOA of $V$ and contains the conformal vector of $V$. A VOA is said to be *rational* if the admissible module category is semisimple. A rational VOA is said to be *holomorphic* if there is only one irreducible module up to isomorphism.
A VOA is said to be *of CFT-type* if $V_0={\mathbb{C}}{ \mathds{1}}$ (note that $V_i=0$ for all $i<0$ if $V_0={\mathbb{C}}{ \mathds{1}}$), and is said to be *$C_2$-cofinite* if the codimension in $V$ of the subspace spanned by the vectors of the form $u_{(-2)}v$, $u,v\in V$, is finite. A module is said to be *self-dual* if it is isomorphic to its contragredient module. A VOA is said to be *strongly regular* if it is rational, $C_2$-cofinite, self-dual and of CFT-type. For $g\in{\mathrm{Aut}\,}V$ of order $n$, a $g$-twisted $V$-module $(M,Y_M)$ is a ${\mathbb{C}}$-graded vector space $M=\bigoplus_{m\in{\mathbb{C}}} M_{m}$ equipped with a linear map $$Y_M(a,z)=\sum_{i\in(1/n){\mathbb{Z}}}a_{(i)}z^{-i-1}\in ({\mathrm{End}}M)[[z^{1/n},z^{-1/n}]],\quad a\in V$$ satisfying a number of conditions ([@FHL; @DLM2]). We often denote it by $M$. Note that an (untwisted) $V$-module is a $1$-twisted $V$-module and that a $g$-twisted $V$-module is an (untwisted) $V^g$-module. For $v\in M_k$, the *conformal weight* of $v$ is $k$ and $L(0)v=kv$. If $M$ is irreducible, then there exists $w\in{\mathbb{C}}$ such that $M=\bigoplus_{m\in(1/n){\mathbb{Z}}_{\geq 0}}M_{w+m}$ and $M_w\neq0$. The number $w$ is called the *conformal weight* of $M$. Let $V$ be a VOA of CFT-type. Then, the weight one space $V_1$ has a Lie algebra structure via the $0$-th mode, which we call the *weight one Lie algebra* of $V$. Moreover, the $n$-th modes $v_{(n)}$, $v\in V_1$, $n\in{\mathbb{Z}}$, define an affine representation of the Lie algebra $V_1$ on $V$. For a simple Lie subalgebra $\mathfrak{s}$ of $V_1$, the *level* of $\mathfrak{s}$ is defined to be the scalar by which the canonical central element acts on $V$ under the affine representation. When the type of the root system of $\mathfrak{s}$ is $X_n$ and the level of $\mathfrak{s}$ is $k$, we denote the type of $\mathfrak{s}$ by $X_{n,k}$. Assume that $V$ is self-dual.
Then there exists a non-degenerate symmetric invariant bilinear form $\langle\cdot|\cdot\rangle$ on $V$, which is unique up to scalar ([@Li3]). We normalize it so that $\langle{ \mathds{1}}|{ \mathds{1}}\rangle=-1$. Then for any $v,w\in V_1$, we have $\langle v|w\rangle{ \mathds{1}}=v_{(1)}w$. We, in addition, assume that the weight one Lie algebra $V_1$ is semisimple. Let $\mathfrak{h}$ be a Cartan subalgebra of $V_1$ and let $(\cdot|\cdot)$ be the Killing form on $V_1$. We identify $\mathfrak{h}^*$ with $\mathfrak{h}$ via $(\cdot|\cdot)$ and normalize the form so that $(\alpha|\alpha)=2$ for any long root $\alpha\in\mathfrak{h}$. The following lemma is immediate from the commutator relations of $n$-th modes (cf. [[@DM06 (3.2)]]{}). \[Lem:form\] If the level of a simple Lie subalgebra of $V_1$ is $k$, then $\langle\cdot|\cdot\rangle=k(\cdot|\cdot)$ on it. Let us recall some facts related to the Lie algebra $V_1$, which will be used later. \[Prop:posl\] Let $V$ be a strongly regular VOA. Then $V_1$ is reductive. Let $\mathfrak{s}$ be a simple Lie subalgebra of $V_1$. Then $V$ is an integrable module for the affine representation of $\mathfrak{s}$ on $V$, and the subVOA generated by $\mathfrak{s}$ is isomorphic to the simple affine VOA associated with $\mathfrak{s}$ at some positive integral level. \[[[@DMb (1.1), Theorem 3 and Proposition 4.1]]{}\]\[Prop:conf\] Let $V$ be a strongly regular holomorphic VOA of central charge $24$. Then the weight one Lie algebra $V_1$ is $0$, abelian of rank $24$ or semisimple. Moreover, the following hold: 1. If $V_1$ is abelian of rank $24$, then $V$ is isomorphic to the Leech lattice VOA. 2. If $V_1$ is semisimple, then the conformal vectors of $V$ and the subVOA generated by $V_1$ are the same. In addition, for any simple ideal of $V_1$ at level $k$, the identity $$\frac{h^\vee}{k}=\frac{\dim V_1-24}{24}\label{E:hk}$$ holds, where $h^\vee$ is the dual Coxeter number. 
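The identity (\[E:hk\]) can be checked against the Lie algebra types appearing in Theorem \[Thm:main\]; for instance, $A_{6,7}$ forces $\dim V_1=48$. A small numeric sketch using the standard values $\dim A_n=n(n+2)$, $h^\vee(A_n)=n+1$, $\dim D_n=n(2n-1)$ and $h^\vee(D_n)=2n-2$:

```python
# Each case lists its simple ideals as (dim g_i, dual Coxeter number, level).
cases = {
    "A_{3,4}^3 A_{1,2}": [(15, 4, 4)] * 3 + [(3, 2, 2)],
    "A_{4,5}^2":         [(24, 5, 5)] * 2,
    "D_{4,12} A_{2,6}":  [(28, 6, 12), (8, 3, 6)],
    "A_{6,7}":           [(48, 7, 7)],
    "A_{7,4} A_{1,1}^3": [(63, 8, 4)] + [(3, 2, 1)] * 3,
    "D_{5,8} A_{1,2}":   [(45, 8, 8), (3, 2, 2)],
    "D_{6,5} A_{1,1}^2": [(66, 10, 5)] + [(3, 2, 1)] * 2,
}

for name, ideals in cases.items():
    dim_v1 = sum(d for d, _, _ in ideals)
    # h_vee / k = (dim V_1 - 24) / 24 must hold for every simple ideal.
    for _, h_vee, k in ideals:
        assert 24 * h_vee == k * (dim_v1 - 24), name
```

The check passes for all seven cases, with $\dim V_1$ equal to $48$, $48$, $36$, $48$, $72$, $48$ and $72$, respectively.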
\[L:u\] Let $V$ be a strongly regular holomorphic VOA of central charge $24$. Assume that $V_1$ is semisimple. Let $V_1=\bigoplus_{i=1}^t {\mathfrak{g}}_i$ be the decomposition of $V_1$ into the direct sum of simple ideals ${\mathfrak{g}}_i$. For $1\le i\le t$, let $h_i$ be the Coxeter number of ${\mathfrak{g}}_i$ and let $\tilde\rho_i\in {\mathfrak{g}}_i$ be the sum of all fundamental coweights with respect to a (fixed) set of simple roots of ${\mathfrak{g}}_i$. Set $$u=\sum_{i=1}^t\frac{1}{h_i}\tilde\rho_i,\quad \text{ and } \quad \sigma_u=\exp(-2\pi\sqrt{-1}u_{(0)})\in{\mathrm{Aut}\,}V.$$ Then, the restriction of $\sigma_u$ to $V_1$ is a regular automorphism whose order is the least common multiple of the Coxeter numbers $h_1,h_2,\dots,h_t$. If all ${\mathfrak{g}}_i$ are of $ADE$-type, then $$\langle u|u\rangle=\frac{2\dim V_1}{\dim V_1-24}.$$ Since $\tilde{\rho}_i$ and $-\tilde{\rho}_i$ are conjugate by an element in the Weyl group, the former assertion follows from Lemma \[Lem:Kacfpa\]. Recall the following “strange” formula for ${\mathfrak{g}}_i$ (see [@Kac (13.11.4)]): $$(\rho_i|\rho_i)=\frac{1}{12}h_i^\vee\dim{\mathfrak{g}}_i,$$ where $\rho_i$ and $h_i^\vee$ are the Weyl vector and the dual Coxeter number of ${\mathfrak{g}}_i$, respectively. Assume that all ${\mathfrak{g}}_i$ are of $ADE$-type. Then $\rho_i=\tilde\rho_i$ and $h_i^\vee=h_i$ for all $i$. Let $k_i$ be the level of ${\mathfrak{g}}_i$. By the identity in Proposition \[Prop:conf\] (2) and the “strange” formula, we obtain $$\langle u|u\rangle=\sum_{i=1}^t\frac{1}{h_i^2}k_i(\rho_i|\rho_i)=\frac{2\dim V_1}{\dim V_1-24}$$ as desired. The following lemma is well-known, but we include a proof for completeness. \[L:conjiso\] Let $V$ be a VOA of CFT-type and $M$ a $V$-module. Let $a\in V_1$ and set $g=\exp(a_{(0)})\in{\mathrm{Aut}\,}V$. Then the $V$-modules $M\circ g$ and $M$ are isomorphic, where the $g$-conjugate $M\circ g=(M,Y_g)$ of $M$ is defined by $Y_g(v,z)=Y(gv,z)$ for $v\in V$.
Let $f:M\to M$ be the linear isomorphism defined by $f(u)=\exp(a_{(0)})u$, $u\in M$. Then for any $v\in V$, $u\in M$ and $n\in{\mathbb{Z}}$, we have $a_{(0)}(v_{(n)}u)=(a_{(0)}v)_{(n)}u+v_{(n)}(a_{(0)}u)$. Hence $f(Y(v,z)u)=Y(gv,z)f(u)=Y_g(v,z)f(u)$, and $f$ is a $V$-module isomorphism from $M$ to $M\circ g$. Li’s $\Delta$-operator ---------------------- Let $V$ be a self-dual VOA of CFT-type. Let $u\in V_1$ such that $u_{(0)}$ acts semisimply on $V$. Let $\sigma_u=\exp(-2\pi\sqrt{-1}u_{(0)})$ be the inner automorphism of $V$ associated with $u$. We assume that there exists a positive integer $T$ such that the spectrum of $u_{(0)}$ on $V$ belongs to $(1/T){\mathbb{Z}}$. Then we have $\sigma_u^T=1$ on $V$. Conversely, if $\sigma_u^T=1$, then the spectrum of $u_{(0)}$ on $V$ belongs to $(1/T){\mathbb{Z}}$. Let $\Delta(u,z)$ be Li’s $\Delta$-operator defined in [@Li], i.e., $$\Delta(u, z) = z^{u_{(0)}} \exp\left( \sum_{n=1}^\infty \frac{u_{(n)}}{-n} (-z)^{-n}\right).$$ \[Prop:twist\] Let $\sigma$ be an automorphism of $V$ of finite order and let $u\in V_1$ be as above such that $\sigma(u) = u$. Let $(M, Y_M)$ be a $\sigma$-twisted $V$-module and define $(M^{(u)}, Y_{M^{(u)}}(\cdot, z)) $ as follows: $$\begin{split} & M^{(u)} =M \quad \text{ as a vector space;}\\ & Y_{M^{(u)}} (a, z) = Y_M(\Delta(u, z)a, z)\quad \text{ for } a\in V. \end{split}$$ Then $(M^{(u)}, Y_{M^{(u)}}(\cdot, z))$ is a $\sigma_u\sigma$-twisted $V$-module. Furthermore, if $M$ is irreducible, then so is $M^{(u)}$. For a $\sigma$-twisted $V$-module $M$ and $a\in V$, we denote by $a_{(i)}^{(u)}$ the $i$-th mode of $a$ on $M^{(u)}$, i.e., $$Y_{M^{(u)}}(a,z)=\sum_{i\in{\mathbb{C}}}a_{(i)}^{(u)}z^{-i-1}. 
$$ By the definition of Li’s $\Delta$-operator, the $0$-th mode of $v\in V_1$ on $M^{(u)}$ is given by $$v^{(u)}_{(0)}=v_{(0)}+\langle u|v\rangle {\rm id},\label{Eq:V1h}$$ and the $1$-st mode of the conformal vector ${\omega}$ on $M^{(u)}$ is given by $$\omega^{(u)}_{(1)}=\omega_{(1)}+u_{(0)}+\frac{\langle u|u\rangle}{2}{\rm id}.\label{Eq:Lh}$$ Conformal weights of modules for simple affine VOAs --------------------------------------------------- In this subsection, we use the same notation as in Section \[regularauto\]. Let ${\mathfrak{g}}$ be a finite-dimensional simple Lie algebra. Let $Q$ be the root lattice associated with a fixed Cartan subalgebra ${\mathfrak{h}}$. For the finite-dimensional irreducible ${\mathfrak{g}}$-module $M(\lambda)$ with the highest weight $\lambda$, let $\Pi(M(\lambda))$ denote the set of all ${\mathfrak{h}}$-weights of $M$; we often denote it by $\Pi(\lambda)$ simply. Since the longest element in the Weyl group maps the set of positive roots to the set of negative roots, we obtain the following lemma. \[Lem:min\] Let $\lambda$ be a dominant integral weight. Let $r$ be the longest element in the Weyl group. Let $u\in{\mathbb{Q}}\otimes_{\mathbb{Z}}Q$ such that $(u|\alpha)\ge0$ for all simple roots $\alpha$. Then $$\min\{(u|\mu)\mid \mu\in\Pi(\lambda)\}=(u|r(\lambda)).$$ \[Lem:longest\] 1. If the type of $Q$ is $A_1$ or $D_{2n}$ $(n\ge2)$, then $r=-1$. 2. If the type of $Q$ is $A_n$ $(n\ge2)$ or $D_{2n+1}$ $(n\ge2)$, then $r$ is the product of $-1$ and the standard Dynkin diagram automorphism of order $2$. Let $L_{{\mathfrak{g}}}(k,0)$ be the simple affine VOA associated with ${\mathfrak{g}}$ at a positive integral level $k$. Let $L_{{\mathfrak{g}}}(k,\lambda)$ be the irreducible $L_{{\mathfrak{g}}}(k,0)$-module with the highest weight $\lambda$. Note that $\lambda$ is a dominant integral weight of $\mathfrak{g}$ such that $(\lambda|\theta)\le k$, where $\theta$ is the highest root of $\mathfrak{g}$. 
For details on $L_{{\mathfrak{g}}}(k,0)$ and $L_{{\mathfrak{g}}}(k,\lambda)$, see [@FZ]. Note that the conformal weight of $L_{{\mathfrak{g}}}(k,\lambda)$ is $\frac{(\lambda+2\rho|\lambda)}{2(k+h^\vee)}$, where $\rho$ is the Weyl vector and $h^\vee$ is the dual Coxeter number. For $u\in {\mathbb{Q}}\otimes_{\mathbb{Z}}Q$, the inner automorphism $\sigma_u$ has finite order on ${\mathfrak{g}}$ and has the same order on $L_{\mathfrak{g}}(k,0)$. \[L:uconj\] Let $M$ be an $L_{{\mathfrak{g}}}(k,0)$-module and let $u\in {\mathbb{Q}}\otimes_{\mathbb{Z}}Q$. 1. For an element $\alpha$ of the coroot lattice $Q^\vee$, $M^{(u)}\cong M^{(u+\alpha)}$ as $\sigma_u$-twisted $L_{{\mathfrak{g}}}(k,0)$-modules. 2. For an element $g$ of the Weyl group, the characters of $M^{(u)}$ and $M^{(g(u))}$ are the same. Note that $\sigma_\alpha=id$ on $L_{{\mathfrak{g}}}(k,0)$ and $(M^{(u)})^{(\alpha)}\cong M^{(u+\alpha)}$. Then assertion (1) follows from [@Li01 Proposition 2.24]. Let $\hat{g}\in{\mathrm{Aut}\,}L_{{\mathfrak{g}}}(k,0)$ be a lift of $g$. Note that $\hat{g}$ is inner and $\hat{g}$ acts on ${\mathfrak{h}}$ as $g$. The $\hat{g}$-conjugate $M^{(u)}\circ \hat{g}$ (see Lemma \[L:conjiso\] for the definition) is a $\sigma_{g(u)}$-twisted $L_{{\mathfrak{g}}}(k,0)$-module and its character is equal to that of $M^{(u)}$. In addition, $M^{(u)}\circ \hat{g}\cong (M\circ \hat{g})^{(g(u))}$ as $\sigma_{g(u)}$-twisted $L_{{\mathfrak{g}}}(k,0)$-modules. Since $\hat{g}$ is inner, we have $M\circ \hat{g}\cong M$ by Lemma \[L:conjiso\]. Thus we obtain (2). The following lemma gives a necessary and sufficient condition for the module $L_{{\mathfrak{g}}}(k,\lambda)^{(u)}$ to have conformal weight zero; it corrects a mistake in [@LS16 Lemma 3.5]. \[Lem:lowestwt0\] Let $u\in {\mathbb{Q}}\otimes_{\mathbb{Z}}Q$ be such that $$(u|\beta)\ge-1\label{C:u}$$ for any root $\beta$ of $\mathfrak{g}$.
Then the conformal weight of the irreducible $\sigma_u$-twisted $L_{{\mathfrak{g}}}(k,0)$-module $L_{{\mathfrak{g}}}(k,\lambda)^{(u)}$ is non-negative. In addition, the conformal weight is zero if and only if $(\lambda,u)=(0,0)$ or $(k\eta,-g(\eta))$ for some element $g$ in the Weyl group and fundamental coweight $\eta$ with $(\eta|\theta)=1$. In fact, a fundamental coweight $\eta$ with $(\eta|\theta)=1$ is also a fundamental weight, namely, the corresponding simple root is long. Let $g$ be an element in the Weyl group. Then the vector $g(\eta)-\eta$ belongs to the coroot lattice, and $(g(\eta)|\beta)\ge-1$ for any root $\beta$. Hence $\sigma_{-g(\eta)}=\sigma_{-\eta}=id$ on $L_{{\mathfrak{g}}}(k,0)$, and $L_{\mathfrak{g}}(k,k\eta)^{(-g(\eta))}\cong L_{\mathfrak{g}}(k,k\eta)^{(-\eta)}\cong L_{\mathfrak{g}}(k,0)$ (see [@Li01 Proposition 2.20] and Lemma \[L:uconj\] (1)). Now, let us consider the semisimple Lie algebra $\bigoplus_{i=1}^t{\mathfrak{g}}_i$, where the ${\mathfrak{g}}_i$ are simple ideals. For $1\le i\le t$, fix a Cartan subalgebra ${\mathfrak{h}}_i$ of ${\mathfrak{g}}_i$ and a set of simple roots of ${\mathfrak{g}}_i$. Let $Q_i$ be the root lattice of ${\mathfrak{g}}_i$. Let $k_i$ be a positive integer and let $\lambda_i$ be a dominant integral weight of ${\mathfrak{g}}_i$ such that $(\lambda_i|\theta_i)\le k_i$, where $\theta_i$ is the highest root of ${\mathfrak{g}}_i$. Set $U=\bigotimes_{i=1}^tL_{{\mathfrak{g}}_i}(k_i,0)$ and $M=\bigotimes_{i=1}^tL_{{\mathfrak{g}}_i}(k_i,\lambda_i)$. \[Lem:lowestwt\] Let $u_i\in {\mathbb{Q}}\otimes_{\mathbb{Z}}Q_i$ and set $u=\sum_{i=1}^tu_i$. Assume that $$(u|\beta)\ge-1$$ for any root $\beta$ of $\bigoplus_{i=1}^t{\mathfrak{g}}_i$. Then the conformal weight of the irreducible $\sigma_u$-twisted $U$-module $M^{(u)}$ is $$w_M+\sum_{i=1}^t\min\{(u_i|\mu)\mid \mu\in\Pi(\lambda_i)\}+\frac{\langle u|u\rangle}{2},\label{Eq:twisttop}$$ where $w_M$ is the conformal weight of $M$.
In addition, if $u$ does not belong to the coweight lattice of $\bigoplus_{i=1}^t{\mathfrak{g}}_i$, then the conformal weight of $M^{(u)}$ is positive. Note that $M^{(u)}\cong \bigotimes_{i=1}^t L_{{\mathfrak{g}}_i}(k_i,\lambda_i)^{(u_i)}$. The former assertion follows from [@LS16 Lemma 3.6]. The latter assertion is immediate from Lemma \[Lem:lowestwt0\]. Next we will consider several semisimple Lie algebras of ADE-type. \[L:wtu\] Assume that the semisimple Lie algebra $U_1=\bigoplus_{i=1}^t{\mathfrak{g}}_i$ has the type $$A_{3,4}^3A_{1,2},\ A_{4,5}^2,\ D_{4,12}A_{2,6},\ A_{6,7},\ A_{7,4}A_{1,1}^3,\ D_{5,8}A_{1,2}\ \text{or}\ D_{6,5}A_{1,1}^2$$ and that the conformal weight of $M$ is an integer at least $2$. Let $u\in{\mathfrak{h}}$ be the vector described in Lemma \[L:u\]. Let $n$ be the order of $\sigma_u$ on $U_1$. Then the following hold: 1. $(\sigma_u)^n$ acts on $M$ as the identity operator; 2. for $j\in{\mathbb{Z}}$ with $0<|j|\le \lfloor n/2\rfloor$, the conformal weights of the $(\sigma_u)^j$-twisted $U$-modules $U^{(ju)}$ and $M^{(ju)}$ are at least $1$. By the definition of $u$, we have $(u|\beta)\ge-1$ for any root $\beta$ of $U_1$ and $(u|\alpha)\ge0$ for any simple root $\alpha$ of $U_1$. For all $i$, we have $Q_i=Q_i^\vee$ and $h_i=h_i^\vee$ since ${\mathfrak{g}}_i$ is $ADE$-type. Let $w_M$ be the conformal weight of $M=\bigotimes_{i=1}^tL_{{\mathfrak{g}}_i}(k_i,\lambda_i)$. By Lemmas \[Lem:min\] and \[Lem:lowestwt\], the conformal weight $w_{M^{(u)}}$ of $M^{(u)}$ is $$w_{M^{(u)}}=w_M+\left(u\left|\sum_{i=1}^tr_i(\lambda_i)\right.\right)+\frac{\langle u|u\rangle}{2},$$ where $r_i$ is the longest element of the Weyl group of ${\mathfrak{g}}_i$. 
By direct computation, one can list all weights $(\lambda_1,\dots,\lambda_t)$ with $w_M\in{\mathbb{Z}}_{>1}$; the number of such weights is $1526$, $852$, $463$, $47$, $100$, $46$ or $35$ if $U_1$ has the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$, respectively. In addition, for every weight, one can directly check the following: (i) $(\sum_{i=1}^t\lambda_i|u)\in (1/n){\mathbb{Z}}$; (ii) $w_{M^{(u)}}\ge1$. Note that for the cases $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$ and $D_{5,8}A_{1,2}$, the claim (i) also follows from the fact that $u\in(1/n)\bigoplus_{i=1}^tQ_i$. See Tables \[T:A67\], \[T:D58A12\] and \[T:D65A112\] for the cases $A_{6,7}$, $D_{5,8}A_{1,2}$ and $D_{6,5}A_{1,1}^2$, respectively. (In these tables, $[a_1,\dots,a_m]$ denotes the weight $\sum_{i=1}^m a_i\Lambda_i$.) By Lemma \[L:u\], we have $w_{U^{(u)}}=\langle u|u\rangle/2\ge1$. Then the assertions (i) and (ii) prove (1) and (2) for $j=1$, respectively. Let $j\in{\mathbb{Z}}$ with $0<|j|\le \lfloor n/2\rfloor$. We choose a representative $\overline{ju}$ of $ju+\bigoplus_{i=1}^tQ_i$ as in Table \[T:rep\] if $1\le j\le \lfloor n/2\rfloor$, and $\overline{ju}=-\overline{-ju}$ if $j<0$. Then we have $(\overline{ju}|\beta)\ge-1$ for all roots $\beta$ of $U_1$. By Lemma \[L:uconj\] (1), $M^{(ju)}\cong M^{(\overline{ju})}$ as $\sigma_{ju}$-twisted $U$-modules. Let $g$ be an element in the Weyl group such that $g(\overline{ju})$ is dominant integral; our choices of $g(\overline{ju})$, $1\le j\le \lfloor n/2\rfloor$, are summarized in Table \[T:rep\]. Note that $gr(-\overline{ju})=g(\overline{ju})$, where $r$ is the product of the longest elements of the Weyl groups of ${\mathfrak{g}}_i$. By Lemma \[L:uconj\] (2), we have $w_{M^{(\overline{ju})}}=w_{M^{(g(\overline{ju}))}}$. Hence, in the same manner as in the case $j=1$, we obtain $w_{M^{(g(\overline{ju}))}}\ge1$ for $j\neq1$.
Also, by Table \[T:rep\], we have $w_{U^{(\overline{ju})}}=\langle g(\overline{ju})|g(\overline{ju}))\rangle/2\ge1$. Weights $(\sum_{i=1}^t\lambda_i|u)$ $w_M$ $w_M^{(u)}$ Weights $(\sum_{i=1}^t\lambda_i|u)$ $w_M$ $w_M^{(u)}$ ----------------------- ----------------------------- ------- ------------- ----------------------- ----------------------------- ------- ------------- $[ 0, 0, 0, 0, 0, 7]$ $3$ $3$ $2$ $[ 0, 0, 0, 0, 7, 0]$ $ 5$ $5$ $2$ $[ 0, 0, 0, 1, 0, 4]$ $ 18/7$ $2$ $10/7$ $[ 0, 0, 0, 2, 4, 0]$ $ 32/7$ $4$ $10/7$ $[ 0, 0, 0, 7, 0, 0]$ $ 6$ $6$ $2$ $[ 0, 0, 1, 0, 4, 2]$ $ 32/7$ $4$ $10/7$ $[ 0, 0, 1, 3, 0, 1]$ $ 27/7$ $3$ $8/7$ $[ 0, 0, 2, 0, 3, 0]$ $ 27/7$ $3$ $8/7$ $[ 0, 0, 2, 1, 0, 3]$ $ 27/7$ $3$ $8/7$ $[ 0, 0, 2, 4, 0, 1]$ $ 39/7$ $5$ $10/7$ $[ 0, 0, 7, 0, 0, 0]$ $ 6$ $6$ $2$ $[ 0, 1, 0, 1, 2, 2]$ $ 27/7$ $3$ $8/7$ $[ 0, 1, 0, 4, 2, 0]$ $ 39/7$ $5$ $10/7$ $[ 0, 1, 2, 0, 0, 1]$ $ 20/7$ $2$ $8/7$ $[ 0, 1, 2, 2, 1, 0]$ $ 34/7$ $4$ $8/7$ $[ 0, 1, 3, 0, 1, 2]$ $ 34/7$ $4$ $8/7$ $[ 0, 2, 0, 0, 2, 0]$ $ 20/7$ $2$ $8/7$ $[ 0, 2, 0, 3, 0, 2]$ $ 34/7$ $4$ $8/7$ $[ 0, 2, 1, 0, 3, 1]$ $ 34/7$ $4$ $8/7$ $[ 0, 2, 4, 0, 1, 0]$ $ 39/7$ $5$ $10/7$ $[ 0, 3, 0, 2, 0, 0]$ $ 27/7$ $3$ $8/7$ $[ 0, 3, 1, 0, 0, 2]$ $ 27/7$ $3$ $8/7$ $[ 0, 4, 2, 0, 0, 0]$ $ 32/7$ $4$ $10/7$ $[ 0, 7, 0, 0, 0, 0]$ $ 5$ $5$ $2$ $[ 1, 0, 0, 0, 2, 4]$ $ 25/7$ $3$ $10/7$ $[ 1, 0, 0, 2, 1, 0]$ $ 20/7$ $2$ $8/7$ $[ 1, 0, 1, 0, 1, 2]$ $ 20/7$ $2$ $8/7$ $[ 1, 0, 1, 2, 2, 1]$ $ 34/7$ $4$ $8/7$ $[ 1, 0, 3, 1, 0, 0]$ $ 27/7$ $3$ $8/7$ $[ 1, 0, 4, 2, 0, 0]$ $ 39/7$ $5$ $10/7$ $[ 1, 1, 1, 1, 1, 1]$ $ 4$ $3$ $1$ $[ 1, 2, 0, 0, 1, 3]$ $ 27/7$ $3$ $8/7$ $[ 1, 2, 2, 1, 0, 1]$ $ 34/7$ $4$ $8/7$ $[ 1, 3, 0, 1, 2, 0]$ $ 34/7$ $4$ $8/7$ $[ 2, 0, 0, 1, 3, 0]$ $ 27/7$ $3$ $8/7$ $[ 2, 0, 0, 2, 0, 3]$ $ 27/7$ $3$ $8/7$ $[ 2, 0, 3, 0, 2, 0]$ $ 34/7$ $4$ $8/7$ $[ 2, 1, 0, 1, 0, 1]$ $ 20/7$ $2$ $8/7$ $[ 2, 1, 0, 3, 1, 0]$ $ 34/7$ $4$ $8/7$ $[ 2, 2, 1, 0, 1, 0]$ $ 27/7$ $3$ $8/7$ $[ 2, 4, 0, 1, 0, 0]$ $ 32/7$ $4$ $10/7$ $[ 
3, 0, 1, 2, 0, 0]$ $ 27/7$ $3$ $8/7$ $[ 3, 0, 2, 0, 0, 2]$ $ 27/7$ $3$ $8/7$ $[ 3, 1, 0, 0, 2, 1]$ $ 27/7$ $3$ $8/7$ $[ 4, 0, 1, 0, 0, 0]$ $ 18/7$ $2$ $10/7$ $[ 4, 2, 0, 0, 0, 1]$ $ 25/7$ $3$ $10/7$ $[ 7, 0, 0, 0, 0, 0]$ $ 3$ $3$ $ 2$ : Weights of irreducible modules with $w_M\in{\mathbb{Z}}_{>1}$ for the case $A_{6,7}$[]{data-label="T:A67"} [|c|c|c|c||c|c|c|c|]{} \ Weights& $(\sum_{i=1}^t\lambda_i|u)$& $w_M$ & $w_M^{(u)}$ & Weights& $(\sum_{i=1}^t\lambda_i|u)$& $w_M$ & $w_M^{(u)}$\ $([0, 0, 0, 0, 8], [0])$ & $5$ & $5$ & $2$ & $([0, 0, 0, 3, 3], [0])$ & $15/4$ & $3$ & $5/4$\ $([0, 0, 0, 4, 4], [2])$ & $11/2$ & $5$ & $3/2$ & $([0, 0, 0, 8, 0], [0])$ & $5$ & $5$ & $2$\ $([0, 0, 1, 0, 6], [2])$ & $43/8$ & $5$ & $13/8$ & $([0, 0, 1, 1, 3], [1])$ & $31/8$ & $3$ & $9/8$\ $([0, 0, 1, 3, 1], [1])$ & $31/8$ & $3$ & $9/8$ & $([0, 0, 1, 6, 0], [2])$ & $43/8$ & $5$ & $13/8$\ $([0, 0, 2, 0, 0], [2])$ & $11/4$ & $2$ & $5/4$ & $([0, 0, 2, 2, 2], [0])$ & $19/4$ & $4$ & $5/4$\ $([0, 1, 0, 1, 5], [0])$ & $37/8$ & $4$ & $11/8$ & $([0, 1, 0, 2, 2], [2])$ & $31/8$ & $3$ & $9/8$\ $([0, 1, 0, 5, 1], [0])$ & $37/8$ & $4$ & $11/8$ & $([0, 1, 2, 1, 1], [2])$ & $39/8$ & $4$ & $9/8$\ $([0, 2, 0, 0, 4], [2])$ & $19/4$ & $4$ & $5/4$ & $([0, 2, 0, 4, 0], [2])$ & $19/4$ & $4$ & $5/4$\ $([0, 3, 0, 1, 1], [0])$ & $31/8$ & $3$ & $9/8$ & $([1, 0, 0, 2, 4], [2])$ & $19/4$ & $4$ & $5/4$\ $([1, 0, 0, 4, 2], [2])$ & $19/4$ & $4$ & $5/4$ & $([1, 0, 1, 1, 1], [0])$ & $23/8$ & $2$ & $9/8$\ $([1, 0, 3, 0, 0], [0])$ & $31/8$ & $3$ & $9/8$ & $([1, 1, 0, 0, 2], [1])$ & $23/8$ & $2$ & $9/8$\ $([1, 1, 0, 1, 3], [0])$ & $31/8$ & $3$ & $9/8$ & $([1, 1, 0, 2, 0], [1])$ & $23/8$ & $2$ & $9/8$\ $([1, 1, 0, 3, 1], [0])$ & $31/8$ & $3$ & $9/8$ & $([1, 1, 1, 1, 1], [1])$ & $4$ & $3$ & $1$\ $([1, 2, 1, 0, 0], [2])$ & $31/8$ & $3$ & $9/8$ & $([2, 0, 0, 1, 1], [2])$ & $11/4$ & $2$ & $5/4$\ $([2, 0, 0, 3, 3], [0])$ & $19/4$ & $4$ & $5/4$ & $([2, 0, 1, 0, 2], [2])$ & $31/8$ & $3$ & $9/8$\ $([2, 0, 1, 1, 3], [1])$ & $39/8$ & 
$4$ & $9/8$ & $([2, 0, 1, 2, 0], [2])$ & $31/8$ & $3$ & $9/8$\ $([2, 0, 1, 3, 1], [1])$ & $39/8$ & $4$ & $9/8$ & $([2, 1, 0, 2, 2], [2])$ & $39/8$ & $4$ & $9/8$\ $([2, 2, 0, 0, 0], [0])$ & $11/4$ & $2$ & $5/4$ & $([3, 0, 0, 0, 2], [0])$ & $11/4$ & $2$ & $5/4$\ $([3, 0, 0, 2, 0], [0])$ & $11/4$ & $2$ & $5/4$ & $([3, 0, 1, 1, 1], [0])$ & $31/8$ & $3$ & $9/8$\ $([3, 1, 0, 0, 2], [1])$ & $31/8$ & $3$ & $9/8$ & $([3, 1, 0, 2, 0], [1])$ & $31/8$ & $3$ & $9/8$\ $([4, 0, 0, 0, 0], [2])$ & $5/2$ & $2$ & $3/2$ & $([4, 0, 0, 1, 1], [2])$ & $15/4$ & $3$ & $5/4$\ $([4, 0, 2, 0, 0], [2])$ & $19/4$ & $4$ & $5/4$ & $([5, 0, 1, 0, 0], [0])$ & $29/8$ & $3$ & $11/8$\ $([6, 1, 0, 0, 0], [2])$ & $35/8$ & $4$ & $13/8$ & $([8, 0, 0, 0, 0], [0])$ & $4$ & $4$ & $2$\ Weights $(\sum_{i=1}^t\lambda_i|u)$ $w_M$ $w_M^{(u)}$ Weights $(\sum_{i=1}^t\lambda_i|u)$ $w_M$ $w_M^{(u)}$ ---------------------------------- ----------------------------- ------- ------------- ---------------------------------- ----------------------------- ------- ------------- $([0, 0, 0, 0, 0, 5], [0], [1])$ $4$ $4$ $3/2$ $([0, 0, 0, 0, 0, 5], [1], [0])$ $4$ $4$ $3/2$ $([0, 0, 0, 0, 5, 0], [0], [1])$ $4$ $4$ $3/2$ $([0, 0, 0, 0, 5, 0], [1], [0])$ $4$ $4$ $3/2$ $([0, 0, 0, 1, 0, 1], [0], [1])$ $12/5$ $2$ $11/10$ $([0, 0, 0, 1, 0, 1], [1], [0])$ $12/5$ $2$ $11/10$ $([0, 0, 0, 1, 1, 0], [0], [1])$ $12/5$ $2$ $11/10$ $([0, 0, 0, 1, 1, 0], [1], [0])$ $12/5$ $2$ $11/10$ $([0, 0, 0, 1, 1, 1], [1], [1])$ $17/5$ $3$ $11/10$ $([0, 0, 2, 0, 0, 0], [0], [0])$ $12/5$ $2$ $11/10$ $([0, 0, 2, 0, 0, 1], [0], [1])$ $17/5$ $3$ $11/10$ $([0, 0, 2, 0, 0, 1], [1], [0])$ $17/5$ $3$ $11/10$ $([0, 0, 2, 0, 1, 0], [0], [1])$ $17/5$ $3$ $11/10$ $([0, 0, 2, 0, 1, 0], [1], [0])$ $17/5$ $3$ $11/10$ $([0, 1, 0, 0, 0, 2], [0], [0])$ $12/5$ $2$ $11/10$ $([0, 1, 0, 0, 1, 2], [0], [1])$ $17/5$ $3$ $11/10$ $([0, 1, 0, 0, 1, 2], [1], [0])$ $17/5$ $3$ $11/10$ $([0, 1, 0, 0, 2, 0], [0], [0])$ $12/5$ $2$ $11/10$ $([0, 1, 0, 0, 2, 1], [0], [1])$ $17/5$ $3$ 
$11/10$ $([0, 1, 0, 0, 2, 1], [1], [0])$ $17/5$ $3$ $11/10$ $([1, 0, 0, 1, 0, 0], [1], [1])$ $12/5$ $2$ $11/10$ $([1, 0, 0, 1, 1, 1], [0], [0])$ $17/5$ $3$ $11/10$ $([1, 0, 2, 0, 0, 0], [1], [1])$ $17/5$ $3$ $11/10$ $([1, 1, 0, 0, 0, 1], [0], [1])$ $12/5$ $2$ $11/10$ $([1, 1, 0, 0, 0, 1], [1], [0])$ $12/5$ $2$ $11/10$ $([1, 1, 0, 0, 0, 2], [1], [1])$ $17/5$ $3$ $11/10$ $([1, 1, 0, 0, 1, 0], [0], [1])$ $12/5$ $2$ $11/10$ $([1, 1, 0, 0, 1, 0], [1], [0])$ $12/5$ $2$ $11/10$ $([1, 1, 0, 0, 2, 0], [1], [1])$ $17/5$ $3$ $11/10$ $([2, 0, 0, 1, 0, 0], [0], [0])$ $12/5$ $2$ $11/10$ $([2, 0, 0, 1, 0, 1], [0], [1])$ $17/5$ $3$ $11/10$ $([2, 0, 0, 1, 0, 1], [1], [0])$ $17/5$ $3$ $11/10$ $([2, 0, 0, 1, 1, 0], [0], [1])$ $17/5$ $3$ $11/10$ $([2, 0, 0, 1, 1, 0], [1], [0])$ $17/5$ $3$ $11/10$ $([5, 0, 0, 0, 0, 0], [1], [1])$ $3$ $3$ $3/2$ : Weights of irreducible modules with $w_M\in{\mathbb{Z}}_{>1}$ for the case $D_{6,5}A_{1,1}^2$[]{data-label="T:D65A112"} [|c|c|l|l|c|]{} \ $U_1$& $j$& $\overline{ju}$&$g(\overline{ju})$&$\langle g(\overline{ju})|g(\overline{ju})\rangle$\ $A_{3,4}^3A_{1,2}$ &$1$&$u=(\frac14[1,1,1],\frac14[1,1,1],\frac14[1,1,1],\frac12[1])$&$u$&$4$\ &$2$&$(\frac{1}{2}[-1,1,-1],\frac{1}{2}[-1,1,-1],\frac{1}{2}[-1,1,-1],[1])$&$(\frac{1}{2}[0,1,0],\frac{1}{2}[0,1,0],\frac{1}{2}[0,1,0],[1])$&$4$\ $A_{4,5}^2$&$1$&$u=(\frac15[1,1,1,1],\frac15[1,1,1,1])$&$u$&$4$\ &$2$&$(\frac{1}{5}[-3,2,2,-3],\frac15[-3,2,2,-3])$&$u$&$4$\ $D_{4,12}A_{2,6}$&$1$ &$u=(\frac16[1,1,1,1],\frac13[1,1])$&$u$&$6$\ & $2$&$(\frac{1}{3}[1,-2,1,1],\frac{1}{3}[-1,-1])$&$(\frac{1}{3}[0,1,0,0],\frac{1}{3}[1,1])$&$4$\ &$3$&$(\frac{1}{2}[1,-1,1,1],[0])$&$(\frac{1}{2}[0,1,0,0],[0])$&$6$\ $A_{6,7}$&$1$&$u=\frac{1}{7}[1,1,1,1,1,1]$&$u$&$4$\ &$2$&$\frac{1}{7}[2,-5,2,2,-5,2]$&$u$&$4$\ &$3$&$\frac{1}{7}[3,-4,3,3,-4,3]$&$u$&$4$\ $A_{7,4}A_{1,1}^3$&$1$ &$u=(\frac18[1,1,1,1,1,1,1],\frac12[1],\frac12[1],\frac12[1])$&$u$&$3$\ & $2$& 
$\frac{1}{4}([1,-3,1,1,1,-3,1],[1],[1],[1])$&$(\frac{1}{4}[0,1,0,1,0,1,0],[1],[1],[1])$&$4$\ &$3$&$(\frac{1}{8}[3,3,-5,3,-5,3,3],\frac{1}{2}[-1],\frac{1}{2}[-1],\frac{1}{2}[-1])$&$u$&$3$\ & $4$&$(\frac{1}{2}[-1,1,-1,1,-1,1,-1],[0],[0],[0])$&$(\frac{1}{2}[0,0,0,1,0,0,0],[0],[0],[0])$&$2$\ $D_{5,8}A_{1,2}$ &$1$&$u=(\frac14[1,1,1,1,1],\frac12[1])$&$u$&$4$\ &$2$&$(\frac{1}{4}[1,-3,1,1,1],[1])$&$(\frac{1}{4}[1,0,1,0,0],[1])$&$4$\ &$3$& $(\frac{1}{8}[-5,3,-5,3,3],\frac{1}{2}[-1])$&$u$&$4$\ &$4$&$(\frac{1}{2}[-1,1,-1,1,1],[0])$&$(\frac{1}{2}[0,1,0,0,0],[0])$&$4$\ $D_{6,5}A_{1,1}^2$ &$1$&$u=(\frac1{10}[1,1,1,1,1,1],\frac12[1],\frac12[1])$&$u$&$3$\ & $2$&$(\frac{1}{5}[1,-4,1,1,1,1],[1],[1])$&$(\frac{1}{5}[1,1,0,1,0,0],[1],[1])$&$4$\ & $3$&$(\frac{1}{10}[3,3,3,-7,3,3],\frac12[-1],\frac12[-1])$&$u$&$3$\ &$4$&$(\frac{1}{5}[2,-3,2,-3,2,2],[0],[0])$&$(\frac15[0,1,0,1,0,0],[0],[0])$&$2$\ & $5$&$(\frac{1}{2}[1,-1,1,-1,1,1],\frac{1}{2}[1],\frac{1}{2}[1])$&$(\frac{1}{2}[0,0,1,0,0,0],\frac{1}{2}[1],\frac{1}{2}[1])$&$4$\ Orbifold construction and uniqueness of holomorphic VOA ------------------------------------------------------- Let $V$ be a strongly regular holomorphic VOA. Suppose $\sigma\in{\mathrm{Aut}\,}V$ has finite order $n$. It was proved in [@CM; @Mi] that $V^\sigma=\{v\in V\mid \sigma(v)=v\}$ is also strongly regular. For $1\le i\le n-1$, let $V[\sigma^i]$ be the irreducible $\sigma^i$-twisted $V$-module. By [@DLM2], such a module exists and is unique up to isomorphism. We recall the orbifold construction established in [@EMS]. \[T:EMS\] Assume the following: (I) for $1\le i\le n-1$, the conformal weight of $V[\sigma^i]$ is positive; (II) the conformal weight of $V[\sigma]$ belongs to $(1/n){\mathbb{Z}}_{>0}$.
Then, for $1\le i\le n-1$, there exists a unique irreducible $V^\sigma$-submodule $\overline{V[\sigma^i]}$ of $V[\sigma^i]$ with integral conformal weight such that $$\widetilde{V}_\sigma:=V^\sigma\oplus\bigoplus_{i=1}^{n-1}\overline{V[\sigma^i]}$$ has a strongly regular holomorphic VOA structure as a ${\mathbb{Z}}_n$-graded simple current extension of $V^\sigma$. This construction is often called the *${\mathbb{Z}}_n$-orbifold construction* associated with $V$ and $\sigma$. Note that the resulting VOA $\widetilde{V}_\sigma$ is uniquely determined by $V$ and $\sigma$, up to isomorphism. Moreover, if $\sigma'\in {\mathrm{Aut}\,}V$ is conjugate to $\sigma$, then $\widetilde{V}_{\sigma'}$ is isomorphic to $\widetilde{V}_{\sigma}$. By “reversing” the ${\mathbb{Z}}_n$-orbifold construction, the following theorem was proved in [@LS]. \[T:RO\] Let ${\mathfrak{g}}$ be a Lie algebra and $\mathfrak{p}$ a subalgebra of ${\mathfrak{g}}$. Let $n\in {\mathbb{Z}}_{>0}$ and let $W$ be a strongly regular holomorphic VOA of central charge $c$. Assume that for any strongly regular holomorphic VOA $V$ of central charge $c$ whose weight one Lie algebra is ${\mathfrak{g}}$, there exists an order $n$ automorphism $\sigma$ of $V$ such that the following conditions hold: 1. ${\mathfrak{g}}^{\sigma}\cong\mathfrak{p}$; 2. $\sigma$ satisfies Conditions (I) and (II) in Theorem \[T:EMS\] and $\widetilde{V}_{\sigma}$ is isomorphic to $W$. In addition, we assume that any automorphism $\varphi\in{\mathrm{Aut}\,}W$ of order $n$ satisfying (I) and (II) and the conditions (A) and (B) below belongs to a unique conjugacy class in ${\mathrm{Aut}\,}W$: (A) $(W^\varphi)_1$ is isomorphic to $\mathfrak{p}$; (B) $(\widetilde{W}_\varphi)_1$ is isomorphic to ${\mathfrak{g}}$. Then any strongly regular holomorphic VOA of central charge $c$ with weight one Lie algebra ${\mathfrak{g}}$ is isomorphic to $\widetilde{W}_\varphi$. In particular, such a holomorphic VOA is unique up to isomorphism.
\[R:RO\] In general, the condition (B) is strong for the uniqueness of the conjugacy class of ${\varphi}$. For example, the condition (B) implies that $W[\varphi]_1$ is isomorphic to ${\mathfrak{g}}_{(i)}$ as $\mathfrak {p}(\cong (W^{\varphi})_1\cong {\mathfrak{g}}^{\sigma})$-modules for some $i$ relatively prime to $n$, where ${\mathfrak{g}}_{(i)}=\{x\in{\mathfrak{g}}\mid \sigma(x)=\exp((i/n)2\pi\sqrt{-1})x\}$. Later, we will consider the following weaker condition: (B’) The matrices $\mathcal{M}(\Pi(W[{\varphi}]_1))$ and $\mathcal{M}(\Pi({\mathfrak{g}}_{(i)}))$ are equivalent for some $i$ relatively prime to $n$. In general, (B’) is strictly weaker than (B) since the Lie algebra structure of $\mathfrak{g}$ may not be recovered from the $\mathfrak{g}^\sigma$-module structure of $\mathfrak{g}_{(i)}$. Dimension formulae associated with orbifold constructions {#S:df} --------------------------------------------------------- In this subsection, we recall the dimension formulae from [@EMS2]. Let $n\in\{2,3,4,5,6,7,8,9,10,12,13,16,18,25\}$ and let $V$ be a strongly regular holomorphic VOA of central charge $24$. Let $\sigma$ be an order $n$ automorphism of $V$ satisfying the conditions (I) and (II) in Theorem \[T:EMS\]. Assume that for $1\le i \le n-1$, the conformal weight of the irreducible $\sigma^i$-twisted $V$-module is at least $1$. Then $$\sum_{d|n}\frac{\phi((d,n/d))}{(d,n/d)}\left(24+\frac{n}{d}\dim V_1^\sigma-\dim (\tilde{V}_{\sigma^d})_1\right)=24,$$ where $\phi$ is Euler’s totient function. The explicit formulae for several $n$ are given as follows (cf.
[@EMS2]): - For $n=5,7$, $$\dim (\tilde{V}_\sigma)_1=24+(n+1)\dim V_1^\sigma-\dim V_1.$$ - For $n=4$, $$\dim(\tilde{V}_\sigma)_1=24+6\dim V_1^\sigma -\frac{3}{2}\dim V_1^{\sigma^2}-\frac{1}{2}\dim V_1.$$ - For $n=6$, $$\dim(\tilde{V}_\sigma)_1=24+12\dim V_1^\sigma-4\dim V_1^{\sigma^2}-3\dim V_1^{\sigma^3}+\dim V_1.$$ - For $n=8$, $$\dim (\tilde{V}_\sigma)_1=24+12\dim V_1^\sigma-3\dim V_1^{\sigma^2}-\frac{3}{4}\dim V_1^{\sigma^4}-\frac{1}{4}\dim V_1.$$ - For $n=10$, $$\dim (\tilde{V}_\sigma)_1=24+18\dim V_1^\sigma-6\dim V_1^{\sigma^2}-3\dim V_1^{\sigma^5}+\dim V_1.$$ Leech lattice and isometries {#S:Leech} ============================ Let $\Lambda$ be the Leech lattice, the unique even unimodular lattice of rank $24$ having no norm $2$ vectors. In this article, we adopt the notation of [@HL90] for the conjugacy classes of the isometry group $O(\Lambda)$ of $\Lambda$. For $g\in O(\Lambda)$, set $\Lambda^g=\{v\in\Lambda\mid g(v)=v\}$. It follows from Lemma \[L:P0\] that $P_0^g(\Lambda)=(\Lambda^g)^*$ and $P_0^g(\Lambda)\subset(1/|g|)\Lambda^g$, where $P_0^g$ is the orthogonal projection to ${\mathbb{R}}\otimes_{\mathbb{Z}}\Lambda^g$. Let $C_{O(\Lambda)}(g)$ denote the centralizer of $g$ in $O(\Lambda)$. Note that the sublattices $\Lambda^g$ and the quotient groups $C_{O(\Lambda)}(g)/\langle-1\rangle$ are described in [@HL90 Table 1] (cf. [@HM16]) and in [@Wi83 Table 1], respectively. For $g\in O(\Lambda)$, let $p\in {\mathbb{Q}}$ be such that $P_0^g(\Lambda)\subset p\Lambda^g$. For $q\in{\mathbb{Q}}$, set $$\Lambda(g,p,q):=\{x+P_0^g(\Lambda)\in p\Lambda^g/P_0^g(\Lambda) \mid (x+P_0^g(\Lambda))(q)\neq\emptyset\},\label{E:Lambdagpq}$$ where $(x+P_0^g(\Lambda))(q)=\{y\in x+P_0^g(\Lambda)\mid (y|y)=q\}$. In this section, we describe the $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,p,q)$ and the matrix $\mathcal{M}((x+P_0^g(\Lambda))(q))$ for some cases. Throughout this section, let $X_n$ (resp. $\tilde{X}_n$) denote the Cartan matrix of the root system of type $X_n$ (resp.
the generalized Cartan matrix of the affine root system of type $\tilde{X}_n$). We omit the proofs of the following lemmas, which can be verified by the computer algebra system MAGMA [@MAGMA]. \[L:uni\] Let $g$ be an isometry of $\Lambda$ whose conjugacy class is $4C$, $5B$, $6G$, $7B$ or $8E$. Let $n$ be the order of $g$. Set $$s:=\begin{cases}1/(2n)& (g\in 6G),\\ 1/n &(g\in 4C,5B,7B,8E).\end{cases}$$ 1. The rank of $\Lambda^g$ is $10$, $8$, $6$, $6$ or $6$ if the conjugacy class of $g$ is $4C$, $5B$, $6G$, $7B$ or $8E$, respectively. 2. The minimum norm of $P_0^g(\Lambda)(=(\Lambda^g)^*)$ is $4s$. 3. If the conjugacy class of $g$ is $5B$ or $7B$, then $C_{O(\Lambda)}(g)$ acts transitively on $\Lambda(g,s,2s)$. In addition, for $x+P_0^g(\Lambda)\in \Lambda(g,s,2s)$, the matrix $\mathcal{M}((x+P_0^g(\Lambda))(2s))$ is equivalent to the generalized Cartan matrix of type $\tilde{A}_4^2$ or $\tilde{A}_6$, respectively. 4. If the conjugacy class of $g$ is $4C$, $6G$ or $8E$, then the $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,s,2s)$ are given in Tables \[T:4C8\], \[T:6G24\] or \[T:8E16\], respectively. 5. If the conjugacy class of $g$ is $4C$ or $5B$, then the $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,s/2,2s)$ are given in Tables \[T:4C32\] or \[T:5B40\], respectively. In Tables \[T:6G24\], \[T:4C32\] and \[T:5B40\], the matrices $\mathcal{M}((x+P_0^g(\Lambda))(2s))$ are identified only if $|(x+P_0^g(\Lambda))(2s)|=5$, $8$ and $7$, respectively, which are enough for our argument. Indeed, we will use in Proposition \[P:con\] the fact that the $C_{O(\Lambda)}(g)$-orbit is unique if $\mathcal{M}((x+P_0^g(\Lambda))(2s))$ is equivalent to the generalized Cartan matrix of type $\tilde{D}_4$, $\tilde{A}_7$ and $\tilde{D}_6$, respectively.
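As a consistency check on the dimension formulae recalled in Section \[S:df\], the closed formula for an individual $n$ can be recovered from the general totient identity by applying the lower-order formulae to the powers of $\sigma$. The following sketch (our own bookkeeping, with the fixed-point dimensions treated as free integer parameters) verifies the case $n=6$:

```python
from fractions import Fraction
from math import gcd
from random import randint

def phi(m):
    """Euler's totient function."""
    return sum(1 for i in range(1, m + 1) if gcd(i, m) == 1)

# The check is an algebraic identity, so it must hold for every choice
# of the (hypothetical) dimensions below.
a  = randint(0, 100)   # dim V_1^{sigma}
b  = randint(0, 100)   # dim V_1^{sigma^2}
c  = randint(0, 100)   # dim V_1^{sigma^3}
v1 = randint(0, 100)   # dim V_1

# Lower-order closed formulae applied to sigma^2 (order 3) and sigma^3
# (order 2) give the weight one dimensions of the corresponding orbifolds:
t = {2: 24 + 4 * b - v1,   # n = 3: 24 + 4 dim V_1^{sigma^2} - dim V_1
     3: 24 + 3 * c - v1,   # n = 2: 24 + 3 dim V_1^{sigma^3} - dim V_1
     6: v1}                # sigma^6 = id, so the "orbifold" is V itself

# Solve the totient identity for n = 6 for t1 = dim(tilde V_sigma)_1:
n = 6
rhs = Fraction(24)
for d in (2, 3, 6):
    g = gcd(d, n // d)
    rhs -= Fraction(phi(g), g) * (24 + Fraction(n, d) * a - t[d])
t1 = 24 + 6 * a - rhs  # the d = 1 term of the identity reads 24 + 6a - t1

# This agrees with the closed formula listed for n = 6:
assert t1 == 24 + 12 * a - 4 * b - 3 * c + v1
```

The same bookkeeping recovers the remaining closed formulae, e.g. for $n=4,8,10$, by substituting the formulae for the proper divisors recursively.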
Orbit length $|(x+P_0^g(\Lambda))(1/2)|$ $\mathcal{M}((x+P_0^g(\Lambda))(1/2))$ -------------- ---------------------------- ---------------------------------------- $15$ $12$ $\tilde{A}_1^6$ $240$ $12$ $\tilde{A}_3^3$ : $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,1/4,1/2)$ for $g\in 4C$[]{data-label="T:4C8"} Orbit length In $(1/6)\Lambda^g$? $|(x+P_0^g(\Lambda))(1/6)|$ $\mathcal{M}((x+P_0^g(\Lambda))(1/6))$ -------------- ---------------------- ---------------------------- ---------------------------------------- $1$ N $6$ $8$ Y $4$ $8$ N $5$ $\tilde{D}_4$ $18$ N $2$ $144$ N $3$ : $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,1/12,1/6)$ for $g\in6G$[]{data-label="T:6G24"} Orbit length $|(x+P_0^g(\Lambda))(1/4)|$ $\mathcal{M}((x+P_0^g(\Lambda))(1/4))$ -------------- ---------------------------- ---------------------------------------- $3$ $6$ $\tilde{A}_1^3$ $12$ $6$ $A_1^2\tilde{A}_3$ $48$ $6$ $\tilde{D}_5$ : $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,1/8,1/4)$ for $g\in 8E$[]{data-label="T:8E16"} Orbit length In $(1/4)\Lambda^g$? $|(x+P_0^g(\Lambda))(1/2)|$ $\mathcal{M}((x+P_0^g(\Lambda))(1/2))$ -------------- ---------------------- ---------------------------- ---------------------------------------- $15$ Y $12$ $240$ Y $12$ $360$ N $8$ $\tilde{A}_1^4$ $1440$ N $8$ $\tilde{A}_3^2$ $2880$ N $4$ $2880$ N $8$ $\tilde{A}_3A_1^4$ $11520$ N $7$ $11520$ N $5$ $ 15360$ N $3$ $ 15360$ N $9$ $23040$ N $4$ $ 23040$ N $8$ $\tilde{D}_5\tilde{A}_1$ $ 23040$ N $8$ $\tilde{A}_7$ : $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,1/8,1/2)$ for $g\in 4C$[]{data-label="T:4C32"} Orbit length In $(1/5)\Lambda^g$?
$|(x+P_0^g(\Lambda))(2/5)|$ $\mathcal{M}((x+P_0^g(\Lambda))(2/5))$ -------------- ---------------------- ---------------------------- ---------------------------------------- $75$ N $8$ $144$ Y $10$ $1440$ N $2$ $3600$ N $8$ $3600$ N $7$ $\tilde{D}_6$ $3600$ N $4$ $7200$ N $4$ : $C_{O(\Lambda)}(g)$-orbits of $\Lambda(g,1/10,2/5)$ for $g\in5B$[]{data-label="T:5B40"} \[L:o2\] Let $g$ be an order $2$ isometry of $\Lambda$. 1. If $g$ belongs to the conjugacy class $\pm 2A$, then $(\alpha|g(\alpha))\in2{\mathbb{Z}}$ for all $\alpha\in\Lambda$. 2. If $g$ belongs to the conjugacy class $2C$, then $(\alpha|g(\alpha))\in 1+2{\mathbb{Z}}$ for some $\alpha\in\Lambda$. Lattice VOAs, automorphisms and twisted modules =============================================== In this section, we review the construction of a lattice VOA and the structure of its automorphism group from [@FLM; @DN]. We also review a construction of irreducible twisted modules for (standard) lifts of isometries from [@Le; @DL] and study the conjugacy classes of (standard) lifts in the automorphism group of a lattice VOA. Lattice VOA and the automorphism group -------------------------------------- Let $L$ be an even lattice of rank $m$ and let $(\cdot |\cdot )$ be the positive-definite symmetric bilinear form on ${\mathbb{R}}\otimes_{\mathbb{Z}}L\cong{\mathbb{R}}^m$. The lattice VOA $V_L$ associated with $L$ is defined to be $M(1) \otimes {\mathbb{C}}\{L\}$. Here $M(1)$ is the Heisenberg VOA associated with $\mathfrak{h}={\mathbb{C}}\otimes_{\mathbb{Z}}L$ and the form $(\cdot|\cdot)$ extended ${\mathbb{C}}$-bilinearly, and ${\mathbb{C}}\{L\}=\bigoplus_{\alpha\in L}{\mathbb{C}}e^\alpha$ is the twisted group algebra with commutator relation $e^\alpha e^\beta=(-1)^{(\alpha|\beta)}e^{\beta}e^{\alpha}$ for $\alpha,\beta\in L$.
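The commutator relation above is realized by a bimultiplicative choice of signs in a basis, fixed below via a $2$-cocycle $\varepsilon$. The following sketch (illustrative only; the basis-dependent formula is one standard choice, not necessarily the one fixed in the text) verifies the commutator relation, the normalization $\varepsilon(\alpha|\alpha)=(-1)^{(\alpha|\alpha)/2}$, and the $2$-cocycle identity for the $A_2$ root lattice:

```python
from random import randint

# Gram matrix of a small even lattice in a fixed basis; A_2 here, but any
# even Gram matrix works for this sketch.
G = [[2, -1],
     [-1, 2]]
m = len(G)

def form(a, b):
    """(a|b) for vectors given in the chosen basis."""
    return sum(a[i] * G[i][j] * b[j] for i in range(m) for j in range(m))

def eps(a, b):
    """A bimultiplicative 2-cocycle with values in {+1, -1}: on basis vectors,
    eps(a_i, a_j) = (-1)^{(a_i|a_j)} if i > j, eps(a_i, a_i) = (-1)^{(a_i|a_i)/2},
    and eps(a_i, a_j) = 1 if i < j; extended biadditively in the exponent."""
    e = sum(a[i] * b[j] * G[i][j] for i in range(m) for j in range(m) if i > j)
    e += sum(a[i] * b[i] * (G[i][i] // 2) for i in range(m))
    return -1 if e % 2 else 1

for _ in range(200):
    a = [randint(-4, 4) for _ in range(m)]
    b = [randint(-4, 4) for _ in range(m)]
    c = [randint(-4, 4) for _ in range(m)]
    ab = [x + y for x, y in zip(a, b)]
    bc = [x + y for x, y in zip(b, c)]
    # commutator relation e^a e^b = (-1)^{(a|b)} e^b e^a
    assert eps(a, b) * eps(b, a) == (-1) ** (form(a, b) % 2)
    # normalization eps(a, a) = (-1)^{(a|a)/2}
    assert eps(a, a) == (-1) ** ((form(a, a) // 2) % 2)
    # 2-cocycle identity, i.e. associativity of e^a e^b e^c
    assert eps(a, b) * eps(ab, c) == eps(b, c) * eps(a, bc)
```

Since $\varepsilon$ is of the form $(-1)^{B(\cdot,\cdot)}$ with $B$ biadditive, the cocycle identity holds automatically; the two normalization conditions are what single out this choice.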
We fix a $2$-cocycle $\varepsilon(\cdot|\cdot):L\times L\to\{\pm1\}$ for ${\mathbb{C}}\{L\}$ such that $e^\alpha e^\beta=\varepsilon(\alpha|\beta)e^{\alpha+\beta}$, $\varepsilon(\alpha|\alpha)=(-1)^{(\alpha|\alpha)/2}$ and $\varepsilon(\alpha|0)=\varepsilon(0|\alpha)=1$ for all $\alpha,\beta\in L$. It is well-known that the lattice VOA $V_L$ is strongly regular, and its central charge is equal to $m$, the rank of $L$. Let $\hat{L}$ be the central extension of $L$ by $\langle-1\rangle$ associated with the $2$-cocycle $\varepsilon(\cdot|\cdot)$. Let ${\mathrm{Aut}\,}\hat{L}$ be the set of all group automorphisms of $\hat L$. For $\varphi\in {\mathrm{Aut}\,}\hat{L}$, we define the element $\bar{\varphi}\in{\mathrm{Aut}\,}L$ by ${\varphi}(e^\alpha)\in\{\pm e^{\bar{\varphi}(\alpha)}\}$, $\alpha\in L$. Set $$O(\hat{L})=\{{\varphi}\in{\mathrm{Aut}\,}\hat L\mid \bar{\varphi}\in O(L)\}.$$ For $\chi\in\mathrm{Hom}(L,{\mathbb{Z}}_2)$, the map $\hat{L}\to\hat{L}$, $e^{\alpha}\mapsto (-1)^{\chi(\alpha)}e^{\alpha}$, is an element in $O(\hat{L})$. Such automorphisms form an elementary abelian $2$-subgroup of $O(\hat{L})$ of rank $m$, which is also denoted by $\mathrm{Hom}(L,{\mathbb{Z}}_2)$ without confusion. It was proved in [@FLM Proposition 5.4.1] that the following sequence is exact: $$1 \longrightarrow \mathrm{Hom}(L, {\mathbb{Z}}_2) { \longrightarrow} O(\hat{L}) \bar\longrightarrow O(L)\longrightarrow 1.\label{Exact1}$$ We identify $O(\hat{L})$ as a subgroup of ${\mathrm{Aut}\,}V_L$ as follows: for $\varphi\in O(\hat{L})$, the map $$\alpha_1(-n_1)\dots\alpha_m(-n_s)e^\beta\mapsto \bar{\varphi}(\alpha_1)(-n_1)\dots\bar{{\varphi}}(\alpha_s)(-n_s){\varphi}(e^\beta)$$ is an automorphism of $V_L$, where $n_1,\dots,n_s\in{\mathbb{Z}}_{>0}$ and $\alpha_1,\dots,\alpha_s,\beta\in L$. Let $N(V_L)=\langle\exp({a_{(0)}})\mid a\in (V_L)_1\rangle,$ which is called the *inner automorphism group* of $V_L$. 
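Since $\mathrm{Hom}(L,{\mathbb{Z}}_2)\cong{\mathbb{Z}}_2^m$, the exact sequence gives $|O(\hat{L})|=2^m|O(L)|$ whenever $O(L)$ is finite. A toy illustration (the brute-force enumeration and the choice $L=A_1\oplus A_1$ are ours, for illustration only):

```python
from itertools import product

# L = A_1 + A_1: rank m = 2 with Gram matrix 2*I, so O(L) consists
# exactly of the signed permutation matrices.
G = [[2, 0],
     [0, 2]]
m = 2

def is_isometry(M):
    """Check M^T G M == G for an integer matrix M given by rows M[0], M[1]."""
    return all(
        sum(M[k][i] * G[k][l] * M[l][j] for k in range(m) for l in range(m))
        == G[i][j]
        for i in range(m) for j in range(m))

# An isometry sends basis vectors to norm 2 vectors, which forces all
# entries into {-1, 0, 1}; hence the enumeration below is exhaustive.
OL = [([a, b], [c, d])
      for a, b, c, d in product([-1, 0, 1], repeat=4)
      if is_isometry(([a, b], [c, d]))]

assert len(OL) == 8             # signed permutations of two coordinates
assert 2 ** m * len(OL) == 32   # |O(L-hat)| = |Hom(L, Z_2)| * |O(L)|
```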
We often identify $\mathfrak{h}$ with $\mathfrak{h}(-1){ \mathds{1}}$ via $h\mapsto h(-1){ \mathds{1}}$. For $v\in\mathfrak{h}$, set $$\sigma_{v}=\exp(-2\pi\sqrt{-1}v_{(0)})\in N(V_L).$$ Note that $\sigma_v$ is the identity map of $V_L$ if and only if $v\in L^*$. Let $$D=\{\sigma_v\mid v\in\mathfrak{h}/L^*\}\subset N(V_L).$$ Note that ${\mathrm{Hom}}(L,{\mathbb{Z}}_2)=\{\sigma_v\mid v\in (L^*/2)/L^*\}\subset D$ and that for ${\varphi}\in O(\hat{L})$ and $v\in{\mathfrak{h}}$, we have ${\varphi}\sigma_v{\varphi}^{-1}=\sigma_{\bar{{\varphi}}(v)}$. \[Prop:AutVLambda\] The automorphism group ${\mathrm{Aut}\,}V_L$ of $V_L$ is generated by the normal subgroup $N(V_L)$ and the subgroup $O(\hat L)$. To conclude this subsection, we consider the case where $L$ has no norm $2$ vectors. In this case, $(V_L)_1=\{h(-1){ \mathds{1}}\mid h\in\mathfrak{h}\}\cong{\mathfrak{h}}$, $N(V_L)=D$ and $N(V_L)\cap O(\hat L)={\rm Hom}(L,{\mathbb{Z}}_2)\cong{\mathbb{Z}}_2^{m}.$ Hence we obtain a canonical group homomorphism $$\mu:{\mathrm{Aut}\,}V_L\to{\mathrm{Aut}\,}V_L/N(V_L)\cong O(\hat L)/(O(\hat L)\cap N(V_L))\cong O(L).\label{Def:mu}$$ Standard lifts of isometries of lattices ---------------------------------------- Let $L$ be an even lattice. A (standard) lift in $O(\hat{L})$ of an isometry of $L$ is defined as follows. An element ${\varphi}\in O(\hat{L})$ is called a *lift* of $g\in O(L)$ if $\bar{{\varphi}}=g$, where the map $\ \bar{}\ $ is defined as in . A lift $\phi_g$ of $g\in O(L)$ is said to be *standard* if $\phi_g(e^\alpha)=e^{\alpha}$ for all $\alpha\in L^g=\{\beta\in L\mid g(\beta)=\beta\}$. For any isometry of $L$, there exists a standard lift. The orders of standard lifts are determined in [@EMS] as follows: \[Lem:ordSLift\] Let $g\in O(L)$ be of order $n$ and let $\phi_g$ be a standard lift of $g$. 1. If $n$ is odd, then the order of $\phi_g$ is also $n$. 2. Assume that $n$ is even. Then $\phi^n_g(e^\alpha)=(-1)^{(\alpha|g^{n/2}(\alpha))}e^\alpha$ for all $\alpha\in L$.
In particular, if $(\alpha|g^{n/2}(\alpha))\in2{\mathbb{Z}}$ for all $\alpha\in L$, then the order of $\phi_g$ is $n$; otherwise the order of $\phi_g$ is $2n$. Next we discuss the conjugacy classes of lifts in ${\mathrm{Aut}\,}V_L$. Recall that $L_g=\{\beta\in L\mid (\beta|L^g)=0\}$ and $P_0^g$ is the orthogonal projection from ${\mathbb{R}}\otimes_{\mathbb{Z}}L$ to ${\mathbb{R}}\otimes_{\mathbb{Z}}L^g$. For the definitions of $\sigma_v$ and $D$, see the previous subsection. \[Lem:conjD\] Let ${\varphi}\in O(\hat{L})$ and set $g=\bar{\varphi}$. 1. For $v\in{\mathbb{C}}\otimes_{\mathbb{Z}}L_{g}$, $\sigma_v{\varphi}$ is conjugate to ${\varphi}$ by an element of $D$. 2. For $v\in P_0^g(L^*)(=(L^g)^*)$, $\sigma_v{\varphi}$ is conjugate to ${\varphi}$ by an element of $D$. Let $n$ be the order of $g$. Since the action of $g$ on ${\mathbb{C}}\otimes_{\mathbb{Z}}L_g$ is fixed-point free, we have $\sum_{i=0}^{n-1}g^{i}=0$ on it. Set $f=\sum_{i=1}^{n-1}ig^{i}$. Then $(g-1)f= -\sum_{i=1}^{n-1}g^i+(n-1)id=n\cdot id$ on ${\mathbb{C}}\otimes_{\mathbb{Z}}L_g$. For $v\in{\mathbb{C}}\otimes_{\mathbb{Z}}L_{g}$, we obtain $$\sigma_{{f}(v/n)}(\sigma_v{\varphi})(\sigma_{f(v/n)})^{-1}=\sigma_{v-{(g-1)}f(v/n)}{\varphi}={\varphi},$$ which proves (1). Let $v\in P_0^g(L^*)$. Then there exists $v'\in {\mathbb{Q}}\otimes L_g$ such that $v+v'\in L^*$. Note that $\sigma_{v+v'}=id$ on $V_L$. It follows from (1) that $\sigma_{v+v'}{\varphi}(={\varphi})$ is conjugate to $\sigma_{v}{\varphi}$ by an element of $D$. Hence we obtain (2). \[Lem:conjD2\] For any isometry of $L$, its standard lift is unique up to conjugation by $D$. Let $g\in O(L)$ and let $\phi_g$ and $\phi'_g$ be standard lifts of $g$. By the exact sequence , there exists $v\in L^*/2$ such that $\phi'_g=\sigma_v\phi_g$. Since both $\phi_g$ and $\phi'_g$ are standard lifts of $g$, $\sigma_v(e^{\alpha})=e^{\alpha}$ for all $\alpha\in L^g$. Hence $(v|L^g)\subset{\mathbb{Z}}$. Set $v'=P_0^g(v)$. 
Then $v'\in (L^g)^*$ since $(v'|\alpha)=(v|\alpha)\in{\mathbb{Z}}$ for all $\alpha\in L^g$. By Lemma \[Lem:conjD\] (1), $\phi'_g=\sigma_{v}\phi_g$ is conjugate to $\sigma_{v'}\phi_g$ by an element of $D$, and by Lemma \[Lem:conjD\] (2), $\sigma_{v'}\phi_g$ is conjugate to $\phi_g$ by an element of $D$. \[Lem:surjC\] Assume that $L$ has no norm $2$ vectors. Let $\phi_g$ be a standard lift of $g\in O(L)$ and let $\mu$ be the group homomorphism described in . 1. $\mu(C_{{\mathrm{Aut}\,}V_L}(\phi_g))=C_{O(L)}(g)$. 2. For ${\varphi}\in\mu^{-1}(g)$, there exists $v\in {\mathbb{C}}\otimes_{\mathbb{Z}}L^g$ such that ${\varphi}$ is conjugate to $\sigma_{v}\phi_g$ by an element of $D$. Clearly, $\mu(C_{{\mathrm{Aut}\,}V_L}(\phi_g))\subset C_{O(L)}(g)$. Let $f\in C_{O(L)}(g)$ and let $\phi_f\in O(\hat L)$ be a standard lift of $f$. Since $f$ commutes with $g$, we have $f(L^g)=L^g$. Hence $\phi_f\phi_g\phi_f^{-1}(e^\alpha)=e^\alpha$ for $\alpha\in L^g$, that is, $\phi_f\phi_g\phi_f^{-1}$ is also a standard lift of $g$. By Proposition \[Lem:conjD2\], there exists $\sigma\in D$ such that $\sigma\phi_f\phi_g\phi_f^{-1}\sigma^{-1}=\phi_g$. Hence $\sigma\phi_f\in C_{{\mathrm{Aut}\,}V_L}(\phi_g)$ and $\mu (\sigma\phi_f)=f$, which proves (1). Since the kernel of $\mu$ is $D$, there exists $v\in\mathfrak{h}$ such that ${\varphi}=\sigma_{v}\phi_g$. Set $v'=P_0^g(v)$. Then $v-v'\in {\mathbb{C}}\otimes_{\mathbb{Z}}L_g$. By Lemma \[Lem:conjD\] (1), ${\varphi}=\sigma_{v}\phi_g$ is conjugate to $\sigma_{v'}\phi_g$ by an element of $D$. Hence we obtain (2). Irreducible twisted modules for lattice VOAs {#Sec:twist} -------------------------------------------- Let $L$ be an even unimodular lattice. Let $g\in O(L)$ be of order $n$ and $\phi_g\in O(\hat{L})$ be a standard lift of $g$. Then $V_L$ has a unique irreducible $\phi_g$-twisted $V_L$-module, up to isomorphism ([@DLM2]). 
Such a module $V_L[\phi_g]$ was constructed in [@Le; @DL] explicitly; as a vector space, $$V_L[\phi_g]\cong M(1)[g]\otimes{\mathbb{C}}[P_0^g(L)]\otimes T,$$ where $M(1)[g]$ is the “$g$-twisted” free bosonic space, ${\mathbb{C}}[P_0^g(L)]$ is the group algebra of $P_0^g(L)$ and $T$ is an irreducible module for a certain “$g$-twisted” central extension of $L$ (see [@Le Propositions 6.1 and 6.2] and [@DL Remark 4.2] for details). Recall that $$\dim T=|L_g/(1-g)L|^{1/2}$$ and that the conformal weight $\rho_T$ of $T$ is given by $$\rho_T:=\frac{1}{4n^2}\sum_{j=1}^{n-1}j(n-j)\dim \mathfrak{h}_{(j)},\label{Eq:rho}$$ where $\mathfrak{h}_{(j)}=\{x\in{\mathfrak{h}}\mid g(x)=\exp((j/n)2\pi\sqrt{-1})x\}$. Note that $M(1)[g]$ is spanned by vectors of the form $$x_1(-m_1)\dots x_s(-m_s)1,$$ where $m_i\in(1/n){\mathbb{Z}}_{>0}$ and $x_i\in\mathfrak{h}_{(nm_i)}$ for $1\le i\le s$. In addition, the conformal weight of $x_1(-m_1)\dots x_s(-m_s)\otimes e^\alpha\otimes t\in V_L[\phi_g]$ is given by $$\sum_{i=1}^s m_i+\frac{(\alpha|\alpha)}{2}+\rho_T,\label{Eq:wtpg}$$ where $x_1(-m_1)\dots x_s(-m_s)\in M(1)[g]$, $e^\alpha\in{\mathbb{C}}[P_0^g(L)]$ and $t\in T$. Note that $m_i\in(1/n){\mathbb{Z}}_{>0}$ and that the conformal weight of $V_L[\phi_g]$ is $\rho_T$. Let $v\in{\mathbb{Q}}\otimes_{\mathbb{Z}}L^g\subset \mathfrak{h}_{(0)}$. Then $\sigma_v$ has finite order on $V_L$ and commutes with $\phi_g$. Note that on $(V_L)_1$, $(\alpha|\beta)=\langle\alpha|\beta\rangle$ for $\alpha,\beta\in{\mathfrak{h}}$. Let $V_L[\phi_g]^{(v)}$ be the irreducible $\sigma_v\phi_g$-twisted $V_L$-module defined as in Proposition \[Prop:twist\]. It is the unique irreducible $\sigma_v\phi_g$-twisted $V_L$-module, up to isomorphism, and is also denoted by $V_L[\sigma_v\phi_g]$.
By the action of $\omega_{(1)}^{(v)}$ (see ), we know that the conformal weight of $x_1(-m_1)\dots x_s(-m_s)\otimes e^\alpha\otimes t$ in $V_L[\phi_g]^{(v)}$ ($m_i\in(1/n){\mathbb{Z}}_{>0}$, $\alpha\in P_0^g(L)$ and $t\in T$) is $$\sum_{i=1}^s m_i+\frac{(\alpha|\alpha)}{2}+\rho_T +\langle v|\alpha\rangle+\frac{\langle v|v\rangle}2=\sum_{i=1}^s m_i+\frac{(v+\alpha|v+\alpha)}{2}+\rho_T.\label{Eq:wttw}$$ Notice that the conformal weight of $V_L[\phi_g]^{(v)}$ is $$\frac{1}2\min\{(\beta|\beta)\mid \beta\in v+P_0^g(L)\}+\rho_T.\label{Eq:lowtw}$$ By the explicit description of $\phi_g$-twisted and $\sigma_v\phi_g$-twisted vertex operators in [@Le; @DL] and , the $0$-th mode of $x\in\mathfrak{h}_{(0)}\subset (V_L^{\sigma_v\phi_g})_1$ on $V_L[\phi_g]^{(v)}$ is given by $$x^{(v)}_{(0)}(w\otimes e^\alpha\otimes t)=(x|v+\alpha)w\otimes e^\alpha\otimes t,\label{Eq:h0tw}$$ where $w\in M(1)[g]$, $e^\alpha\in{\mathbb{C}}[P_0^g(L)]$ and $t\in T$. Now, we assume that ${\mathfrak{h}}_{(0)}$ is a Cartan subalgebra of the reductive Lie algebra $(V_L^{\sigma_v\phi_g})_1$; note that this assumption is clearly satisfied when $L$ is the Leech lattice $\Lambda$ since $(V_\Lambda^{\sigma_v\phi_g})_1={\mathfrak{h}}_{(0)}$. Recall that $V_L[\sigma_v\phi_g]$ is a module for $(V_L^{\sigma_v\phi_g})_1$ via the $0$-th product and $(V_L[\sigma_v\phi_g])_1$ is a submodule. Let $\Pi(V_L[\sigma_v\phi_g])$ (resp. $\Pi((V_L[\sigma_v\phi_g])_1)$) be the set of ${\mathfrak{h}}_{(0)}$-weights of $V_L[\sigma_v\phi_g]$ (resp. $(V_L[\sigma_v\phi_g])_1$). The equation above shows that the ${\mathfrak{h}}_{(0)}$-weight of $w\otimes e^\alpha\otimes t$ is $v+\alpha\in{\mathfrak{h}}_{(0)}$. Then, we have $$\Pi(V_L[\sigma_v\phi_g])=v+P_0^g(L).\label{Eq:pi}$$ The following lemma is immediate from . \[L:hwtt\] Assume that $\rho_T\ge (1-1/n)$. 1. $\Pi((V_L[\sigma_v\phi_g])_1)=(v+P_0^g(L))(2(1-\rho_T))(=\{x\in v+P_0^g(L)\mid (x|x)=2(1-\rho_T)\}).$ 2.
If one of the following holds, then $\Pi((V_L[\sigma_v\phi_g])_1)=\{0\}$: 1. $\rho_T\ge1$; 2. The minimum norm of $v+P_0^g(L)$ is greater than $2(1-\rho_T)$. Conjugacy class of the automorphism group of the Leech lattice VOA ================================================================== In this section, we study conjugacy classes of the automorphism group of the Leech lattice VOA. We use the same notation as in Sections 3 and 4 for the Leech lattice $\Lambda$, the isometry group $O(\Lambda)$ and the Leech lattice VOA $V_\Lambda$. Note that for (non-fixed-point-free) elements in $O(\Lambda)$, the characteristic polynomials and the fixed-point sublattices are summarized in [@HL90 Table 1] (cf. [@HM16]). Note also that the conjugacy class of a power of an element of $O(\Lambda)/\langle-1\rangle$ is described in [@Wi83 Table 1]. Recall that for ${\varphi}\in{\mathrm{Aut}\,}V_\Lambda$, $(V_\Lambda^{\varphi})_1=\mathfrak{h}_{(0)}=\{v\in\mathfrak{h}\mid \mu({\varphi})(v)=v\}$, where $\mu:{\mathrm{Aut}\,}V_\Lambda\to O(\Lambda)$ is the surjective map given in . For an isometry $g$ of $\Lambda$, let $\phi_g\in{\mathrm{Aut}\,}V_\Lambda$ denote a standard lift of $g$ (see Section 4.2) and let $\rho_T$ be the conformal weight of the subspace $T$ of $V_\Lambda[\phi_{g}]$ given in . For details on the set $\Pi((V_\Lambda[\varphi])_1)$ of $\mathfrak{h}_{(0)}$-weights, see Sections \[regularauto\] and \[Sec:twist\]. For the related matrix $\mathcal{M}(\cdot)$, see Section \[S:lattice\]. Throughout this section, let $\tilde{X}_n$ denote the generalized Cartan matrix of the affine root system of type $\tilde{X}_n$.
  $|\varphi|$   $\dim (V_\Lambda^\varphi)_1$   $\mathcal{M}(\Pi((V_\Lambda[\varphi])_1))$   Conjugacy class of $g$   $|\phi_g|$   $\rho_T$
  ------------- ------------------------------ -------------------------------------------- ------------------------ ------------ ----------
  $4$           $10$                           $\tilde{A}_3^3$                              $4C$                     $4$          $3/4$
  $5$           $8$                            $\tilde{A}_4^2$                              $5B$                     $5$          $4/5$
  $6$           $6$                            $\tilde{D}_4$                                $6G$                     $12$         $11/12$
  $7$           $6$                            $\tilde{A}_6$                                $7B$                     $7$          $6/7$
  $8$           $10$                           $\tilde{A}_7$                                $4C$                     $4$          $3/4$
  $8$           $6$                            $\tilde{D}_5$                                $8E$                     $8$          $7/8$
  $10$          $8$                            $\tilde{D}_6$                                $5B$                     $5$          $4/5$

  : $\varphi\in{\mathrm{Aut}\,}V_\Lambda$ and $g=\mu({\varphi})\in O(\Lambda)$[]{data-label="T:CAut"}

\[P:con\] Let $\varphi$ be an automorphism of $V_\Lambda$. Assume that the order $|\varphi|$ of $\varphi$, $\dim (V_\Lambda^g)_1$ and the matrix $\mathcal{M}(\Pi((V_\Lambda[\varphi])_1))$ are given as in a row of Table \[T:CAut\]. Then 1. $g=\mu(\varphi)$ belongs to the conjugacy class of $O(\Lambda)$ given in the same row of Table \[T:CAut\]; 2. $\varphi$ belongs to a unique conjugacy class of ${\mathrm{Aut}\,}V_\Lambda$. By Lemma \[Lem:surjC\] (2), we may assume that $\varphi=\sigma_v\phi_g$ for some $v\in{\mathbb{C}}\otimes_{\mathbb{Z}}\Lambda^g$, up to conjugation in ${\mathrm{Aut}\,}V_\Lambda$. Since ${\varphi}$ has finite order, $v\in {\mathbb{Q}}\otimes_{\mathbb{Z}}\Lambda^g$, and by Lemma \[Lem:conjD\] (2), we may regard $v$ as an element in $({\mathbb{Q}}\otimes_{\mathbb{Z}}\Lambda^g)/P_0^g(\Lambda)$, up to conjugation in ${\mathrm{Aut}\,}V_\Lambda$. Note that $\dim (V_\Lambda^\varphi)_1$ is equal to the rank of $\Lambda^g$ and that $|g|$ divides $|\varphi|$. Note also that $\sigma_v$ and $\phi_g$ are mutually commutative since $g(v)=v$. By [@HL90 Table 1] (cf. [@HM16 p634]), if $(|\varphi|,\dim(V_\Lambda^\varphi)_1)=(4,10),(5,8),(7,6)$, or $(8,10)$, then the conjugacy class of $g$ is uniquely determined as desired; if $(|\varphi|,\dim(V_\Lambda^\varphi)_1)=(6,6)$ (resp.
$(8,6)$, $(10,8)$), then the conjugacy class of $g$ is one of $\{3C,6C,-6C,-6D,6G\}$ (resp. $\{-4C,4F,8E\}$, $\{-2A,5B\}$). We will show that it is $6G$ (resp. $8E$, $5B$) as in Table \[T:CAut\]. Note that by using the characteristic polynomial of $g$ (e.g. [@HL90 Table 1]), one can easily compute $\rho_T$. First, $g$ does not belong to the conjugacy classes $-2A$, $3C$, $-4C$ or $-6C$; otherwise $\rho_T=1$, and by Lemma \[L:hwtt\] (2), $\Pi((V_\Lambda[\varphi])_1)=\{0\}$, which contradicts the assumption. Next, we suppose, for a contradiction, that $g$ belongs to the conjugacy class $4F$ (resp. $6C$, $-6D$). Set $s=8$ (resp. $6$, $6$). Since $g^2$ belongs to the conjugacy class $2C$ (resp. $2A$, $2A$), the order of $\phi_g$ is $s$ by Lemmas \[L:o2\] and \[Lem:ordSLift\]. It follows from $\sigma_v^s=id$ that $v\in (1/s)\Lambda^g$. Recall from [@HL90 Table 1] that $\Lambda^g\cong 2{\mathbb{Z}}^{\oplus6}$ (resp. $\sqrt2E_6$, $\sqrt3E_6^*$). Then $P_0^g(\Lambda)\cong (\Lambda^g)^*\cong (1/2){\mathbb{Z}}^{\oplus6}$ (resp. $(1/\sqrt2)E_6^*$, $(1/\sqrt3)E_6$) and its minimum norm is $1/4$ (resp. $1/3$, $1/3$). By $\rho_T=15/16$ (resp. $5/6$, $5/6$), Lemma \[L:hwtt\] (2) and the assumption $\Pi((V_\Lambda[\varphi])_1)\neq\{0\}$, $v+P_0^g(\Lambda)$ has vectors of norm $1/8$ (resp. $1/3$, $1/3$). By the structure of $\Lambda^g$, the coset $v+P_0^g(\Lambda)$ has exactly $4$ (resp. $9$, $10$) vectors of norm $1/8$ (resp. $1/3$, $1/3$); notice that this number is independent of the choice of $v\in (1/s)\Lambda^g$. This contradicts the assumption $|\Pi((V_\Lambda[\varphi])_1)|=6$ (resp. $5$, $5$) by Lemma \[L:hwtt\] (1). Thus we obtain (1). Let us prove (2). For the conjugacy classes specified in (1), $\rho_T$ and $|\phi_g|$ are summarized in Table \[T:CAut\]. Since $\mu$ is surjective, we may fix $g$ up to conjugation in ${\mathrm{Aut}\,}V_\Lambda$. Let $n$ be the order of $g$. We suppose, for a contradiction, that ${\varphi}=\phi_g$. By Table \[T:CAut\], $\rho_T\ge (n-1)/n$.
In addition, by Lemma \[L:uni\] (2), the minimum norm of $P_0^g(\Lambda)$ is greater than $2(1-\rho_T)$. By Lemma \[L:hwtt\] (2), we have $\Pi((V_\Lambda[\varphi])_1)=\{0\}$, which contradicts the assumption. Hence ${\varphi}\neq\phi_g$, and $v\notin P_0^g(\Lambda)$. By Lemma \[L:hwtt\] (1), $v+P_0^g(\Lambda)$ has vectors of norm $2(1-\rho_T)$. Let us show that $v+P_0^g(\Lambda)$ is unique, up to the action of $C_{O(\Lambda)}(g)$. If $(|{\varphi}|,\dim (V_\Lambda^g)_1)=(6,6)$ (resp. $(8,10)$ or $(10,8)$), then $(|\phi_g|,|{\varphi}|)=(2n,n)$ (resp. $(n,2n)$) and hence $|\sigma_v|=2n$. This implies that $v\in (1/2n)\Lambda^g$. Since $v+P_0^g(\Lambda)$ has vectors of norm $2(1-\rho_T)$, we have $v+P_0^g(\Lambda)\in\Lambda(g,1/2n,1/n)$ (resp. $\Lambda(g,1/2n,2/n)$). (For the definition of $\Lambda(g,p,q)$, see .) For the other cases, $|\phi_g|=|{\varphi}|=n$, and $v\in (1/n)\Lambda^g$. Hence $v+P_0^g(\Lambda)\in \Lambda(g,1/n,2/n)$. Then by Lemma \[L:uni\] (3), (4), (5) (see also Tables \[T:4C8\], \[T:6G24\], \[T:8E16\], \[T:4C32\] and \[T:5B40\]), the assumption on $\mathcal{M}(\Pi((V_\Lambda[\varphi])_1))$ in Table \[T:CAut\] and Lemma \[L:hwtt\] (1), $v+P_0^g(\Lambda)$ is unique up to the action of $C_{O(\Lambda)}(g)$. Since $\mu(C_{{\mathrm{Aut}\,}V_\Lambda}(\phi_g))=C_{O(\Lambda)}(g)$ (see Lemma \[Lem:surjC\] (1)), ${\varphi}=\sigma_{v}\phi_g$ is unique, up to conjugation in ${\mathrm{Aut}\,}V_\Lambda$. Therefore we obtain (2). (1) In the cases $(|{\varphi}|,\dim (V_\Lambda^g)_1)=(6,6), (8,10)$ and $(10,8)$, the order of $\sigma_v$ is actually $2n$; indeed $v\notin (p/n)\Lambda^g$ for any integer $p$ relatively prime to $n$, as can be seen by using the tables of Lemma \[L:uni\] and the minimum norm of $v+P_0^g(\Lambda)$. (2) In the case $(|{\varphi}|,\dim (V_\Lambda^g)_1)=(6,6)$, one can check directly that $\phi_g^6=\sigma_v^6$. Hence the order of ${\varphi}=\sigma_v\phi_g$ is actually six.
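As a numerical cross-check of the $\rho_T$ column of Table \[T:CAut\] and of the coset count in the proof of Proposition \[P:con\], the Python sketch below evaluates for the classes $5B$ and $7B$ (for these classes the eigenvalue multiplicities are forced: each primitive $n$-th root of unity occurs with multiplicity $(24-\operatorname{rank}\Lambda^g)/(n-1)$), and counts by brute force the vectors of norm $1/8$ in a representative coset of $(1/2){\mathbb{Z}}^{\oplus6}$ of the kind arising in the $4F$ step. The variable names and the choice of representative are ours, not part of the proof.

```python
from fractions import Fraction
from itertools import product

def rho_T(n, mult):
    # Eq. (Eq:rho): rho_T = (1/(4 n^2)) * sum_{j=1}^{n-1} j (n - j) dim h_(j),
    # where mult[j] = dim h_(j).
    return sum(Fraction(j * (n - j) * mult[j], 4 * n * n) for j in range(1, n))

# Class 5B: fixed rank 8, so each primitive 5th root of unity has
# multiplicity (24 - 8)/4 = 4; Table T:CAut lists rho_T = 4/5.
assert rho_T(5, {j: 4 for j in range(1, 5)}) == Fraction(4, 5)

# Class 7B: fixed rank 6, multiplicity (24 - 6)/6 = 3; rho_T = 6/7.
assert rho_T(7, {j: 3 for j in range(1, 7)}) == Fraction(6, 7)

# 4F step: Lambda^g = 2Z^6, so P_0^g(Lambda) = (1/2)Z^6.  Take a shift
# v in (1/8)Lambda^g = (1/4)Z^6 whose coset contains norm-1/8 vectors,
# and count those vectors by brute force over a sufficiently large box.
v = [Fraction(1, 4), Fraction(1, 4), 0, 0, 0, 0]
count = 0
for x in product([Fraction(m, 2) for m in range(-2, 3)], repeat=6):
    y = [vi + xi for vi, xi in zip(v, x)]
    if sum(c * c for c in y) == Fraction(1, 8):
        count += 1
print(count)  # 4, matching "exactly 4 vectors of norm 1/8"
```

The brute-force box $\{-1,-1/2,0,1/2,1\}^6$ suffices here because any component of absolute value greater than $1/2$ already has square norm exceeding $1/8$.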
Uniqueness of holomorphic VOAs of central charge $24$ ===================================================== In this section, we will prove the main theorem using Theorem \[T:RO\] and the Leech lattice VOA. Let $V$ be a strongly regular holomorphic VOA of central charge $24$ whose weight one Lie algebra has the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$. Set ${\mathfrak{g}}=V_1=\bigoplus_{i=1}^t{\mathfrak{g}}_i$, where ${\mathfrak{g}}_i$ are simple ideals. Let $n$ be the least common multiple of the Coxeter numbers of ${\mathfrak{g}}_i$, namely, $n=4,5,6,7,8,8$ or $10$, respectively. Let $k_i$ be the level of ${\mathfrak{g}}_i$. Let $\mathfrak{h}$ be a Cartan subalgebra of ${\mathfrak{g}}$. We fix a set of simple roots. Let $\tilde\rho_i$ be the sum of all fundamental (co)weights of ${\mathfrak{g}}_i$. As in Lemma \[L:u\], we set $$u:=\sum_{i=1}^t\frac{1}{h_i}\tilde\rho_i,\qquad \sigma:=\sigma_u=\exp(-2\pi\sqrt{-1}u_{(0)}) \in{\mathrm{Aut}\,}V.\label{Def:tau7}$$ By Lemma \[L:u\], we have $\langle u|u\rangle\in(2/n){\mathbb{Z}}$. By Lemma \[Lem:Kacfpa\], the restriction of $\sigma$ to ${\mathfrak{g}}$ is a regular automorphism of order $n$. Hence $V^{\sigma}_1={\mathfrak{g}}^{\sigma}=\mathfrak{h}$. Let $U$ be the subVOA of $V$ generated by $V_1$. By Proposition \[Prop:posl\], $U\cong \bigotimes_{i=1}^tL_{{\mathfrak{g}}_i}(k_i,0)$. Then $U$ is strongly regular and by Proposition \[Prop:conf\], the conformal vectors of $U$ and $V$ are the same. Hence $V$ is a direct sum of finitely many irreducible $U$-submodules. \[L:Ord7\] The spectrum of $u_{(0)}$ on $V$ belongs to $(1/n){\mathbb{Z}}$ and the order of $\sigma$ on $V$ is $n$. Let $M\cong \bigotimes_{i=1}^tL_{{\mathfrak{g}}_i}(k_i,\lambda_i)$ be an irreducible $U$-submodule of $V$. Since $U$ and $V$ share the same conformal vector, the conformal weight of $M$ is an integer. 
In addition, if $M\neq U$, then the conformal weight of $M$ is at least $2$ since $U_0=V_0$ and $U_1=V_1$. By Lemma \[L:wtu\] (1), $\sigma^n$ acts on $M$ as the identity operator. Clearly, $\sigma$ acts on $U$ as an order $n$ automorphism. Hence, the order of $\sigma$ is $n$ on $V$. The assertion on the spectrum of $u_{(0)}$ also follows from claim (i) in the proof of Lemma \[L:wtu\] (1). Consider the irreducible $\sigma^j$-twisted $V$-module $V^{(ju)}$ constructed in Proposition \[Prop:twist\] for $1\le j\le n-1$. Note that $\sigma^j=\sigma^{j-n}$ by Lemma \[L:Ord7\]. \[Prop:twist1\] For $1\le j\le n-1$, the conformal weight of $V^{(ju)}$ belongs to $(1/n){\mathbb{Z}}$ and is at least $1$. By , the relation $\langle u|u\rangle\in(2/n){\mathbb{Z}}$ and Lemma \[L:Ord7\], the conformal weight of $V^{(ju)}$ belongs to $(1/n){\mathbb{Z}}$. For any irreducible $U$-submodule $M$ of $V$, the conformal weight of $M^{(ju)}$ is at least $1$ (see Lemma \[L:wtu\] (2)), and we obtain the result. By Proposition \[Prop:twist1\], $\sigma$ satisfies the conditions (I) and (II) of Theorem \[T:EMS\]; let $\widetilde{V}_{\sigma}$ be the strongly regular holomorphic VOA of central charge $24$ obtained by applying the ${\mathbb{Z}}_n$-orbifold construction to $V$ and $\sigma$. \[Prop:abel7\] The VOA $\widetilde{V}_{\sigma}$ is isomorphic to the Leech lattice VOA. For each case, one can show that $\dim (\widetilde{V}_{\sigma})_1=24$ by using the dimension formula in Section \[S:df\] and Table \[T:df\]; note that $U(1)$ means a $1$-dimensional abelian Lie algebra. By Proposition \[Prop:conf\], $\widetilde{V}_{\sigma}$ is isomorphic to the Leech lattice VOA.
  $n$    $V_1$                $\dim V_1$   $V_1^{\sigma}$   $\dim V_1^{\sigma}$   $V_1^{\sigma^2}$   $\dim V_1^{\sigma^2}$   $V_1^{\sigma^{(n/2)}}$   $\dim V_1^{\sigma^{(n/2)}}$
  ------ -------------------- ------------ ---------------- --------------------- ------------------ ----------------------- ------------------------ -----------------------------
  $4$    $A_{3,4}^3A_{1,2}$   $48$         $U(1)^{10}$      $10$                  $A_1^7U(1)^3$      $24$                                             
  $5$    $A_{4,5}^2$          $48$         $U(1)^8$         $8$                                                                                      
  $6$    $D_{4,12}A_{2,6}$    $36$         $U(1)^6$         $6$                   $A_1^3U(1)^3$      $12$                    $A_2A_1^4$               $20$
  $7$    $A_{6,7}$            $48$         $U(1)^6$         $6$                                                                                      
  $8$    $A_{7,4}A_{1,1}^3$   $72$         $U(1)^{10}$      $10$                  $A_1^7U(1)^3$      $24$                    $A_3^2A_1^3U(1)$         $40$
  $8$    $D_{5,8}A_{1,2}$     $48$         $U(1)^6$         $6$                   $A_1^4U(1)^2$      $14$                    $A_3A_1^3$               $24$
  $10$   $D_{6,5}A_{1,1}^2$   $72$         $U(1)^8$         $8$                   $A_1^6U(1)^2$      $20$                    $A_3^2U(1)^2$            $32$

  : Dimensions of $V^{\sigma^i}_1$[]{data-label="T:df"}

For the cases $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$ and $A_{6,7}$, by using the explicit action of $V_1^{\sigma}$ on $V[\sigma^i]_1$, one can show that $(\widetilde{V}_{\sigma})_1$ is abelian, which also shows that $\widetilde{V}_{\sigma}$ is isomorphic to the Leech lattice VOA by Proposition \[Prop:conf\]. Theorem \[Thm:main\], the main theorem of this article, is a corollary of the following theorem: \[Thm:uni7\] Let $V$ be a strongly regular holomorphic VOA of central charge $24$ such that the Lie algebra structure of $V_1$ is $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$. Then $V$ is isomorphic to the holomorphic VOA $(\widetilde{V_\Lambda})_{\varphi}$, where $\varphi$ is an automorphism in the conjugacy class of ${\mathrm{Aut}\,}V_\Lambda$ in Proposition \[P:con\] (see also Table \[T:CAut\]) with the property $(|\varphi|,\dim(V_\Lambda^\varphi)_1)=(n,\text{Lie rank of }V_1)$. Set ${\mathfrak{g}}=A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$. Then $n=4,5,6,7,8,8$ or $10$, respectively.
It suffices to verify the hypotheses in Theorem \[T:RO\] for ${\mathfrak{g}}$, $\mathfrak{p}={\mathfrak{h}}$ and $W=V_{\Lambda}$. Take $u\in{\mathfrak{h}}$ as in and set $\sigma=\sigma_u$. Then the order of $\sigma$ is $n$ on $V$ by Lemma \[L:Ord7\]. The hypothesis (a) holds by the definition of $u$ and (b) holds by Proposition \[Prop:abel7\]. The hypothesis about the uniqueness of the conjugacy class follows from Proposition \[P:con\]. Here, we use the condition (B’) in Remark \[R:RO\]. The details are as follows. Let $i$ be an integer relatively prime to $n$ and let ${\mathfrak{g}}_{(i)}=\{x\in{\mathfrak{g}}\mid \sigma(x)=\exp((i/n)2\pi\sqrt{-1})x\}$. Note that for a simple ideal $\mathfrak{s}$ of ${\mathfrak{g}}$ whose Coxeter number is less than $n$, we have $\mathfrak{s}\cap{\mathfrak{g}}_{(i)}=\{0\}$. Hence ${\mathfrak{g}}_{(i)}$ is contained in the ideal of ${\mathfrak{g}}$ of type $A_{3}^3$, $A_{4}^2$, $D_{4}$, $A_{6}$, $A_{7}$, $D_{5}$ or $D_{6}$, respectively. Let $\Pi({\mathfrak{g}}_{(i)})$ be the set of ${\mathfrak{h}}$-weights of ${\mathfrak{g}}_{(i)}$. By Lemma \[L:CM\], $\mathcal{M}(\Pi({\mathfrak{g}}_{(i)}))$ is equivalent to the generalized Cartan matrix of type $\tilde{A}_3^3$, $\tilde{A}_4^2$, $\tilde{D}_4$, $\tilde{A}_6$, $\tilde{A}_7$, $\tilde{D}_5$ or $\tilde{D}_6$, respectively. By Proposition \[P:con\], the conjugacy class is uniquely determined by the conditions (A) and (B’). In [@LS16b], a strongly regular holomorphic VOA of central charge $24$ whose weight one Lie algebra has the type $A_{6,7}$ was constructed explicitly from the Leech lattice VOA and an order $7$ automorphism in the conjugacy class described in Proposition \[P:con\]. In the same manner, one can prove directly that $((\widetilde{V_\Lambda})_\varphi)_1$ is isomorphic to ${\mathfrak{g}}$ in Theorem \[Thm:uni7\]. However, we omit the proof since the “reverse” process guarantees this isomorphism.
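The $\dim V_1$ column of Table \[T:df\] and the rank column $\dim V_1^{\sigma}$ (recall that $V_1^{\sigma}={\mathfrak{g}}^{\sigma}=\mathfrak{h}$, so this is the Lie rank) can be recomputed from the classical formulas $\dim A_n=n(n+2)$ and $\dim D_n=n(2n-1)$. The following Python sketch is our own bookkeeping, not part of the proofs:

```python
# Dimensions of the simple Lie algebras appearing in Table T:df:
# dim A_n = n(n+2), dim D_n = n(2n-1); the rank is n in both cases.
dim = {('A', n): n * (n + 2) for n in range(1, 9)}
dim.update({('D', n): n * (2 * n - 1) for n in range(4, 7)})

# Rows of Table T:df as lists of (type, rank, multiplicity);
# the levels k_i are omitted since they affect neither dim nor rank.
rows = {
    'A_{3,4}^3A_{1,2}':  [('A', 3, 3), ('A', 1, 1)],
    'A_{4,5}^2':         [('A', 4, 2)],
    'D_{4,12}A_{2,6}':   [('D', 4, 1), ('A', 2, 1)],
    'A_{6,7}':           [('A', 6, 1)],
    'A_{7,4}A_{1,1}^3':  [('A', 7, 1), ('A', 1, 3)],
    'D_{5,8}A_{1,2}':    [('D', 5, 1), ('A', 1, 1)],
    'D_{6,5}A_{1,1}^2':  [('D', 6, 1), ('A', 1, 2)],
}
expected = {  # (dim V_1, dim V_1^sigma) as listed in Table T:df
    'A_{3,4}^3A_{1,2}':  (48, 10),
    'A_{4,5}^2':         (48, 8),
    'D_{4,12}A_{2,6}':   (36, 6),
    'A_{6,7}':           (48, 6),
    'A_{7,4}A_{1,1}^3':  (72, 10),
    'D_{5,8}A_{1,2}':    (48, 6),
    'D_{6,5}A_{1,1}^2':  (72, 8),
}
for name, parts in rows.items():
    d = sum(dim[(t, n)] * m for t, n, m in parts)
    r = sum(n * m for t, n, m in parts)
    assert (d, r) == expected[name], name
print("all rows of Table T:df consistent")
```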
In Proposition \[Prop:abel7\], we obtain the Leech lattice VOA by the orbifold constructions. Considering the reverse process, we obtain holomorphic VOAs of central charge $24$ whose weight one Lie algebras have type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $D_{5,8}A_{1,2}$, $A_{7,4}A_{1,1}^3$ or $D_{6,5}A_{1,1}^2$ from the Leech lattice VOA by ${\mathbb{Z}}_n$-orbifold constructions, where $n=4,5,6,8,8,10$, respectively, which are different from the previous constructions ([@Lam; @LS16; @EMS]). Therefore we also obtain alternative constructions for these VOAs. \[R:Nils\] Recently, the uniqueness for $15$ cases has been established in [@EMS2], which includes the $3$ cases $A_{4,5}^2$, $A_{1,2}A_{3,4}^3$ and $B_{8,1}E_{8,2}$ that we have discussed in this article and in [@LS15]. Up to now, there are still $9$ remaining cases for the uniqueness part: $$\begin{aligned} &F_{4,6}A_{2,2},\quad E_{7,3}A_{5,1},\quad D_{7,3}A_{3,1}G_{2,1},\quad C_{4,10},\quad A_{5,6}C_{2,3}A_{1,2},\\ & D_{5,4}C_{3,2} A_{1,1}^2,\quad A_{3,1} C_{7,2},\quad E_{6,4}A_{2,1}C_{2,1}\quad \text{and}\quad \emptyset.\end{aligned}$$ #### **Acknowledgement.** The authors wish to thank Nils Scheithauer for sending his preprint. They also wish to thank Sven Möller for pointing out a gap in the early version of this article. [99]{} R.E. Borcherds, Vertex algebras, Kac-Moody algebras, and the Monster, *Proc. Nat’l. Acad. Sci. U.S.A.* [**83**]{} (1986), 3068–3071. W. Bosma, J. Cannon and C. Playoust, The Magma algebra system I: The user language, *J. Symbolic Comput.* [**24**]{} (1997), 235–265. S. Carnahan and M. Miyamoto, Regularity of fixed-point vertex operator subalgebras; arXiv:1603.05645. L. Dolan, P. Goddard and P. Montague, Conformal field theories, representations and lattice constructions, [*Comm. Math. Phys.*]{} [**179**]{} (1996), 61–120. C. Dong and J. Lepowsky, The algebraic structure of relative twisted vertex operators, *J. Pure Appl. Algebra* [**110**]{} (1996), 259–295. C. 
Dong, H. Li, and G. Mason, Modular-invariance of trace functions in orbifold theory and generalized Moonshine, *Comm. Math. Phys.* [**214**]{} (2000), 1–56. C. Dong and G. Mason, Holomorphic vertex operator algebras of small central charge, *Pacific J. Math.* [**213**]{} (2004), 253–266. C. Dong and G. Mason, Rational vertex operator algebras and the effective central charge, *Int. Math. Res. Not.* (2004), 2989–3008. C. Dong and G. Mason, Integrability of $C_2$-cofinite vertex operator algebras. *Int. Math. Res. Not.* (2006), Art. ID 80468, 15 pp. C. Dong and K. Nagatomo, Automorphism groups and twisted modules for lattice vertex operator algebras, *in* Recent developments in quantum affine algebras and related topics (Raleigh, NC, 1998), 117–133, *Contemp. Math.*, [**248**]{}, Amer. Math. Soc., Providence, RI, 1999. J. van Ekeren, S. Möller and N. Scheithauer, Construction and classification of holomorphic vertex operator algebras; arXiv:1507.08142. J. van Ekeren, S. Möller and N. Scheithauer, Dimension Formulae in Genus Zero and Uniqueness of Vertex Operator Algebras; arXiv:1704.00478. I.B. Frenkel, Y. Huang and J. Lepowsky, On axiomatic approaches to vertex operator algebras and modules, *Mem. Amer. Math. Soc.* [**104**]{} (1993), viii+64 pp. I.B. Frenkel, J. Lepowsky, and A. Meurman, Vertex operator algebras and the monster, Pure and Appl. Math., vol. 134, Academic Press, Boston, 1988. I. Frenkel and Y. Zhu, Vertex operator algebras associated to representations of affine and Virasoro algebras, *Duke Math. J.* [**66**]{} (1992), 123–168. K. Harada and M.L. Lang, On some sublattices of the Leech lattice, *Hokkaido Math. J.* [**19**]{} (1990), 435–446. G. Höhn and G.  Mason, The 290 fixed-point sublattices of the Leech lattice, *J. Algebra* [**448**]{} (2016), 618–637. V.G. Kac, Infinite-dimensional Lie algebras, Third edition, Cambridge University Press, Cambridge, 1990. K. Kawasetsu, C.H. Lam and X. 
Lin, $\mathbb{Z}_2$-orbifold construction associated with $(-1)$-isometry and uniqueness of holomorphic vertex operator algebras of central charge 24; arXiv:1611.07655. C.H. Lam, On the constructions of holomorphic vertex operator algebras of central charge $24$, *Comm. Math. Phys.* [**305**]{} (2011), 153–198 C.H. Lam and X. Lin, A Holomorphic vertex operator algebra of central charge $24$ with weight one Lie algebra $F_{4,6}A_{2,2}$; arXiv:1612.08123. C.H. Lam and H. Shimakura, Quadratic spaces and holomorphic framed vertex operator algebras of central charge 24, *Proc. Lond. Math. Soc.* [**104**]{} (2012), 540–576. C.H. Lam and H. Shimakura, Classification of holomorphic framed vertex operator algebras of central charge 24, *Amer. J. Math.* [**137**]{} (2015), 111–137. C.H. Lam and H. Shimakura, Orbifold construction of holomorphic vertex operator algebras associated to inner automorphisms, *Comm. Math. Phys.*, **342** (2016), 803–841. C.H. Lam and H. Shimakura, A holomorphic vertex operator algebra of central charge 24 whose weight one Lie algebra has the type $A_{6,7}$, *Lett. Math. Phys.*, **106** (2016), 1575–1585. C.H. Lam and H. Shimakura, Reverse orbifold construction and uniqueness of holomorphic vertex operator algebras; arXiv:1606.08979. J. Lepowsky, Calculus of twisted vertex operators, *Proc. Natl. Acad. Sci. USA* [**82**]{} (1985), 8295–8299. H. Li, Symmetric invariant bilinear forms on vertex operator algebras, *J. Pure Appl. Algebra*, [**96**]{} (1994), 279–297. H. Li, Local systems of twisted vertex operators, vertex operator superalgebras and twisted modules, *in* Moonshine, the Monster, and related topics, 203–236, *Contemp. Math.*, [**193**]{}, Amer. Math. Soc., Providence, RI, 1996. H. Li, Certain extensions of vertex operator algebras of affine type, *Comm. Math. Phys.* **217** (2001), 653–696. M. 
Miyamoto, A ${\mathbb{Z}}_3$-orbifold theory of lattice vertex operator algebra and ${\mathbb{Z}}_3$-orbifold constructions, *in* Symmetries, integrable systems and representations, 319–344, *Springer Proc. Math. Stat.* [**40**]{}, Springer, Heidelberg, 2013. M. Miyamoto, $C_2$-cofiniteness of cyclic-orbifold models, *Comm. Math. Phys.* [**335**]{} (2015), 1279–1286. S. Möller, A Cyclic Orbifold Theory for Holomorphic Vertex Operator Algebras and Applications, Ph.D. thesis; arXiv:1611.09843. P.S. Montague, Orbifold constructions and the classification of self-dual $c=24$ conformal field theories, [*Nuclear Phys.*]{} B [**428**]{} (1994), 233–258. D. Sagaki and H. Shimakura, Application of a $\mathbb{Z}_{3}$-orbifold construction to the lattice vertex operator algebras associated to Niemeier lattices, *Trans. Amer. Math. Soc.* [**368**]{} (2016), 1621–1646. A.N. Schellekens, Meromorphic $c=24$ conformal field theories, *Comm. Math. Phys.* [**153**]{} (1993), 159–185. R.A. Wilson, The maximal subgroups of Conway’s group $Co_1$, *J. Algebra* [**85**]{} (1983), 144–165.
--- abstract: 'The Lugiato-Lefever equation is a damped and driven version of the well-known nonlinear Schrödinger equation. It is a mathematical model describing complex phenomena in dissipative and nonlinear optical cavities. Within the last two decades, the equation has gained wide attention as it has become the basic model describing optical frequency combs. Recent works derive the Lugiato-Lefever equation from a class of damped driven $\phi^4$ equations close to resonance. In this paper, we provide a justification of the envelope approximation. From the analysis point of view, the result is novel and non-trivial as the drive yields a perturbation term that is not square integrable. The main approach proposed in this work is to decompose the solutions into a combination of the background and the integrable component. This paper is the first part of a two-manuscript series.' author: - | [Fiki T. Akbar$^{\sharp}$, Bobby E. Gunara]{}$^{\sharp}$, Hadi Susanto$^{\flat}$,\ \ $^{\sharp}$*Theoretical Physics Laboratory*\ *Theoretical High Energy Physics and Instrumentation Research Group,*\ *Faculty of Mathematics and Natural Sciences,*\ *Institut Teknologi Bandung*\ *Jl. Ganesha no. 10 Bandung, Indonesia, 40132*\ [and]{}\ $^{\flat}$*Department of Mathematical Sciences,*\ *University of Essex,*\ *Colchester, CO4 3SQ, United Kingdom*\ email: ftakbar@fi.itb.ac.id, bobby@fi.itb.ac.id, hsusanto@essex.ac.uk title: '**Justification of the Lugiato-Lefever model from a damped driven $\phi^4$ equation**' --- Introduction {#sec1} ============ The Lugiato-Lefever equation is given by [@lugi87] $$\mathrm{i} A_{\tau} = - A_{\xi\xi} - \frac{\mathrm{i}\alpha}{2} A - \frac{3\lambda}{2\omega} |A|^2 A + F,\quad \xi\in\mathbb{R},\,\tau\geq0, \label{NLSeq}$$ which is nothing else but a damped driven nonlinear Schrödinger equation. It models spatiotemporal pattern formation in dissipative, diffractive and nonlinear optical cavities subjected to a continuous laser pump.
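A quick way to explore Eq. numerically is a standard split-step Fourier scheme: rewriting it as $A_\tau = \mathrm{i}A_{\xi\xi} - \frac{\alpha}{2}A + \mathrm{i}\frac{3\lambda}{2\omega}|A|^2A - \mathrm{i}F$, the linear part (dispersion and damping) is solved exactly in Fourier space and the nonlinear phase rotation exactly in physical space. The Python sketch below uses a periodic domain, a constant pump and illustrative parameter values of our own choosing; it is not the scheme used in any of the works cited here.

```python
import numpy as np

def lle_step(A, dt, k2, alpha, gnl, F):
    """One split step for A_t = i A_xx - (alpha/2) A + i gnl |A|^2 A - i F,
    i.e. Eq. (NLSeq) with gnl playing the role of 3*lambda/(2*omega)."""
    # Half step of the (exact) nonlinear phase rotation: |A| is conserved.
    A = A * np.exp(1j * gnl * np.abs(A) ** 2 * dt / 2)
    # Full linear step (dispersion + damping), exact in Fourier space.
    A = np.fft.ifft(np.fft.fft(A) * np.exp((-1j * k2 - alpha / 2) * dt))
    # Drive (explicit Euler) and second nonlinear half step.
    A = A - 1j * F * dt
    A = A * np.exp(1j * gnl * np.abs(A) ** 2 * dt / 2)
    return A

# Illustration on a periodic box with a weak constant pump.
N, Lbox = 256, 50.0
xi = np.linspace(0, Lbox, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
alpha, gnl, F = 1.0, 1.5, 0.1 + 0j   # illustrative values only
A = 0.01 * np.exp(-((xi - Lbox / 2) ** 2)) + 0j
for _ in range(2000):
    A = lle_step(A, 0.01, k ** 2, alpha, gnl, F)
print(np.max(np.abs(A)))  # stays bounded: damping balances the pump
```

For this weak pump the field relaxes toward the homogeneous state balancing drive and damping; larger pumps trigger the pattern-forming instabilities the model is known for.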
The same model was shown rather immediately to also appear in dispersive optical ring cavities [@hael92]. The Lugiato-Lefever equation has attracted wide interest, particularly following its recent successful experimental application in the study of broadband microresonator-based optical frequency combs [@delh07; @kipp11], which has opened applicative avenues (see [@lugi15; @chem17] for reviews on the subject). Recently, Ferré et al. [@ferr17] showed that the dynamics of the Lugiato-Lefever equation can also be obtained from a driven dissipative sine-Gordon model. The former equation is a single envelope approximation, i.e., a modulation equation, of the latter. Even in the region far from the conservative limit, where the approximation is expected to be no longer valid, the two models were reported to still exhibit qualitatively similar dynamical behaviors. Herein, instead of the sine-Gordon equation, we consider a nonlinear damped driven $\phi^4$ model $$u_{tt} + \epsilon^2 \alpha u_{t} - \beta u_{xx} + \gamma u - \lambda u^3 = \epsilon^3 h\left(e^{\mathrm{i}\Omega t} + e^{-\mathrm{i}\Omega t}\right), \label{NKGeq}$$ where $\alpha, \beta, \gamma > 0$ and $\epsilon$ is a small positive parameter. The nonlinearity is considered to be 'softening', i.e., $\lambda < 0$. The 'hardening' case $\lambda > 0$ will be discussed in the second part of this paper series, whose results can be extended to the sine-Gordon equation. Introducing the slow time and space variables $\tau$ and $\xi$ defined as $\tau = \epsilon^2 t$ and $\xi = \epsilon\sqrt{\frac{2\omega^3}{\gamma \beta}}\left(x - vt\right)$, where $v = d\omega/dk = \beta k/\omega$ is the group velocity of the linear traveling wave and $k$ and $\omega$ satisfy the dispersion relation $\omega^2 = \beta k^2 + \gamma$, we define the slowly modulated ansatz function as $$X(t,x) = \epsilon A(\tau,\xi)e^{\mathrm{i}(kx - \omega t)} + \frac{\lambda\epsilon^3}{9\beta k^2 - 9\omega^2 + \gamma}A(\tau,\xi)^3 e^{3\mathrm{i}(kx - \omega t)} + \mathrm{c.c}\:. 
\label{AnsatzFunction}$$ The modulation amplitude $A$ is a complex-valued function satisfying Eq. (\[NLSeq\]), where $F(\tau,\xi) = - \frac{h}{2\omega} e^{-\mathrm{i}(\kappa \xi - \nu\tau)}$ with $\kappa = k/\epsilon$ and $\Omega = \gamma/\omega - \epsilon^2 \nu$. Inserting the ansatz function (\[AnsatzFunction\]) into (\[NKGeq\]), we get the residual terms $$\label{residualterm} \mathrm{Res}(t) = \mathcal{O}(\epsilon^4).$$ The same modulation equation has been derived in early reports, e.g., in [@mora74; @kaup78; @noza84] to describe matters driven by an external ac field. Analytical studies of various solutions of the damped, driven continuous nonlinear Schrödinger equation have also been reported in [@24; @5]. Nevertheless, despite the long history of the problem, a rigorous justification of the approximation is interestingly still lacking. The main challenge is due to the external drive $F$, which is not integrable. The present work provides this missing piece. Early works justifying the modulation equation without damping and drive are due to [@14; @9]. The presence of external drives would not bring any problem should one consider nonlinear systems that correspond to a parabolic linear operator [@14; @coll90; @hart91; @schn94; @schn94b]. In the context of Eq. (\[NKGeq\]), this corresponds to $\alpha\to\infty$, in which case the modulation equation would be a Ginzburg-Landau-type equation, i.e., there is no factor $\mathrm{i}$ on the left-hand side of Eq. (\[NLSeq\]). Recently, we considered the reduction of a Klein-Gordon equation with external damping and drive into a damped driven discrete nonlinear Schrödinger equation [@muda18]. To overcome the nonintegrability of the solutions, we worked in a periodic domain. The present report extends our result in [@muda18] by proposing a method that works also in $L^2(\mathbb{R})$. This paper is organized as follows. 
To provide a rigorous proof justifying the modulation equation, we formulate our method in Section \[sec2\] by decomposing the solutions into the background and particular parts. There, using a small-amplitude approximation, we derive the Lugiato-Lefever equation. Section \[sec3\] presents the local and global existence of homogeneous solutions of the amplitude equation. The main result on the error bound of the approximation as time evolves is presented in Section \[sec4\]. Solution decomposition {#sec2} ====================== Since the external drive term $F(\tau,\xi)$ is not integrable in the spatial variable, i.e., $F(\tau,\xi) \notin L^{p}(\mathbb{R})$ for any integer $ 1 \leq p < \infty$, in general $A(\tau,\xi)$ is also not integrable. Let $A_p(\tau,\xi)$ be a particular solution of equation (\[NLSeq\]) which can be written as $$A_p(\tau,\xi) = R\;e^{-\mathrm{i}(\kappa \xi - \nu\tau)}\:, \label{partsoln}$$ where $R$ is a complex constant such that $$R = -\frac{h}{2\omega}\;\frac{1}{\frac{3\lambda}{2\omega}|R|^2 - (\kappa^2 + \nu) +\frac{\mathrm{i}\alpha}{2}}\:,$$ and $|R|^2$ satisfies the cubic equation $$\frac{9\lambda^2}{4\omega^2}|R|^6 - \frac{6\lambda}{2\omega}(\kappa^2 + \nu) |R|^4 + \left[\frac{\alpha^2}{4} + (\kappa^2 + \nu)^2\right] |R|^2 - \frac{h^2}{4\omega^2} = 0\:.$$ Since $\lambda<0$, the cubic equation has only one real solution. Furthermore, we have $$\|A_{p}(\tau,\xi)\|_{L^{\infty}(\mathbb{R})} = \sup_{\xi \in \mathbb{R}}\; |A_{p}(\tau,\xi)| = |R| < +\infty\:.$$ To handle the non-integrability, one can work on $\mathbb{T}$ rather than on $\mathbb{R}$ (see [@muda18], where a similar problem was considered in the discrete case). 
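The particular solution above can be checked numerically: with illustrative parameter values (hypothetical, not from the paper), solving the cubic for $s = |R|^2$ and forming $R$ reproduces the stationary balance exactly:

```python
import numpy as np

# Numerical check that the plane wave A_p = R exp(-i(kappa*xi - nu*tau)) solves
# the Lugiato-Lefever equation, with |R|^2 a root of the cubic above.
# Parameter values are illustrative only.
alpha, lam, omega, h = 0.3, -1.0, 1.0, 0.4
kappa, nu = 0.7, 0.2

c = 3.0 * lam / (2.0 * omega)
# cubic in s = |R|^2:
#   (9 lam^2 / 4 w^2) s^3 - (6 lam / 2 w)(kappa^2+nu) s^2
#   + [alpha^2/4 + (kappa^2+nu)^2] s - h^2/(4 w^2) = 0
coeffs = [c**2, -2.0 * c * (kappa**2 + nu),
          alpha**2 / 4.0 + (kappa**2 + nu) ** 2, -h**2 / (4.0 * omega**2)]
roots = np.roots(coeffs)
s = float(min(roots, key=lambda r: abs(r.imag)).real)   # the real root (lam < 0)

R = -(h / (2.0 * omega)) / (c * s - (kappa**2 + nu) + 1j * alpha / 2.0)
err_mod = abs(abs(R) ** 2 - s)                          # |R|^2 matches the cubic root

# residual of the stationary balance:
# [c |R|^2 - (kappa^2 + nu) + i alpha/2] R + h/(2 w) = 0
resid = abs((c * abs(R) ** 2 - (kappa**2 + nu) + 1j * alpha / 2.0) * R
            + h / (2.0 * omega))
print(err_mod, resid)
```

Expanding $|R|^2\,|\,\tfrac{3\lambda}{2\omega}|R|^2 - (\kappa^2+\nu) + \tfrac{\mathrm{i}\alpha}{2}|^2 = \tfrac{h^2}{4\omega^2}$ recovers precisely the cubic, which is what the check exercises.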
In this paper, we propose a different approach by introducing the decomposition $$\begin{aligned} \label{decomposition} A(\tau,\xi) & := & e^{\mathrm{i}\nu \tau}\phi(\tau,\xi) + A_{p}(\tau,\xi) \nonumber \\ & = & e^{\mathrm{i}\nu \tau}\left[\phi(\tau,\xi) + \eta(\xi)\right],\end{aligned}$$ where $\phi(\tau,\xi)$ is the integrable term and $\eta(\xi) = R e^{-\mathrm{i}\kappa \xi}$. The initial condition for our system is $$A(0,\xi) = \varphi(\xi) + \eta(\xi)\:, \label{tamb}$$ with $\varphi \in H^{k}(\mathbb{R})$. Here, the space $H^k(\mathbb{R})$, with $k$ a nonnegative integer, denotes the Sobolev space with norm defined as $$\|\phi\|_{H^{k}(\mathbb{R})} = \left[\sum_{i=0}^{k} \|D^{i}_{\xi}\phi\|^2_{L^2(\mathbb{R})}\right]^{1/2}\:,$$ and $H^0(\mathbb{R}) = L^2(\mathbb{R})$ with $$\|\phi\|_{L^{2}(\mathbb{R})} =\left[\int_{\mathbb{R}}\;|\phi(\xi)|^2\:d\xi\right]^{1/2}\:.$$ The differential equation for $\phi$ is given by $$\mathrm{i} \phi_{\tau} = - \phi_{\xi\xi} - \frac{\mathrm{i}\alpha}{2}\phi - \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)\phi + N(\phi) \:, \label{NLSHomoDE}$$ where the nonlinear term is given by $$N(\phi) := - \frac{3\lambda}{2\omega}\left[|\phi + \eta|^2 - |\eta|^2\right](\phi + \eta)\:. \label{NonlinearPart}$$ Using (\[tamb\]), we have the initial condition $$\phi(0,\cdot) = \varphi.$$ Local and Global Existence of the Inhomogeneous Nonlinear Schrödinger Equation {#sec3} ============================================================================== In this section, we prove the local and global existence for the inhomogeneous part of the nonlinear Schrödinger equation. For an excellent review, the interested reader may consult, for example, [@bourgain99; @cazenave]. The local existence is stated in the following theorem. \[localsoln\] Let $k \geq 1$ be an integer. 
For every $\varphi \in H^k(\mathbb{R})$, there exists a positive constant $\tau_m$ depending on the initial data and $k$ such that the differential equation (\[NLSHomoDE\]) admits a unique maximal solution $\phi(\tau)$ on $[0,\tau_m)$ with $$\phi \in C\left([0,\tau_m), H^k(\mathbb{R})\right)\:,$$ and either, 1. $\tau_m = +\infty$, and (\[NLSHomoDE\]) admits a global solution, or 2. $\tau_m < +\infty$, then $\|\phi(\tau)\|_{H^k(\mathbb{R})} \rightarrow \infty$ as $\tau \rightarrow \tau_m$ and the solution blows up in finite time $\tau_m$. Moreover, $\limsup \|\phi(\tau)\|_{L^{\infty}(\mathbb{R})} \rightarrow \infty$ as $\tau \rightarrow \tau_m$. We prove the theorem in three steps. **Step 1. Local existence**. Using Duhamel’s formula, we can write the solution of the differential equation (\[NLSHomoDE\]) as $$\label{integralEq} \phi(\tau) = U(\tau)\phi_{0} + \mathrm{i}\int_{0}^{\tau}\:U(\tau-\tau')\left[\frac{\mathrm{i}\alpha}{2}\phi(\tau') + \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)\phi(\tau') - N(\phi(\tau'))\right]\:d\tau' \:,$$ where $U(\tau)$ is the one-dimensional free Schrödinger time evolution operator. Let $$\mathcal{B} = \left\{\phi\in C\left([0,\tilde{\tau}],H^k(\mathbb{R})\right) \left| \|\phi(\tau)\|_{H^k(\mathbb{R})} \leq M,\; \forall \tau \in [0,\tilde{\tau}) \right.\right\}$$ be the closed ball of radius $M$, which is a complete metric space with respect to the norm $$\|\phi\|_{\mathcal{B}} = \sup_{\tau \in [0,\tilde{\tau})}\;\|\phi(\tau)\|_{H^k(\mathbb{R})}\:.$$ For $\phi \in \mathcal{B}$, we define the nonlinear operator $$\mathcal{K}[\phi](\tau) = U(\tau)\phi_0 + \mathrm{i}\int_{0}^{\tau}\:U(\tau-\tau')\left[\frac{\mathrm{i}\alpha}{2}\phi(\tau') + \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)\phi(\tau') - N(\phi(\tau'))\right]\:d\tau'.$$ We want to prove that the operator $\mathcal{K}$ is a contraction mapping on $\mathcal{B}$. 
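The contraction argument can be illustrated on a toy scalar analogue of the Duhamel map: Picard iteration for $y' = -cy$, $y(0)=1$, converges to the fixed point $e^{-ct}$ on a short interval. All constants below are illustrative, and this is only an analogue of $\mathcal{K}$, not the map itself:

```python
import numpy as np

# Toy Picard iteration: iterate the Duhamel map for y' = -c*y, y(0) = 1,
#   y_{n+1}(t) = 1 - c * int_0^t y_n(s) ds,
# whose fixed point is exp(-c*t). Integrals use the trapezoid rule on a grid.
c, T, n = 0.5, 1.0, 400
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

y = np.ones_like(t)                  # initial guess y_0(t) = 1
for _ in range(25):
    # running trapezoid integral of the current iterate
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))
    y = 1.0 - c * integral           # one application of the Duhamel map

err = float(np.max(np.abs(y - np.exp(-c * t))))
print(f"sup-norm distance to the fixed point: {err:.2e}")
```

On $[0,T]$ with $cT < 1$ the map is a contraction in the sup norm, so the iterates converge geometrically, mirroring how $\mathcal{K}$ is shown to contract for $\tilde{\tau}$ small.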
Using the fact that the free Schrödinger operator $U(\tau)$ is a linear operator and unitary on $H^k(\mathbb{R})$, we have $$\|U(\tau)\phi\|_{H^k(\mathbb{R})} = \|\phi\|_{H^k(\mathbb{R})}\:,$$ for any $\phi \in H^k(\mathbb{R})$; thus we get $$\begin{aligned} \|\mathcal{K}[\phi](\tau)\|_{H^k(\mathbb{R})} & \leq & \|U(\tau)\phi_0\|_{H^k(\mathbb{R})} + \int_{0}^{\tau}\:\left\|U(\tau-\tau')\left[\frac{\mathrm{i}\alpha}{2}\phi(\tau') + \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)\phi(\tau') - N(\phi(\tau'))\right]\right\|_{H^k(\mathbb{R})}\:d\tau' \nonumber \\ & \leq & \|\phi_0\|_{H^k(\mathbb{R})} + \int_{0}^{\tau}\:\left\|\left[\frac{\mathrm{i}\alpha}{2}\phi(\tau') + \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)\phi(\tau') - N(\phi(\tau'))\right]\right\|_{H^k(\mathbb{R})}\:d\tau'\nonumber\\ & \leq & \|\phi_0\|_{H^k(\mathbb{R})} + \left(\frac{\alpha}{2} + \frac{3|\lambda|}{2\omega}|R|^2 + |\nu|\right)\tau M + \tau \sup_{\tau' \in [0,\tau]} \;\|N(\phi(\tau'))\|_{H^k(\mathbb{R})} \end{aligned}$$ Note that $$\left[|\phi + \eta|^2 - |\eta|^2\right](\phi + \eta) = |\phi|^2\phi + 2|\phi|^2 \eta + |\eta|^2\phi + \eta^2 \bar{\phi} + \phi^2 \bar{\eta}\:.$$ Since $H^{k}(\mathbb{R})$ is an algebra for $k > 1/2$, for $\tau \in [0,\tilde{\tau})$ we get $$\begin{aligned} \|N(\phi(\tau))\|_{H^k(\mathbb{R})} & \leq & \frac{3|\lambda|}{2\omega}\left(\|\phi\|^3_{H^k(\mathbb{R})} + 3|R|\|\phi\|^2_{H^k(\mathbb{R})} + 2|R|^2\|\phi\|_{H^k(\mathbb{R})}\right) \nonumber\\ & \leq & \frac{3|\lambda|M}{2\omega}\left(M^2 + 3|R|M + 2|R|^2\right)\:. \end{aligned}$$ Assuming that $\|\phi_0\|_{H^k(\mathbb{R})} < M/2$, we have $$\|\mathcal{K}[\phi](\tau)\|_{\mathcal{B}} \leq \frac{M}{2} + \frac{\tilde{\tau} M}{2}\left[\alpha + 2|\nu| + \frac{3|\lambda|}{\omega}\left(3|R|^2 + 3|R|M + M^2\right)\right]\:.$$ If we pick $$\tilde{\tau} < \frac{1}{\alpha + 2|\nu| + \frac{3|\lambda|}{\omega}\left(3|R|^2 + 3|R|M + M^2\right)}\:,$$ then $\mathcal{K}$ maps $\mathcal{B}$ to itself. 
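The pointwise expansion of the nonlinearity used above can be spot-checked numerically on random complex data:

```python
import numpy as np

# Spot-check of the algebraic expansion used in the estimate of N(phi):
# (|phi+eta|^2 - |eta|^2)(phi+eta)
#   = |phi|^2 phi + 2|phi|^2 eta + |eta|^2 phi + eta^2 conj(phi) + phi^2 conj(eta)
rng = np.random.default_rng(0)
phi = rng.normal(size=1000) + 1j * rng.normal(size=1000)
eta = rng.normal(size=1000) + 1j * rng.normal(size=1000)

lhs = (np.abs(phi + eta) ** 2 - np.abs(eta) ** 2) * (phi + eta)
rhs = (np.abs(phi) ** 2 * phi + 2 * np.abs(phi) ** 2 * eta
       + np.abs(eta) ** 2 * phi + eta**2 * np.conj(phi) + phi**2 * np.conj(eta))
gap = float(np.max(np.abs(lhs - rhs)))
print(gap)
```

Each term on the right-hand side is at most quadratic in $\eta$, which is why the $H^k$ bound on $N(\phi)$ only involves $|R|$ and $|R|^2$ multiplying powers of $\|\phi\|_{H^k(\mathbb{R})}$.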
Let $\phi,\varphi \in \mathcal{B}$; then [ $$\begin{aligned} \|\mathcal{K}[\phi](\tau) - \mathcal{K}[\varphi](\tau)\|_{H^k(\mathbb{R})} & \leq & \int_{0}^{\tau}\:\left\|\left(\frac{\mathrm{i}\alpha}{2} + \frac{3\lambda}{2\omega}|R|^2 - \nu\right)(\phi(\tau') - \varphi(\tau')) - (N(\phi(\tau')) - N(\varphi(\tau')))\right\|_{H^k(\mathbb{R})}\:d\tau' \nonumber\\ & \leq & \int_{0}^{\tau}\:\left(\frac{\alpha}{2} + \frac{3|\lambda|}{2\omega}|R|^2 + |\nu|\right)\left\|(\phi(\tau') - \varphi(\tau')) \right\|_{H^k(\mathbb{R})}\;d\tau' \nonumber \\ & & \qquad + \int_{0}^{\tau}\:\|N(\phi(\tau')) - N(\varphi(\tau'))\|_{H^k(\mathbb{R})}\:d\tau' \nonumber\\ & \leq & \tau \left(\frac{\alpha}{2} + \frac{3|\lambda|}{2\omega}|R|^2 + |\nu|\right) \sup_{\tau' \in [0,\tau]} \left\|(\phi(\tau') - \varphi(\tau')) \right\|_{H^k(\mathbb{R})} \nonumber \\ & & \qquad + \tau \sup_{\tau' \in [0,\tau]}\|N(\phi(\tau')) - N(\varphi(\tau'))\|_{H^k(\mathbb{R})} \end{aligned}$$ ]{} Note that we have the following inequalities: $$\begin{aligned} |\phi^2 - \varphi^2| & = & |\phi + \varphi||\phi - \varphi| \leq (|\phi| + |\varphi|)|\phi - \varphi|, \nonumber\\ ||\phi|^2 - |\varphi|^2| & = &|\phi\bar{\phi} - \varphi\bar{\varphi}| = |\phi(\bar{\phi} - \bar{\varphi}) + \bar{\varphi}(\phi - \varphi)| \leq (|\phi| + |\varphi|)|\phi - \varphi|, \nonumber \\ ||\phi|^2\phi - |\varphi|^2\varphi| & = & |(|\phi|^2 + |\varphi|^2)(\phi-\varphi) + \phi\varphi(\bar{\phi}-\bar{\varphi})| \leq \frac{3}{2}(|\phi|^2 + |\varphi|^2)|\phi-\varphi|. 
\nonumber \end{aligned}$$ Thus, for $\tau \in [0,\tilde{\tau}]$ we get $$\|N(\phi(\tau)) - N(\varphi(\tau))\|_{H^k(\mathbb{R})} \leq \frac{3|\lambda|}{2\omega}\left(2|R|^2 + 6|R|M + 3M^2\right)\|\phi(\tau) - \varphi(\tau)\|_{H^k(\mathbb{R})}.$$ Hence, we have $$\|\mathcal{K}[\phi](\tau) - \mathcal{K}[\varphi](\tau)\|_{\mathcal{B}} \leq \frac{\tilde{\tau}}{2}\left[\alpha + 2|\nu| + \frac{9|\lambda|}{\omega}\left(|R|^2 + 2|R|M + M^2\right)\right]\|\phi - \varphi\|_{\mathcal{B}}$$ If we pick $$\tilde{\tau} < \min\left[\frac{1}{\alpha + 2|\nu| + \frac{3|\lambda|}{\omega}\left(3|R|^2 + 3|R|M + M^2\right)}, \frac{2}{\alpha + 2|\nu| + \frac{9|\lambda|}{\omega}\left(|R|^2 + 2|R|M + M^2\right)}\right]\:,$$ then the nonlinear operator $\mathcal{K}$ is a contraction mapping in $\mathcal{B}$. Therefore, by the Banach fixed point theorem, there exists a fixed point of $\mathcal{K}$, which is a solution of (\[integralEq\]), hence of (\[NLSHomoDE\]). **Step 2. Uniqueness.** Let $\phi,\tilde{\phi} \in C\left([0,\tilde{\tau}), H^{k}(\mathbb{R}) \right)$ be solutions of (\[NLSHomoDE\]) with the same initial condition $\varphi \in H^{k}(\mathbb{R})$, and let $w = \tilde{\phi} - \phi$. By Duhamel’s formula, we have $$w(\tau) = \mathrm{i}\int_{0}^{\tau}\:U(\tau-\tau')\left[\frac{\mathrm{i}\alpha}{2}w(\tau') + \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)w(\tau') - \left(N(\tilde{\phi}(\tau')) - N(\phi(\tau'))\right)\right]\:d\tau'\:.$$ Taking the norm in $H^{k}(\mathbb{R})$ and using the property of the nonlinear term as in the local existence proof, we get $$\|w(\tau)\|_{H^{k}(\mathbb{R})} \leq C(M)\int_0^{\tau}\:\|w(\tau')\|_{H^{k}(\mathbb{R})}\;d\tau'\:.$$ By Gronwall’s inequality, we conclude that $\|w(\tau)\|_{H^{k}(\mathbb{R})} = 0$ for all $\tau \in [0,\tilde{\tau})$, hence $\tilde{\phi} = \phi$. **Step 3. 
Maximal solution.** We can construct the maximal solution by repeating Step 1 with the initial condition $\phi(\tilde{\tau}-\tau_0)$ for some $0< \tau_0 <\tilde{\tau}$ and using the uniqueness to glue the solutions. Clearly, if $\tau_m = +\infty$, then we have a global solution, and if $\tau_m < +\infty$, then $\|\phi(\tau)\|_{H^k(\mathbb{R})} \rightarrow \infty$ as $\tau \rightarrow \tau_m$. Finally, we will show that if $\tau_m < +\infty$, then $\limsup \|\phi(\tau)\|_{L^\infty(\mathbb{R})} \rightarrow \infty$ as $\tau \rightarrow \tau_m$. Suppose that $\limsup \|\phi(\tau)\|_{L^\infty(\mathbb{R})} < \infty$ as $\tau \rightarrow \tau_m$. Since $\phi \in C\left([0,\tau_m), H^{k}(\mathbb{R}) \right)$, and $H^k$ is embedded into $L^{\infty}$ for $k\geq1$, then $$\sup_{\tau \in [0,\tau_m)} \|\phi(\tau)\|_{L^\infty(\mathbb{R})} \leq \sup_{\tau \in [0,\tau_m)} \|\phi(\tau)\|_{H^k(\mathbb{R})} \leq M\:.$$ Using Duhamel’s formula and the property of the nonlinear term as in the local existence proof, we get $$\|\phi(\tau)\|_{H^k(\mathbb{R})} \leq \|\varphi\|_{H^k(\mathbb{R})} + C(M)\int_{0}^{\tau}\:\|\phi(\tau')\|_{H^k(\mathbb{R})}\;d\tau'\:.$$ Applying Gronwall’s inequality, for $\tau \in [0,\tau_m)$ we have $$\|\phi(\tau)\|_{H^k(\mathbb{R})} \leq \|\varphi\|_{H^k(\mathbb{R})}e^{C(M)\tau_m}\:,$$ which contradicts the blow-up of $\|\phi(\tau)\|_{H^k(\mathbb{R})}$ as $\tau \rightarrow \tau_m$. Hence, $\limsup \|\phi(\tau)\|_{L^\infty(\mathbb{R})} \rightarrow \infty$ as $\tau \rightarrow \tau_m$. Due to the type of nonlinearity and the presence of the damping term, the differential equation (\[NLSHomoDE\]) does not possess any conserved quantities. However, we can define the energy function associated with equation (\[NLSHomoDE\]) as $$E[\phi](\tau) = \int_{\mathbb{R}}\;\left\{|\phi_{\xi}|^2 - \left( \frac{3\lambda}{2\omega}|R|^2 - \nu\right)|\phi|^2 - \frac{3\lambda}{4\omega}\left[|\phi + \eta|^2 - |\eta|^2\right]^2 \right\}\;d\xi\:. 
\label{energyfunction}$$ Since $\lambda < 0$, $E$ is non-negative. It is worth mentioning that if $\alpha = 0$ (no damping term), then this energy function $E$ is conserved. Furthermore, we can still use this energy function to prove global existence in $H^{1}(\mathbb{R})$ for small initial energy, to show that the solution in fact possesses more regularity, and to prove global existence in $H^{k}(\mathbb{R})$. Now, we prove the following lemma about the energy estimate. Let $\phi$ be a solution of differential equation (\[NLSHomoDE\]) with initial data $\varphi \in H^{1}(\mathbb{R})$ such that $E_0 = E[\varphi] \leq \delta$, where $\delta$ is a positive real constant. There exists a positive real constant $\delta_0$ such that for every $0 < \delta < \delta_0$ we have the following estimate: $$E[\phi](\tau) \leq K e^{-\alpha\tau}\:, \label{energydecay}$$ where $K$ is a real positive constant depending on the initial data. First, we multiply equation (\[NLSHomoDE\]) by $-\bar{\phi}_{\tau} - \frac{\alpha}{2} \bar{\phi}$, integrate over the spatial variable $\xi$, and keep the real part. Then, we get $$\frac{d E[\phi](\tau)}{d\tau} + \alpha E[\phi](\tau) = \frac{3\alpha\lambda}{4\omega} \int_{\mathbb{R}}\:\left[|\phi + \eta|^2 - |\eta|^2\right]|\phi|^2\;d\xi\;.$$ Since $\alpha >0$, using the Cauchy–Schwarz inequality we get $$\begin{aligned} \frac{3\alpha\lambda}{4\omega} \int_{\mathbb{R}}\:\left[|\phi + \eta|^2 - |\eta|^2\right]|\phi|^2\;d\xi & \leq & \alpha \left[\frac{3|\lambda|}{4\omega} \int_{\mathbb{R}}\:\left[|\phi + \eta|^2 - |\eta|^2\right]^2\;d\xi\right]^{\frac{1}{2}} \left[\frac{3|\lambda|}{4\omega} \int_{\mathbb{R}}\:|\phi|^4 \;d\xi\right]^{\frac{1}{2}} \nonumber \\ & \leq & \alpha E^{\frac{1}{2}}\left\|\frac{3|\lambda|}{4\omega}\phi\right\|^{2}_{L^{4}(\mathbb{R})}\:. 
\end{aligned}$$ Using the one-dimensional Gagliardo–Nirenberg–Sobolev inequality $$\left\|\frac{3|\lambda|}{4\omega}\phi\right\|_{L^{4}(\mathbb{R})} \leq C\|\phi_{\xi}\|^{1/4}_{L^{2}(\mathbb{R})}\left\|\frac{3|\lambda|}{4\omega}\phi\right\|^{3/4}_{L^{2}(\mathbb{R})}$$ and Young's inequality $$a b \leq p\; a^{\frac{1}{p}} + (1-p)b^{\frac{1}{1-p}}$$ for $a,b >0$ and $p \in (0,1)$, we get $$\left\|\frac{3|\lambda|}{4\omega}\phi\right\|_{L^{4}(\mathbb{R})} \leq C\left[\|\phi_{\xi}\|^{2}_{L^{2}(\mathbb{R})} + \frac{3|\lambda|}{2\omega}|R|^2 \|\phi\|^2_{L^{2}(\mathbb{R})} \right]^{1/2} \leq C\;E^{1/2}.$$ Thus, we have $$\frac{d E[\phi](\tau)}{d\tau} + \alpha E[\phi](\tau) \leq \alpha C E^{3/2}.$$ Integrating this inequality, we get $$E[\phi](\tau) \leq \frac{4 E_0}{\left[C\sqrt{E_0}(1 - e^{\alpha \tau/2}) + 2\;e^{\alpha \tau/2}\right]^2}\:.$$ Pick $\delta_0 = 4/C^2$; then we have the desired inequality (\[energydecay\]). Now, we prove the global existence in the following theorem. \[globalexistence\] Let $k \geq 1$ be an integer and let $\varphi \in H^k(\mathbb{R})$ be such that $E_0 = E[\varphi] \leq \delta$, where $\delta$ is a positive real constant. There exists a positive real constant $\delta_0$ such that for every $0 < \delta < \delta_0$, the differential equation (\[NLSHomoDE\]) admits a unique global solution $\phi$ which belongs to $$\phi \in C\left([0,+\infty), H^k(\mathbb{R})\right)\:.$$ First consider the case $k=1$. Pick $\delta_0$ as in the previous lemma; then the global existence follows directly from (\[energydecay\]). Thus, we have $$\label{GEH1} \phi \in C\left([0,+\infty), H^1(\mathbb{R})\right)\:.$$ Now consider the case $k>1$. In Theorem \[localsoln\], we have already constructed a unique local maximal solution such that $$\phi \in C\left([0,\tau^k_m), H^k(\mathbb{R})\right)\:.$$ We need to prove that $\tau^k_m = +\infty$. 
Consider $\tau_0 <+\infty$; then we have $$\sup_{\tau \in [0,\tau_0]} \|\phi(\tau)\|_{H^{1}(\mathbb{R})} < \infty\:.$$ Since $H^{1}(\mathbb{R}) \hookrightarrow L^{\infty}(\mathbb{R})$, (\[GEH1\]) implies $$\sup_{\tau \in [0,\tau_0]} \|\phi(\tau)\|_{L^{\infty}(\mathbb{R})} < \infty\:.$$ Applying the blow-up alternative in the local existence theorem, we deduce that $\tau_m^k > \tau_0$. Since $\tau_0$ is arbitrary, we conclude that $\tau_m^k = +\infty$ and the proof is finished. Main Result {#sec4} =========== Before we state the main result, we prove a lemma about the bounds of the leading approximation function and the residual function. \[lemmaA\] For every $A(0) = \varphi + \eta$, where $\varphi \in H^{k}(\mathbb{R})$ with integer $k > 4$ is such that $E[\varphi]$ is small, there exist positive real constants $C_X$ and $C_R$, depending on $\|A(0)\|_{L^\infty(\mathbb{R})}$, such that $$\begin{aligned} \|X_t(t)\|_{L^\infty(\mathbb{R})} + \|X(t)\|_{L^\infty(\mathbb{R})} & \leq & \epsilon\;C_X,\nonumber\\ \|\mathrm{Res}(t)\|_{L^\infty(\mathbb{R})} & \leq & \epsilon^4 \:C_R\:, \end{aligned}$$ for all $t \in [0,+\infty)$. Furthermore, we also have $X(t) \in C_b^{k-1}(\mathbb{R})$ and $X_t(t) \in C_b^{k-3}(\mathbb{R})$ for all $t \in [0,+\infty)$. From Theorem \[globalexistence\], we have $\phi \in C\left([0,+\infty), H^k(\mathbb{R})\right)$ for integer $k\geq 1$. 
Since $H^{k}(\mathbb{R})$ is embedded into $L^{\infty}(\mathbb{R})$ for $k \geq 1$, using the decomposition (\[decomposition\]) and the fact that $A_{p}(\tau) \in L^{\infty}(\mathbb{R})$, we get $$\|A(\tau)\|_{L^{\infty}(\mathbb{R})} \leq C_A.$$ Since $L^{\infty}(\mathbb{R})$ is a Banach algebra with respect to pointwise multiplication, we can estimate equation (\[AnsatzFunction\]), $$\|X(t)\|_{L^{\infty}(\mathbb{R})} \leq \epsilon \;C_{1}.$$ Since $k > 4$ and $\|\phi_{\xi\xi}\|_{H^{k-2}(\mathbb{R})} \leq \|\phi\|_{H^{k}(\mathbb{R})}$, from equation (\[NLSHomoDE\]) we get $$\|\phi_{\tau}\|_{H^{k-2}(\mathbb{R})} \leq C\:.$$ Thus, using the decomposition (\[decomposition\]), we have an estimate for the first derivative of $A$ with respect to $\tau$ and conclude that $$\|X_{t}(t)\|_{L^{\infty}(\mathbb{R})} \leq \epsilon\;C_{2}\:,$$ which proves the first inequality. The residual terms consist of powers of $A$ and derivatives of $A$ up to second order (in both time and space). Since $k > 4$, using the Sobolev embedding and equation (\[NLSHomoDE\]), we get bounds for the second derivatives of $\phi$ with respect to both space and time. Then, using (\[decomposition\]), we prove the second inequality. For the second part of the lemma, note that $A_p(\tau,\xi)$ is a smooth and bounded function. Since $\phi(\tau) \in H^k(\mathbb{R})$ and $\phi_{\tau}(\tau) \in H^{k-2}(\mathbb{R})$ with $k>4$, by the Sobolev embedding we have $$\|\phi(\tau)\|_{C_b^m(\mathbb{R})} \leq C \|\phi(\tau)\|_{H^k(\mathbb{R})} \:,$$ for $k > m + 1$, which proves the lemma. We define the error term by writing $\epsilon^2 y(t) = u(t) - X(t)$, where $X(t)$ is the leading approximation term and $u(t)$ is the exact solution of the original equation. 
The evolution equation for the error term is given by $$\label{errorEq} \begin{cases} y_{tt} + \epsilon^2 \alpha y_{t} - \beta y_{xx} + \gamma y - \lambda (\epsilon^4 y^3 + 3 X^2 y + 3 \epsilon^2 X y^2) + \epsilon^{-2}\mathrm{Res}(t) = 0,\\ y(0,\cdot) = f, \\ y_t(0,\cdot) = \epsilon g. \end{cases}$$ Since $\epsilon$ is small, we may assume that $\gamma > \frac{\epsilon^2\alpha}{2}$. We prove that the error function $y(t)$ remains bounded over time. First, we can convert the differential equation (\[errorEq\]) into an integral equation [@ficken57], $$\label{errorIntEq} y(t,x) = \Phi[y](t,x) = A[f](t,x) + B[\epsilon g](t,x) + M\left[\lambda (\epsilon^4 y^3 + 3X^2 y + 3\epsilon^2 Xy^2) + \epsilon^{-2}\mathrm{Res}\right],$$ where $A$, $B$ and $M$ are integral operators defined as $$\begin{aligned} A[f](t,x) & = & \frac{1}{2}e^{-\hat{\alpha}t} \left[f(x+t) + f(x-t) + \hat{\alpha} \int_{x-t}^{x+t}\:f(z)J_0(\epsilon w \zeta_0)\:dz + \int_{x-t}^{x+t}\:f(z)\frac{\partial J_0(\epsilon w \zeta_0)}{\partial t}\:dz \right], \\ B[\epsilon g](t,x) & = & \frac{\epsilon}{2}e^{-\hat{\alpha}t}\int_{x-t}^{x+t}\:g(z)J_0(\epsilon w \zeta_0)\:dz, \\ M[h](t,x) & = & -\frac{1}{2} \int_{0}^{t}\:\int_{x-t+s}^{x+t-s}\:e^{-\hat{\alpha}(t-s)}h(z,s)J_0(\epsilon w \zeta)\:dz\;ds,\end{aligned}$$ with $\hat{\alpha} = \epsilon^2\alpha/2$, $J_0$ the zeroth-order Bessel function, and $$\begin{aligned} \epsilon w & = & \sqrt{\gamma^2 - \hat{\alpha}^2},\\ \zeta^2 & = & (t-s)^2 - (x-z)^2, \\ \zeta_0^2 & = & t^2 - (x-z)^2.\end{aligned}$$ Since the intervals of integration are $s \in [0,t]$ and $z \in [x-t+s,x+t-s]$, we have $\zeta^2 \geq 0$, and thus we can set $\zeta \geq 0$. 
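The kernel estimates that follow rest on elementary pointwise bounds for real arguments, $|J_0(x)| \leq 1$ and $|J_1(x)| \leq |x|/2$, which can be verified numerically from the integral representation $J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta$; the grid and range below are arbitrary choices for the check:

```python
import numpy as np

# Numerical verification of the pointwise Bessel bounds used in the kernel
# estimates: |J0(x)| <= 1 and |J1(x)| <= |x|/2 for real x, computed from
# J_n(x) = (1/pi) * int_0^pi cos(n*theta - x*sin(theta)) dtheta.
theta = np.linspace(0.0, np.pi, 20001)
dtheta = theta[1] - theta[0]

def bessel_j(n, x):
    f = np.cos(n * theta - x * np.sin(theta))
    return np.sum(0.5 * (f[1:] + f[:-1])) * dtheta / np.pi   # trapezoid rule

xs = np.linspace(-30.0, 30.0, 601)
j0 = np.array([bessel_j(0, x) for x in xs])
j1 = np.array([bessel_j(1, x) for x in xs])

ok0 = bool(np.all(np.abs(j0) <= 1.0 + 1e-7))
ok1 = bool(np.all(np.abs(j1) <= np.abs(xs) / 2.0 + 1e-7))
print(ok0, ok1)
```

Both bounds are instances of the general estimate $|J_n(z)| \leq \frac{1}{\Gamma(1+n)}(|z|/2)^n e^{|\mathrm{Im}(z)|}$ specialized to real arguments, which is exactly how they enter the operator estimates.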
Note that $$|J_n(z)| \leq \frac{1}{\Gamma(1+n)}\left(\frac{|z|}{2}\right)^n e^{|\mathrm{Im}(z)|}\:.$$ Since $w\zeta \in \mathbb{R}$, we have $$\begin{aligned} |J_0(\epsilon w\zeta)| & \leq & 1, \nonumber \\ \left|\frac{\partial J_0}{\partial t}(\epsilon w\zeta) \right| = \left|-\frac{\epsilon w}{\zeta}(t-s)J_1(\epsilon w\zeta)\right| & \leq & \frac{\epsilon^2 w^2}{2} |t-s|\:.\end{aligned}$$ Using these two estimates, we can now estimate the integral operators $$\begin{aligned} \label{IntegralEstimate} |A[f](t,x)| & \leq & e^{-\hat{\alpha}t}\|f\|_{L^{\infty}(\mathbb{R})}\left(1 + \hat{\alpha}t + \frac{\epsilon^2 w^2 t^2}{2}\right), \nonumber \\ |B[\epsilon g](t,x)| & \leq & \epsilon e^{-\hat{\alpha}t} t \|g\|_{L^{\infty}(\mathbb{R})}, \nonumber \\ |M[h](t,x)| & \leq & \int_{0}^{t}\: e^{-\hat{\alpha}(t-s)} (t-s)\|h(s)\|_{L^{\infty}(\mathbb{R})}\;ds.\end{aligned}$$ We assume that $\|f\|_{L^{\infty}(\mathbb{R})},\|g\|_{L^{\infty}(\mathbb{R})} \leq C_0$. The Banach algebra property of ${L^{\infty}(\mathbb{R})}$ enables us to bound the nonlinear term for each $D>0$ and all $\|y\|_{L^{\infty}(\mathbb{R})} \leq \;D$, and then we have $$\begin{aligned} \|y(t)\|_{L^{\infty}(\mathbb{R})} & \leq & \left[\left(1 + \hat{\alpha}t + \epsilon t + \frac{\epsilon^2 w^2 t^2}{2}\right) C_0 + \frac{\epsilon^2 t^2}{2} C_R + \frac{|\lambda| \epsilon^2 t^2}{2}\left( \epsilon^4 D^3 + \epsilon^3 C_X D^2\right) \right]\nonumber\\ & &\qquad + |\lambda|\epsilon^2 C_{X}^{2}\int_{0}^{t}\:(t-s)\|y(s)\|_{L^{\infty}(\mathbb{R})}\;ds\:,\end{aligned}$$ where we have already used the fact that $e^{-\hat{\alpha}t} \leq 1$ for $t \geq 0$. If $\epsilon > 0$ is sufficiently small, i.e. 
$\epsilon \in (0,\epsilon_0)$ where $\epsilon_0$ is a positive real constant, then for each $D>0$ and for every $\|y(t)\|_{L^{\infty}(\mathbb{R})} \leq D$, we can find a positive real constant $M$ independent of $\epsilon$ such that $$\frac{1}{2}\left(|\lambda|\epsilon^4 D^3 + |\lambda|\epsilon^3 C_X D^2\right) < M\:.$$ Thus, as long as $\|y(t)\|_{L^{\infty}(\mathbb{R})}$ stays in the ball of radius $D$, we have $$\|y(t)\|_{L^{\infty}(\mathbb{R})} \leq a(t) + \int_{0}^{t}\;b(s)\|y(s)\|_{L^{\infty}(\mathbb{R})} \;ds\;,$$ where $$a(t) = \left[\left(1 + \hat{\alpha}t + \epsilon t + \frac{\epsilon^2 w^2 t^2}{2}\right) C_0 + \frac{\epsilon^2 t^2}{2} C_R + \epsilon^2 M t^2 \right]\;,\qquad b(s) = |\lambda|\epsilon^2 C_{X}^{2} (t-s)\:.$$ The function $a(t)$ is continuous and non-decreasing and $b(s)$ is positive for $t \in [0,T_0/\epsilon]$. Applying Gronwall’s inequality, we get $$\|y(t)\|_{L^{\infty}(\mathbb{R})} \leq a(t) e^{|\lambda| C_X^2 \epsilon^2 t^2/2}\:.$$ Therefore, $$\|y(t)\|_{L^{\infty}(\mathbb{R})} \leq \left[\left(1 + \frac{\epsilon \alpha T_0}{2} + T_0 + \frac{w^2 T_0^2}{2}\right) C_0 + \frac{T_0^2}{2} C_R + M T_0^2 \right] e^{|\lambda| C_X^2 T_0^2/2}\:.$$ Let $C_y = \left(1 + T_0 + \frac{w^2 T_0^2}{2}\right) C_0 + C M T_0^2$ and $D = C_y e^{|\lambda| C_X^2 T_0^2/2}$, and make $\epsilon_0$ smaller such that $\epsilon < \frac{2M}{\alpha}T_0$; then we have $$\|y(t)\|_{L^{\infty}(\mathbb{R})} \leq D\;,$$ for $t \in [0,T_0/\epsilon]$. Hence we have proved the following theorem. \[MainTheorem\] Let $A(\tau,\xi)$ be the solution of equation (\[NLSeq\]) such that $A \in C^2\left([0,T_1],C_b^k(\mathbb{R})\right)$ for integer $k \geq 0$ and let $X$ be the leading approximation function (\[AnsatzFunction\]). Let $u(t,x)$ be a solution of equation (\[NKGeq\]). 
Then for each $T_0 < T_1$ and each $C_0 > 0$, there exist $\epsilon_0$ and $D>0$ such that for every $\epsilon \in (0,\epsilon_0)$ with $$\|u(0,\cdot) - X(0,\cdot)\|_{L^{\infty}(\mathbb{R})} \leq \epsilon^2 C_0,\qquad \|u_t(0,\cdot) - X_t(0,\cdot)\|_{L^{\infty}(\mathbb{R})} \leq \epsilon^3 C_0\;,$$ the following inequality $$\|u(t,\cdot) - X(t,\cdot)\|_{L^{\infty}(\mathbb{R})} \leq \epsilon^2 D$$ holds for $t \in [0,T_0/\epsilon]$. Acknowledgement {#acknowledgement .unnumbered} =============== The work of FTA is partly supported by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia through PDUPT 2018. FTA would like to thank The Abdus Salam ICTP for an associateship in 2018. The authors BEG and HS acknowledge the Ministry of Research, Technology and Higher Education of the Republic of Indonesia for partial financial support through the World Class Professor 2018 program. [99]{} L. Lugiato and R. Lefever, Phys. Rev. Lett. 58, 2209 (1987). M. Haelterman, S. Trillo, and S. Wabnitz, Opt. Commun. 91, 410 (1992). P. Del’Haye, A. Schliesser, O. Arcizet, T. Wilken, R. Holzwarth, and T.J. Kippenberg, Nature 450(7173), 1214-1217 (2007). T.J. Kippenberg, R. Holzwarth, and S.A. Diddams, Science 332, 555 (2011). L.A. Lugiato, F. Prati, and M. Brambilla, *Nonlinear Optical Systems* (Cambridge University Press, 2015). Eur. Phys. J. D 71 (2017), Topical Issue: “Theory and Applications of the Lugiato-Lefever Equation”, edited by Y.K. Chembo, D. Gomila, M. Tlidi, and C.R. Menyuk. M.A. Ferré, M.G. Clerc, S. Coulibally, R.G. Rojas, and M. Tlidi, Eur. Phys. J. D 71, 172 (2017). G.J. Morales, Y.C. Lee, Phys. Rev. Lett. 33, 1016 (1974). D.J. Kaup, A.C. Newell, Phys. Rev. B 18, 5162 (1978). K. Nozaki, N. Bekki, Phys. Lett. A 102, 383 (1984). G. Terrones, D.W. McLaughlin, E.A. Overman, and A.J. Pearlstein, SIAM J. Appl. Math. 50, 791 (1990). I.V. Barashenkov and Y.S. 
Smirnov, Existence and stability chart for the ac-driven, damped nonlinear Schrödinger solitons, Phys. Rev. E 54, 5707 (1996). P. Kirrmann, G. Schneider, and A. Mielke, The validity of modulation equations for extended systems with cubic nonlinearities, Proc. Roy. Soc. Edinburgh Sect. A 122, 85-91 (1992). W. Dörfler, A. Lechleiter, M. Plum, G. Schneider, and C. Wieners, *Photonic Crystals: Mathematical Analysis and Numerical Approximation*, Vol. 42 (Springer Science & Business Media, 2011). P. Collet and J.P. Eckmann, Commun. Math. Phys. 132, 139 (1990). A. van Harten, J. Nonlinear Sci. 1, 397 (1991). G. Schneider, Z. angew. Math. Phys. 45, 433 (1994). G. Schneider, J. Nonlinear Sci. 4, 23 (1994). Y. Muda, F.T. Akbar, R. Kusdiantara, B.E. Gunara, and H. Susanto, *Reduction of damped, driven Klein-Gordon equations into discrete nonlinear Schrödinger equation: justification and numerical comparisons*, submitted to Asymptotic Analysis (2018). J. Bourgain, *Global Solutions of Nonlinear Schrödinger Equations*, American Mathematical Society, Colloquium Publications Volume 46 (1999). T. Cazenave, *Semilinear Schrödinger Equations*, Courant Lecture Series Volume 10, American Mathematical Society (2003). F.A. Ficken and B.A. Fleishman, *Initial Value Problems and Time-Periodic Solutions for a Nonlinear Wave Equation*, Communications on Pure and Applied Mathematics, vol. X, 331-356 (1957).
--- abstract: 'Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation. However, training such networks is difficult due to the non-differentiable nature of asynchronous spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are only considered as noise. This enables an error backpropagation mechanism for deep SNNs, which works directly on spike signals and membrane potentials. Thus, compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. Our novel framework outperforms all previously reported results for SNNs on the permutation invariant MNIST benchmark, as well as the N-MNIST benchmark recorded with event-based vision sensors.' author: - | Jun Haeng Lee$^*$$^\dag$, Tobi Delbruck$^\dag$, Michael Pfeiffer$^\dag$\ $^*$Samsung Advanced Institute of Technology, Samsung Electronics\ `junhaeng2.lee@samsung.com`\ $^\dag$Institute of Neuroinformatics, University of Zurich and ETH Zurich\ `{tobi, pfeiffer}@ini.uzh.ch`\ bibliography: - 'arXiv\_2016\_SNNbackprop.bib' title: Training Deep Spiking Neural Networks using Backpropagation --- Introduction ============ Deep learning is achieving outstanding results in various machine learning tasks [@he2015deep; @lecun2015deep], but for applications that require real-time interaction with the real environment, the repeated and often redundant update of large numbers of units becomes a bottleneck for efficiency. An alternative has been proposed in the form of spiking neural networks (SNNs), a major research topic in theoretical neuroscience and neuromorphic engineering. 
SNNs exploit event-based, data-driven updates to gain efficiency, especially if they are combined with inputs from event-based sensors, which reduce redundant information based on asynchronous event processing [@camunas2012event; @merolla2014million; @oconnor2013real]. Even though in theory [@maass2004computational] SNNs have been shown to be as computationally powerful as conventional artificial neural networks (ANNs, this term will be used to describe conventional deep neural networks in contrast with SNNs), practically SNNs have not quite reached the same accuracy levels as ANNs in traditional machine learning tasks. A major reason for this is the lack of adequate training algorithms for deep SNNs, since spike signals are not differentiable, but differentiable activation functions are fundamental for using error backpropagation. A recently proposed solution is to use different data representations between training and processing, i.e. training a conventional ANN and developing conversion algorithms that transfer the weights into equivalent deep SNNs [@diehl2015fast; @esser2015backpropagation; @hunsberger2015spiking; @oconnor2013real]. However, in these methods, details of spike-train statistics that go beyond mean rates, such as those required for processing event-based sensor data, cannot be precisely represented by the signals used for training. It is therefore desirable to devise learning rules operating directly on spike trains, but so far it has only been possible to train single layers, and only with unsupervised learning rules, which leads to a deterioration of accuracy [@diehl2015unsupervised; @masquelier2007unsupervised; @neftci2014event]. An alternative approach has recently been introduced by [@oconnor2016deep], in which a SNN learns from spikes, but requires keeping statistics for computing stochastic gradient descent (SGD) updates in order to approximate a conventional ANN.
In this paper we introduce a novel supervised learning technique, which can train general forms of deep SNNs directly from spike signals. This includes SNNs with leaky membrane potential and spiking winner-takes-all (WTA) circuits. The key idea of our approach is to generate a continuous and differentiable signal on which SGD can work, using low-pass filtered spiking signals added onto the membrane potential and treating abrupt changes of the membrane potential as noise during error backpropagation. Additional techniques are presented that address particular challenges of SNN training: spiking neurons typically require large thresholds to achieve stability and reasonable firing rates, but this may result in many “dead” neurons, which do not participate in the optimization during training. Novel regularization and normalization techniques are presented, which contribute to stable and balanced learning. Our techniques lay the foundations for closing the performance gap between SNNs and ANNs, and promote their use for practical applications. Related Work {#related_work} ============ Gradient descent methods for SNNs have not been deeply investigated because of the non-differentiable nature of spikes. The most successful approaches to date have used indirect methods, such as training a network in the continuous rate domain and converting it into a spiking version. O’Connor et al. pioneered this area by training a spiking deep belief network (DBN) based on the Siegert event-rate approximation model [@oconnor2013real], but reached an accuracy of only $94.09\%$ on the MNIST handwritten digit classification task. Hunsberger and Eliasmith used the softened rate model for leaky integrate and fire (LIF) neurons [@hunsberger2015spiking], training an ANN with the rate model and converting it into a SNN consisting of LIF neurons. With the help of pre-training based on denoising autoencoders, they achieved $98.6\%$ in the permutation-invariant (PI) MNIST task. Diehl et al.
[@diehl2015fast] trained deep neural networks with conventional deep learning techniques and additional constraints necessary for conversion to SNNs. After training, the units were converted into spiking neurons and the performance was optimized by normalization of weight parameters, yielding $98.64\%$ accuracy in the PI MNIST task. Esser et al. [@esser2015backpropagation] used a differentiable probabilistic spiking neuron model for training and statistically sampled the trained network for deployment. In all of these methods, training was performed indirectly using continuous signals, which may not capture important statistics of spikes generated by sensors used during processing time. Even though SNNs are optimally suited for processing signals from event-based sensors such as the Dynamic Vision Sensor (DVS) [@lichtsteiner2008dvs], the previous SNN training models require discarding time information and generating image frames from the event streams. Instead, we use the same signal format for training and processing deep SNNs, and can thus train SNNs directly on spatio-temporal event streams. This is demonstrated on the neuromorphic N-MNIST benchmark dataset [@orchard2015converting], outperforming all previous attempts that ignored spike timing. Spiking Neural Networks {#snn} ======================= In this article we study fully connected SNNs with multiple hidden layers. Let $M$ and $N$ be the number of synapses of a neuron and the number of neurons in a layer, respectively. Correspondingly, $m$ and $n$ denote the number of active synapses (i.e. synapses receiving spike inputs) of a neuron and the number of active neurons (sending spike outputs) in a layer.
We will also use the simplified form of indices for active synapses and neurons throughout the paper as *Active synapses:* {$v_1, \cdots, v_m$}$\rightarrow${$1, \cdots, m$}, *Active neurons:* {$u_1, \cdots, u_n$}$\rightarrow${$1, \cdots, n$} Thus, if an index $i$, $j$, or $k$ is used for a synapse over \[1, $m$\] or a neuron over \[1, $n$\] (e.g. in (\[eq:mp\_full\])), it actually represents an index of an active synapse ($v_i$) or an active neuron ($u_j$). Leaky Integrate-and-Fire (LIF) Neuron {#lifn} ------------------------------------- The LIF neuron is one of the simplest models used for describing dynamics of spiking neurons [@gerstner2002spiking]. Since the states of LIF neurons can be updated asynchronously based on the timing of input events, it is a very efficient model in terms of computational cost. For a given input spike the membrane potential of a LIF neuron can be updated as $$\label{eq:lifmp} V_{mp} (t_p)=V_{mp}(t_{p-1})e^{\frac{t_{p-1} - t_p}{\tau_{mp}}} + w_i^{(p)}w_{dyn},$$ where $V_{mp}$ is the membrane potential, $\tau_{mp}$ is the membrane time constant, $t_p$ and $t_{p-1}$ are the present and previous input spike times, $w_i^{(p)}$ is the synaptic weight of the $i$-th synapse (through which the present $p$-th input spike arrives). $w_{dyn}$ is a dynamic weight controlling the refractory period, defined as $w_{dyn} = w_{d0}+(\Delta_t/T_{ref})^2$ if $\Delta_t < T_{ref}$ and $w_{dyn}<1$, and $w_{dyn}=1$ otherwise. $T_{ref}$ is the refractory period, $w_{d0}$ is the initial value (usually $0$), and $\Delta_t = t_{out} - t_p$, where $t_{out}$ is the time of the latest output spike produced by the neuron. Thus, the effect of input spikes on $V_{mp}$ is suppressed for a short period of time $T_{ref}$ after an output spike. $w_{dyn}$ recovers quadratically to $1$ after the output spike at $t_{out}$. Since $w_{dyn}$ is applied to all synapses identically, it is different from short-term plasticity, which is a synapse specific mechanism. 
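As an illustration of the event-driven update just described, the following is a minimal Python sketch of (\[eq:lifmp\]) together with the dynamic weight $w_{dyn}$; the function name, argument names, and default constants are ours, not the paper's implementation.

```python
import math

def lif_update(V, t_prev, t_now, w_in, t_out,
               tau_mp=0.020, T_ref=0.001, w_d0=0.0):
    """Event-driven LIF membrane update for one input spike (times in seconds).

    Implements V(t_p) = V(t_{p-1}) * exp((t_{p-1} - t_p)/tau_mp) + w_i * w_dyn,
    where w_dyn suppresses inputs within T_ref of the last output spike t_out
    and recovers quadratically to 1. Illustrative sketch only.
    """
    # Exponential decay of the membrane potential since the previous input spike.
    V = V * math.exp((t_prev - t_now) / tau_mp)
    # Dynamic weight: quadratic recovery during the refractory period.
    dt = t_now - t_out
    if dt < T_ref:
        w_dyn = min(1.0, w_d0 + (dt / T_ref) ** 2)
    else:
        w_dyn = 1.0
    return V + w_in * w_dyn
```

Because the state only changes at input events, a simulator built on this update never needs a fixed time step.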
When $V_{mp}$ crosses the threshold value $V_{th}$, the LIF neuron generates an output spike and $V_{mp}$ is decreased by a fixed amount proportional to the threshold: $$\label{eq:mpreset} V_{mp} (t_p^+)=V_{mp}(t_p) - \gamma V_{th},$$ where $\gamma$ is the membrane potential reset factor and $t_p^+$ is the time right after the reset. We used $\gamma = 1$ for all the results in this paper. The valid range of the membrane potential is limited to \[$-V_{th}$, $V_{th}$\]. Since the upper limit is guaranteed by (\[eq:mpreset\]), the membrane potential is clipped to $-V_{th}$ when it falls below this value. This strategy helps balance the participation of neurons during training. We will revisit this issue when we introduce threshold regularization in Section \[th\_regularization\]. Winner-Take-All (WTA) Circuit {#wta} ----------------------------- We found that the accuracy of SNNs could be improved by introducing a competitive recurrent architecture called a WTA circuit in certain layers. In a WTA circuit, multiple neurons form a group with lateral inhibitory connections. Thus, as soon as any neuron produces an output spike, it inhibits all other neurons in the circuit and prevents them from spiking [@rozell2008sparse]. In this work, all lateral connections in a WTA circuit have the same strength, which reduces the memory and computational costs of implementing them. The amount of lateral inhibition applied to the membrane potential is designed to be proportional to the inhibited neuron’s membrane potential threshold (see (\[eq:mp\_full\]) in Section \[tfunction\]). With this scheme, lateral connections inhibit neurons having small $V_{th}$ weakly and those having large $V_{th}$ strongly. This improves the balance of activities among neurons during training. As shown in Results, WTA competition in the SNN led to remarkable improvements, especially in networks with a single hidden layer. The WTA circuit also improves the stability and speed of training.
Using Backpropagation in SNNs {#backprop} ============================= We now derive the transfer function for spiking neurons in WTA configuration and the SNN backpropagation equations. We also introduce simple methods to initialize parameters and normalize backpropagating errors to address vanishing or exploding gradients, and to stabilize training. Transfer function and derivatives {#tfunction} --------------------------------- From the event-based update in (\[eq:lifmp\]), the accumulated effects of the $k$-th synapse onto the membrane potential (normalized by synaptic weight) and the membrane potential reset in (\[eq:mpreset\]) (normalized by $\gamma V_{th}$) at time $t$ can be derived as $$\label{eq:activity} x_k(t)=\sum_{p}{\exp \left(\frac{t_p - t}{\tau_{mp}}\right)}, \quad a_i(t)=\sum_{q}{\exp \left(\frac{t_q - t}{\tau_{mp}}\right)},$$ where the sum is over all input spike times $t_p<t$ of the synapse for $x_k$, and the output spike times $t_q<t$ for $a_i$. The accumulated effects of lateral inhibitory signals in WTA circuits can be expressed analogously to (\[eq:activity\]). Ignoring the effect of refractory periods for now, this means that the membrane potential of the $i$-th active neuron in a WTA circuit can be written as $$\label{eq:mp_full} V_{mp,i} (t)=\sum_{k=1}^{m}{w_{ik}x_k(t)} - \gamma V_{th,i}a_i(t) + \sigma V_{th,i}\sum_{j=1, j\neq i}^{n} {\kappa_{ij}a_j(t)}.$$ The terms on the right side represent the input, membrane potential resets, and lateral inhibition, respectively. $x_k$ denotes the effect of the $k$-th active input neuron, and $a_i$ the effect induced by output activity of the $i$-th active neuron, as defined in (\[eq:activity\]). $\kappa_{ij}$ is the strength of lateral inhibition ($-1 \leq \kappa_{ij} < 0$) from the $j$-th active neuron to the $i$-th active neuron, and $\sigma$ is the expected efficacy of lateral inhibition. 
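The accumulated spike traces in (\[eq:activity\]) and the membrane potential in (\[eq:mp\_full\]) can be sketched directly in Python; the following is an illustrative re-implementation with plain lists (names and defaults are ours, not the authors' code).

```python
import math

def spike_trace(spike_times, t, tau_mp=0.020):
    """Accumulated, exponentially decaying effect of past spikes at time t:
    x_k(t) = sum_{t_p < t} exp((t_p - t) / tau_mp).
    The same form gives a_i(t) when applied to output spike times."""
    return sum(math.exp((tp - t) / tau_mp) for tp in spike_times if tp < t)

def membrane_potential(W, x, a, V_th, kappa, i, gamma=1.0, sigma=0.5):
    """Membrane potential of active neuron i in a WTA circuit:
    input drive, minus the reset term, plus lateral inhibition
    (kappa entries are negative), as in the V_mp,i expression."""
    drive = sum(W[i][k] * x[k] for k in range(len(x)))
    reset = gamma * V_th[i] * a[i]
    inhibit = sigma * V_th[i] * sum(kappa[i][j] * a[j]
                                    for j in range(len(a)) if j != i)
    return drive - reset + inhibit
```

Since the output trace $a_i$ of one layer plays the role of the input trace $x_k$ of the next, these two helpers are all that is needed to evaluate the forward relation layer by layer.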
$\sigma$ should be smaller than $1$, since lateral inhibition can affect the membrane potential only down to its lower bound (i.e. $-V_{th}$). We found a value of $\sigma \approx 0.5$ to work well in practice. Eq. (\[eq:mp\_full\]) reveals the relationship between inputs and outputs of spiking neurons which is not clearly shown in (\[eq:lifmp\]) and (\[eq:mpreset\]). Since the output ($a_i$) of the current layer becomes the input ($x_k$) of the next layer if all the neurons have the same $\tau_{mp}$, (\[eq:mp\_full\]) provides the basis for backpropagation. Differentiation is not defined in (\[eq:activity\]) at the moment of each spike because of a step jump. However, we can regard these jumps as noise while treating (\[eq:activity\]) and (\[eq:mp\_full\]) as differentiable continuous signals to derive derivatives for backpropagation. In previous works [@diehl2015fast; @esser2015backpropagation; @hunsberger2015spiking; @oconnor2013real], continuous variables were introduced as a surrogate for $x_k$ and $a_i$ in (\[eq:mp\_full\]) for backpropagation. In this work, however, we directly use the contribution of spike signals to the membrane potential as defined in (\[eq:activity\]). Thus, the real statistics of spike signals, including temporal effects such as synchrony between inputs, can influence the training process. Ignoring the step jumps caused by spikes in the calculation of gradients might of course introduce errors, but we found in practice that this has very little influence on SNN training. A potential explanation is that regarding the signals in (\[eq:activity\]) as continuous signals corrupted by noise at the times of spikes can have a similar positive effect as the widely used approach of noise injection during training, which can improve the generalization capability of neural networks [@vincent2008extracting].
In the case of SNNs, several papers have used the trick of treating spike-induced abrupt changes as noise for gradient descent optimization [@bengio2015objective; @hunsberger2015spiking]. However, in these cases the model added Gaussian random noise instead of spike-induced perturbations. In this work, we directly use the actual contribution of spike signals to the membrane potential as described in (\[eq:activity\]) for training SNNs. Here we show that this approach works well for learning in SNNs where information is encoded in spike rates, but importantly, the presented framework also provides the basis for utilizing specific spatio-temporal codes, which we demonstrate on a task using inputs directly from event-based sensors. For the backpropagation equations we need to obtain the transfer functions of LIF neurons in the WTA circuit. For this we set the residual $V_{mp}$ term on the left side of (\[eq:mp\_full\]) to zero (since it is not relevant to the transfer function), resulting in the transfer function $$\label{eq:transfer_func} a_i \approx \frac{s_i}{\gamma V_{th,i}} + \frac{\sigma \sum_{j = 1, j \neq i}^{n}{\kappa_{ij}a_j}}{\gamma}, \text{ where } s_i = \sum_{k=1}^{m}{w_{ik}x_k}.$$ Refractory periods are not considered here since the activity of neurons in SNNs is rarely dominated by refractory periods in a normal operating regime. For example, we used a refractory period of $1$ ms and the event rates of individual neurons were kept within a few tens of events per second (eps). Eq. (\[eq:transfer\_func\]) is consistent with (4.9) in [@gerstner2002spiking] without WTA terms. It can also be simplified to a spiking version of a rectified-linear unit by introducing a unit threshold and non-leaky membrane potential as in [@oconnor2016deep].
Directly differentiating (\[eq:transfer\_func\]) yields the backpropagation equations $$\label{eq:da_ds} \frac{\partial a_i}{\partial s_i} \approx \frac{1}{\gamma V_{th,i}}, \frac{\partial a_i}{\partial w_{ik}} \approx \frac{\partial a_i}{\partial s_i}x_k, \frac{\partial a_i}{\partial V_{th,i}} \approx \frac{\partial a_i}{\partial s_i}(-\gamma a_i + \sigma \sum_{j \neq i}^{n} {\kappa_{ij} a_j}), \frac{\partial a_i}{\partial \kappa_{ih}} \approx \frac{\partial a_i}{\partial s_i}(\sigma V_{th,i}a_h),$$ $$\label{eq:da_dx} \begin{bmatrix} \frac{\partial a_1}{\partial x_k} \\ \vdots \\ \frac{\partial a_n}{\partial x_k} \end{bmatrix} \approx \frac{1}{\sigma} \begin{bmatrix} q & \cdots & -\kappa_{1n} \\ \vdots & \ddots & \vdots \\ -\kappa_{n1} & \cdots & q \end{bmatrix}^{-1} \begin{bmatrix} \frac{w_{1k}}{V_{th,1}} \\ \vdots \\ \frac{w_{nk}}{V_{th,n}} \end{bmatrix}$$ where $q=\gamma/\sigma$. When all the lateral inhibitory connections have the same strength ($\kappa_{ij} = \mu, \forall i, j$) and are not learned, $\partial a_i/\partial \kappa_{ih}$ is not necessary and (\[eq:da\_dx\]) can be simplified to $$\label{eq:da_dx2} \frac{\partial a_i}{\partial x_k} \approx \frac{\partial a_i}{\partial s_i} \frac{\gamma}{(\gamma-\mu\sigma)} \left( w_{ik} - \frac{\mu\sigma V_{th,i}}{\gamma+\mu\sigma (n-1)} \sum_{j=1}^{n}{\frac{w_{jk}}{V_{th,j}}} \right).$$ We consider only the first-order effect of the lateral connections in the derivation of gradients. Higher-order terms propagating back through multiple lateral connections are neglected for simplicity. This is mainly because all the lateral connections considered here are inhibitory. For inhibitory lateral connections, the effect of small parameter changes decays rapidly with connection distance. Thus, the first-order approximation saves a lot of computational cost without loss of accuracy.
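For the special case of uniform, fixed lateral inhibition, (\[eq:da\_dx2\]) can be sketched as below. A useful sanity check is that for $n=1$ the expression reduces to $w_{ik}/(\gamma V_{th,i})$, i.e. $\partial a_i/\partial s_i \cdot w_{ik}$. The function name and default values are illustrative, not from the paper's code.

```python
def grad_a_wrt_x(W_col, V_th, i, mu=-0.4, gamma=1.0, sigma=0.5):
    """First-order gradient da_i/dx_k under uniform lateral inhibition mu.

    W_col holds the k-th weight column across the n active neurons of the
    layer; V_th holds their thresholds. Implements the simplified
    backprop expression (illustrative sketch)."""
    n = len(W_col)
    s = sum(W_col[j] / V_th[j] for j in range(n))
    da_ds = 1.0 / (gamma * V_th[i])
    correction = (mu * sigma * V_th[i]) / (gamma + mu * sigma * (n - 1)) * s
    return da_ds * gamma / (gamma - mu * sigma) * (W_col[i] - correction)
```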
Initialization and Error Normalization {#normalization} -------------------------------------- Good initialization of weight parameters in supervised learning is critical to handle the exploding or vanishing gradients problem in deep neural networks [@glorot2010understanding; @he2015delving]. The basic idea behind these methods is to maintain the balance of forward activations and backward propagating errors among layers. Recently, the batch normalization technique has been proposed to make sure that such balance is maintained through the whole training process [@ioffe2015batch]. However, normalization of activities as in the batch normalization scheme is difficult for SNNs, because there is no efficient method for amplifying event rates. The initialization methods proposed in [@glorot2010understanding; @he2015delving] are not appropriate for SNNs either, because SNNs have positive thresholds that are usually much larger than individual weight values. In this work, we propose simple methods for initializing parameters and normalizing backprop errors for training deep SNNs. Even though the proposed technique does not guarantee the balance of forward activations, it is effective for addressing the exploding and vanishing gradients problems. The weight and threshold parameters of neurons in the $l$-th layer are initialized as $$\label{eq:initialization} w^{(l)} \sim U\left[ -\sqrt{3/M^{(l)}}, \sqrt{3/M^{(l)}}\right], \quad V_{th}^{(l)}=\alpha\sqrt{3/M^{(l)}}, \quad \alpha > 1,$$ where $U[-a, a]$ is the uniform distribution in the interval $[-a, a]$, $M^{(l)}$ is the number of synapses of each neuron, and $\alpha$ is a constant. $\alpha$ should be large enough to stabilize spiking neurons, but small enough to make the neurons respond to the inputs through multiple layers. We used values between 3 and 10 for $\alpha$.
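A minimal sketch of the initialization in (\[eq:initialization\]) for a single neuron, assuming plain Python lists (the helper name, seeding, and default $\alpha$ are our own choices):

```python
import math
import random

def init_neuron(M, alpha=5.0, seed=0):
    """Draw M synaptic weights uniformly from [-sqrt(3/M), sqrt(3/M)] and
    set the threshold to alpha * sqrt(3/M), following the paper's scheme.
    With this bound, E[sum_i w_i^2] = 1. Illustrative sketch only."""
    rng = random.Random(seed)
    bound = math.sqrt(3.0 / M)
    w = [rng.uniform(-bound, bound) for _ in range(M)]
    V_th = alpha * bound
    return w, V_th
```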
The weights initialized by (\[eq:initialization\]) satisfy the following condition: $$\label{eq:weight_condition} E\left[\sum_i^{M^{(l)}}(w_{ji}^{(l)})^2\right] = 1 \quad \text{or} \quad E\left[ (w_{ji}^{(l)})^2\right]=\frac{1}{M^{(l)}}.$$ This condition is used for backprop error normalization in the next paragraph. In addition, to ensure stability, the weight parameters are regularized by decaying them so that they do not deviate too much from (\[eq:weight\_condition\]) throughout training. We will discuss this in detail in Section 5.1. The main idea of backprop error normalization is to balance the magnitude of updates in weight (and threshold) parameters among layers. In the $l$-th layer $(N^{(l)} = M^{(l+1)}, n^{(l)} = m^{(l+1)})$, we define the error propagating back through the $i$-th active neuron as $$\label{eq:delta_norm} \delta_i^{(l)}=\frac{g_i^{(l)}}{\bar{g}^{(l)}}\sqrt{\frac{M^{(l+1)}}{m^{(l+1)}}}\sum_j^{n^{(l+1)}}w_{ji}^{(l+1)}\delta_j^{(l+1)},$$ where $g_i^{(l)}=1/V_{th,i}^{(l)}$, $\bar{g}^{(l)}=\sqrt{E \left[ (g_i^{(l)})^2 \right]} \cong \sqrt{\frac{1}{n^{(l)}}\sum_i^{n^{(l)}}(g_i^{(l)})^2}$. Thus, with (\[eq:weight\_condition\]), the expectation of the squared sum of errors (i.e., $E[ \sum_i^{n^{(l)}}{(\delta_i^{(l)})^2} ]$) can be maintained constant through layers. Although this was confirmed for the case without a WTA circuit, we found that it still approximately holds for networks using WTA. Weight and threshold parameters are updated as: $$\label{eq:update} \Delta w_{ij}^{(l)}=-\eta_w\sqrt{\frac{N^{(l)}}{m^{(l)}}}\delta_i^{(l)}\hat{x}_j^{(l)}, \quad \Delta V_{th,i}^{(l)}=-\eta_{th}\sqrt{\frac{N^{(l)}}{m^{(l)}M^{(l)}}}\delta_i^{(l)}\hat{a}_i^{(l)},$$ where $\eta_w$ and $\eta_{th}$ are the learning rates for weight and threshold parameters, respectively.
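The normalized error in (\[eq:delta\_norm\]) can be sketched as follows, using $g_i = 1/V_{th,i}$ and plain lists; this is an illustrative re-implementation with names of our choosing, not the authors' code.

```python
import math

def backprop_error(delta_next, W_next, V_th, M_next, m_next):
    """Normalized backprop error for one layer:
    delta_i = (g_i / g_bar) * sqrt(M^(l+1) / m^(l+1)) * sum_j w_ji * delta_j,
    where g_i = 1/V_th,i and g_bar is the RMS of g over active neurons.
    W_next[j][i] is the weight from neuron i to neuron j of the next layer."""
    g = [1.0 / v for v in V_th]
    g_bar = math.sqrt(sum(x * x for x in g) / len(g))
    scale = math.sqrt(M_next / m_next)
    return [g[i] / g_bar * scale *
            sum(W_next[j][i] * delta_next[j] for j in range(len(delta_next)))
            for i in range(len(V_th))]
```

The $g_i/\bar{g}$ factor keeps the squared sum of errors roughly constant across layers even when thresholds differ between neurons.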
We found that the threshold values tend to decrease over the training epochs, since SGD decreases the threshold whenever the target neuron does not fully respond to the corresponding input. Small thresholds, however, could lead to exploding firing rates within the network. Thus, we used smaller learning rates for threshold updates to prevent the threshold parameters from decreasing too much. $\hat{x}$ and $\hat{a}$ in (\[eq:update\]) are the effective input and output activities defined as: $\hat{x}_j=x_j$, $\hat{a}_i=\gamma a_i - \sigma \sum_{j \neq i}^n{\kappa_{ij}a_j}$. By using (\[eq:update\]), at the initial stage of training, the amount of updates depends on the expectation of per-synapse activity of active inputs, regardless of the number of active synapses or neurons. Thus, we can balance updates among layers in deep SNNs. Regularization ============== As in conventional ANNs, regularization techniques such as weight decay during training are essential to improve the generalization capability of SNNs. Another problem in training SNNs is that because thresholds need to be initialized to large values, only a few neurons respond to input stimuli and many of them remain silent. This is a significant problem, especially in WTA circuits. In this section we introduce weight and threshold regularization methods to address these problems. Weight Regularization {#w_regularization} --------------------- Weight decay regularization is used to improve the stability of SNNs as well as their generalization capability. Specifically, we want to maintain the condition in (\[eq:weight\_condition\]). Conventional L2-regularization was found to be inadequate for this purpose, because it leads to an initial fast growth, followed by a continued decrease of weights. To address this issue, we introduce a new method named exponential regularization, inspired by max-norm regularization [@srivastava2014dropout].
The cost function of exponential regularization for neuron $i$ of layer $l$ is defined as: $$\label{eq:w_reg} L_w(l,i)=\frac{1}{2}\lambda e^{\beta \left( \sum_j^{M^{(l)}}(w_{ij}^{(l)})^2 - 1\right)},$$ where $\beta$ and $\lambda$ are parameters to control the balance between error correction and regularization. L2-regularization has a constant rate of decay regardless of weight values, whereas max-norm regularization imposes an upper bound on weight increase. Exponential regularization is a compromise between the two. The decay rate is exponentially proportional to the squared sum of weights. Thus, like max-norm regularization, it strongly prohibits the increase of weights. Weight parameters always decay, at any value, which improves the generalization capability as in L2-regularization. However, exponential regularization prevents weights from decreasing too much, because the decay rate shrinks along with the weights. Thus, the magnitude of weights can be easily maintained at a certain level. Threshold Regularization {#th_regularization} ------------------------ Threshold regularization is used to balance the activities among $N$ neurons receiving the same input stimuli. When $N_w$ neurons fire after receiving an input spike, their thresholds are increased by $\rho N$. Subsequently, for all $N$ neurons, the threshold is decreased by $\rho N_w$. Thus, highly active neurons become less sensitive to input stimuli due to the increase of their thresholds. On the other hand, rarely active neurons can respond more easily to subsequent stimuli. Because the membrane potentials are restricted to the range $[-V_{th}, V_{th}]$, neurons with smaller thresholds, because of their tighter lower bound, tend to be less influenced by negative inputs. Threshold regularization actively prevents dead neurons and encourages all neurons to contribute equally to the optimization. This kind of regularization has been used for competitive learning previously [@rumelhart1985feature].
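Both regularizers can be sketched in a few lines of Python. For exponential regularization, the per-weight decay follows from differentiating (\[eq:w\_reg\]): $\partial L_w/\partial w_{ij} = \lambda\beta\, w_{ij}\, e^{\beta(\sum_j w_{ij}^2 - 1)}$. This is an illustrative sketch with names and defaults of our choosing (the threshold lower bound discussed below is omitted).

```python
import math

def exp_reg_decay(w_row, lam=0.02, beta=10.0):
    """Per-weight decay term from exponential regularization:
    dL/dw_ij = lam * beta * w_ij * exp(beta * (sum_j w_ij^2 - 1)).
    Decay grows sharply once the squared norm exceeds 1 (max-norm-like)
    and fades as the norm shrinks (avoiding excessive decay)."""
    sq = sum(w * w for w in w_row)
    factor = lam * beta * math.exp(beta * (sq - 1.0))
    return [factor * w for w in w_row]

def threshold_regularize(V_th, fired, rho=0.0001):
    """Homeostatic threshold update for one input spike: each neuron that
    fired gets +rho*N, then every neuron gets -rho*N_w, where N_w is the
    number of neurons that fired. Active neurons become less sensitive,
    silent ones more sensitive; the sum of thresholds is preserved."""
    N, N_w = len(V_th), sum(fired)
    return [v + (rho * N if f else 0.0) - rho * N_w
            for v, f in zip(V_th, fired)]
```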
We set a lower bound on thresholds to prevent spiking neurons from firing too much due to extremely small threshold values. If the threshold of a neuron is supposed to go below the lower bound, then instead of decreasing the threshold, all weight values of the neuron are increased by the same amount. Threshold regularization was done during the forward propagation in training. Results and Discussion {#result} ====================== Using the regularization term from (\[eq:w\_reg\]), the objective function for each training sample (using batch size = 1) is given by $L=\frac{1}{2}\|a-y\|^2 \ + \sum_{l \in hidden}\sum_i{L_w(l,i)}$, where $y$ is the label vector and $a$ is the output vector. Each element of $a$ is defined as $a_i=\#spike_i/\max_j(\#spike_j)$, where $\#spike_i$ is the number of output spikes generated by the $i$-th neuron of the output layer. The output is normalized by the maximum value instead of the sum of all outputs. With this scheme, it is not necessary to use weight regularization for the output layer.

  Parameters    Values                               Used In
  ------------- ------------------------------------ -----------------------------------
  $\tau_{mp}$   20 ms (MNIST), 200 ms (N-MNIST)      (\[eq:lifmp\]), (\[eq:activity\])
  $T_{ref}$     1 ms                                 (\[eq:lifmp\])
  $\alpha$      $3 - 10$                             (\[eq:initialization\])
  $\eta_{w}$    $0.002 - 0.004$                      (\[eq:update\])
  $\eta_{th}$   $0.1\eta_w$ (SGD), $\eta_w$ (ADAM)   (\[eq:update\])
  $\beta$       10                                   (\[eq:w\_reg\])
  $\lambda$     $0.002 - 0.04$                       (\[eq:w\_reg\])
  $\rho$        $0.00004 - 0.0002$                   \[th\_regularization\]

  : Values of parameters used in the experiments[]{data-label="param_table"}

The PI MNIST task was used for performance evaluation [@lecun1998gradient]. MNIST is a handwritten digit classification dataset consisting of 60,000 training samples and 10,000 test samples. The permutation-invariant version was chosen to directly measure the power of the fully-connected classifier.
By randomly permuting the input stimuli, we preclude techniques that exploit spatial correlations within inputs, such as data augmentation or convolutions, from improving performance. An event stream is generated from a $28 \times 28$ pixel image of a handwritten digit at the input layer. The intensity of each pixel defines the event rate of Poisson events. We normalized the total event rate to be 5 keps ($\sim$43 eps per non-zero pixel on average). The accuracy of the SNN tends to improve as the integration time (i.e. the duration of the input stimuli) increases. We used a 1 second duration of the input event stream during accuracy measurements to obtain stable results. Further increases of the integration time improved the accuracy only marginally ($<0.1\%$). During training, only 50 ms presentations per digit were used to reduce the training time. In the initial phase of training deep SNNs, neuron activities tend to quickly decrease while propagating into higher layers, due to non-optimal weights and large thresholds. Thus, for the networks with 2 hidden layers (HLs), the first epoch was used as an initial training phase by increasing the duration of the input stimuli to 200 ms. All 60,000 samples were used for training, and 10,000 samples for testing. No validation set or early stopping were used. The learning rate and threshold regularization were decayed by $\exp(-1/35)$ every epoch. Typical values for parameters are listed in Table \[param\_table\]. We trained and evaluated SNNs with different-sized hidden layers (784-$N$-10, where $N$ = 100, 200, 300) and varied the strength of lateral inhibitory connections in WTA circuits (in the HL and the output layer) to find their optimal values. All the networks were initialized with the same weight values and trained for 150 epochs. The reported accuracy is the average over epochs \[131, 150\], which reduces the fluctuation caused by random spike timing in the input spike stream and training.
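The Poisson input encoding described above can be sketched as follows, normalizing the total event rate across pixels; the function name, seed handling, and defaults are ours, not from the paper's implementation.

```python
import random

def poisson_events(pixels, duration=0.05, total_rate=5000.0, seed=0):
    """Generate a Poisson input event stream from pixel intensities,
    with the total rate over all pixels normalized to total_rate
    (~5 keps in the paper). Returns (time, pixel_index) pairs sorted
    by time. Illustrative sketch only."""
    rng = random.Random(seed)
    s = sum(pixels)
    events = []
    for i, p in enumerate(pixels):
        if p <= 0:
            continue
        rate = total_rate * p / s  # per-pixel rate proportional to intensity
        t = rng.expovariate(rate)  # exponential inter-event intervals
        while t < duration:
            events.append((t, i))
            t += rng.expovariate(rate)
    events.sort()
    return events
```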
Figure \[fig1\](a) shows the accuracy measured by varying the lateral inhibition strength in the first HL. The best performance was obtained when the lateral inhibition was at -0.4, regardless of $N$. For the output layer, we found that -1.0 gave the best result. Table \[accuracy\_table\] shows the accuracies of various shallow and deep architectures in comparison with previous reports. For the deep SNNs with 2 HLs, the first HL and the output layer were competing in a WTA circuit. The strength of the lateral inhibition was -0.4 and -1.0, respectively, as in the case of the SNNs with 1 HL. However, for the second HL, the best accuracy was obtained without a WTA circuit, which possibly means that the outputs of the first hidden layer cannot be sparsified as much as the original inputs without losing information. The best accuracy ($98.64\%$) obtained from the SNN with 1 HL was better than that of the shallow ANN (i.e. MLP) ($98.4\%$) and matched the previous state-of-the-art of deep SNNs [@diehl2015fast; @hunsberger2015spiking]. We attribute this improvement to the use of WTA circuits and the direct optimization on spike signals. The best accuracy of the SNN with 2 HLs was $98.7\%$ with vanilla SGD. By applying the ADAM learning method ($\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$) [@kingma2014adam], we could further improve the best accuracy up to $98.77\%$, which is in the range of ANNs trained with Dropout or DropConnect [@srivastava2014dropout; @wan2013regularization]. To investigate the potential of the proposed method on event stream data, we trained simple networks with 1 HL on the N-MNIST dataset, a neuromorphic version of MNIST. It was generated by moving a Dynamic Vision Sensor (DVS) [@lichtsteiner2008dvs] in front of projected images of digits [@orchard2015converting]. A 3-phase saccadic movement of the DVS (Figure \[fig1\](b)) is responsible for generating events, and shifts the position of the digit in pixel space.
The previous state-of-the-art result achieved $95.72\%$ accuracy with a spiking convolutional neural network (CNN) [@neil2016effective]. Their approach was based on [@diehl2015fast], converting an ANN to an SNN instead of directly training on spike trains. This led to a large accuracy drop after conversion ($98.3\% \rightarrow 95.72\%$), even though the event streams were pre-processed to center the digits. In this work, however, we work directly on the original uncentered data. For training, 300 consecutive events were picked at random positions from each event stream, whereas the full event streams were used for evaluating the test accuracy. Since the DVS generates two types of events (on-events for intensity increases, off-events for intensity decreases), we separated the events into two channels based on the event type. Table \[accuracy\_table\] shows that our result of $98.53\%$ with 500 hidden units is the best N-MNIST result with SNNs reported to date. ![(a) Accuracy vs. strength of lateral inhibition in the hidden layer for PI MNIST. (b) Illustration of the saccades used to generate the N-MNIST dataset and resulting event streams [@orchard2015converting].[]{data-label="fig1"}](fig1d){width="\textwidth"} We have shown that our novel spike-based backpropagation technique for deep SNNs works both on standard benchmarks such as PI MNIST, and on N-MNIST, which contains rich spatio-temporal structure in the events generated by a neuromorphic vision sensor. We improve the previous state-of-the-art of SNNs on both tasks and achieve accuracy levels that match those of conventional deep networks. Closing this gap makes deep SNNs attractive for tasks with highly redundant information or energy constrained applications, due to the benefits of event-based computation, and advantages of efficient neuromorphic processors [@merolla2014million].
We expect that the proposed technique can precisely capture the statistics of spike signals generated from event-based sensors, which is an important advantage over previous SNN training methods. Future work will extend our training approach to new architectures, such as CNNs and recurrent networks.

  Network                                          \# units in HLs                 Test accuracy (%)
  ------------------------------------------------ ------------------------------- ------------------------------
  ANN ([@srivastava2014dropout], Drop-out)         4096-4096                       98.99
  ANN ([@wan2013regularization], Drop-connect)     800-800                         98.8
  ANN ([@goodfellow2013maxout], maxout)            240 $\times$ 5-240 $\times$ 5   99.06
  SNN ([@oconnor2013real])$^{a,b}$                 500-500                         94.09
  SNN ([@hunsberger2015spiking])$^a$               500-300                         98.6
  SNN ([@diehl2015fast])                           1200-1200                       98.64
  SNN ([@oconnor2016deep])                         200-200                         97.8
  SNN (SGD, This work)                             800                             \[98.56, 98.64, 98.71\]$^*$
  SNN (SGD, This work)                             500-500                         \[98.63, 98.70, 98.76\]$^*$
  SNN (ADAM, This work)                            300-300                         \[98.71, 98.77, 98.88\]$^*$
  N-MNIST (centered), ANN ([@neil2016effective])   CNN                             98.3
  N-MNIST (centered), SNN ([@neil2016effective])   CNN                             95.72
  N-MNIST (uncentered), SNN (This work)            500                             \[98.45, 98.53, 98.61\]$^*$

  : Comparison of accuracy of different models on PI MNIST without unsupervised pre-training or data augmentation (except SNN ([@oconnor2013real]) and SNN ([@hunsberger2015spiking])) and N-MNIST [@orchard2015converting].[]{data-label="accuracy_table"}

\
a: pretraining, b: data augmentation, \*: \[min, average, max\] values over epochs \[181, 200\].
--- address: | (1) Laboratoire d’Informatique de Grenoble (LIG), UGA, G-INP, CNRS, INRIA, France\ (2) Department of Computer Science, University of Sheffield, England\ (3) Institute of Informatics (INF), UFRGS, Brazil\ **contact:** marcely.zanon-boito@univ-grenoble-alpes.fr\ bibliography: - 'emnlp-ijcnlp-2019.bib' title: | Investigating Language Impact in Bilingual Approaches for\ Computational Language Documentation --- Bibliographical References {#reference} ==========================
--- abstract: 'In this paper, we present an optical computing method for string data alignment applicable to genome information analysis. By applying the moir[é]{} technique to spatially encoded patterns of deoxyribonucleic acid (DNA) sequences, association information between the genome and the expressed phenotypes can be extracted more effectively. The moir[é]{} fringes reveal occurrences of matching, deletion, and insertion between DNA sequences, providing useful visualized information for the prediction of gene function and the classification of species. Furthermore, by applying a cylindrical lens, a new technique is proposed to map two-dimensional (2D) association information to a one-dimensional (1D) column of pixels, where each pixel in the column represents the superposition of all bright and dark pixels in the corresponding row. With this time-saving preprocessing, local similarities between two patterns of interest can readily be found using just a 1D array of photodetectors, and post-processing can be restricted to the specified parts of the initial 2D pattern. We also evaluate our proposed circular encoding, adapted to poor alignment conditions. Our simulation results, together with an experimental implementation, verify the effectiveness of our proposed dynamic methods, which significantly improve system parameters such as processing gain and signal-to-noise ratio (SNR).' author: - title: Mining DNA Sequences Based on Spatially Coded Technique Using Spatial Light Modulator --- String data alignment, moir[é]{} pattern, DNA sequencing, spatial light modulator.

Introduction
============

The emergence of various widespread human diseases has accelerated the growth of genomics. Accordingly, the analysis of deoxyribonucleic acid (DNA) sequences, the medium storing the most important information about the properties of an organism, has intrigued many researchers to extract significant knowledge about the life sciences [@kinser2000mining; @shendure2008next; @rajan2014two].
As a common event in the evolutionary process, mutation modifies DNA sequences, which comprise a finite number of basic elements known as nucleotides, i.e., adenine (A), cytosine (C), guanine (G), and thymine (T), which are independent of each other. Since the sequence data are concealed in a collection of one-dimensional (1D) strings forming a genome, string data alignment, or pattern matching against a genome sequence, is critical for the comparison and interpretation of DNA-based structures [@eid2009real]. With rapidly evolving DNA sequencing, searching through highly extensive DNA databases to identify occurrences of exchange, deletion, and insertion of specific data, to find target DNA strings or new genes, and to classify species is becoming a costly and challenging problem for researchers [@rothberg2011integrated; @min2011fast]. All recent sequencing technologies, including Roche/454, Illumina, SOLiD, and Helicos, are able to produce data on the order of giga base-pairs (Gbp) per machine day [@metzker2010sequencing]. However, with the emergence of such enormous quantities of data, even fast digital electronic devices are not effective enough to align capillary reads [@ning2001ssaha; @kent2002blat]. Indeed, today's electronics technology does not permit high analysis rates in sequence matching and information processing, due to the time-consuming nature of serial processing [@tanida1999string; @tanida2000string; @rothberg2011integrated]. To keep pace with the throughput of sequencing technologies, many new alignment algorithms have been developed, but the demand for faster alignment approaches persists. As a result, the necessity of finding a novel implementation to provide high-performance computational systems is undeniable [@mardis2008next; @merkling2005sequence].
The high data throughput, inherent parallelism, broad bandwidth, and tolerance to less precise adjustment of optical computing provide highly efficient devices that can process information with high speed and low energy consumption. It is worth mentioning that visible light in optical computing systems enables information visualization, allowing human operators to carry out genome analysis more effectively. Employing a powerful technique to encode DNA information into an optical image, alongside optical computing capabilities, enables efficient genome analysis [@mardis2008next; @shendure2008next]. While recent implementations were static and relied on printed transparent sheets [@tanida2002spatially; @niita2001genome], herein we theoretically and experimentally present dynamic string data alignment based on a spatially coded moir[é]{} technique [@amidror2000theory; @gabrielyan2007basics] implemented on spatial light modulators (SLMs), which enables one to investigate useful information hidden in genomes. The remainder of the paper is organized as follows. In Section II, the principle of string data matching using the spatially coded technique is explained. In Section III, bar and circular patterns are discussed as effective schemes for string data alignment. In Section IV, the experimental optical architecture and the obtained results are presented to verify the practical feasibility of our proposed patterns, and Section V concludes the paper.

Principles
==========

In this section, the principles of string alignment by the moir[é]{} technique are outlined. Consider two data sequences. The goal of string alignment is to evaluate the similarities and differences between them. In particular, we are interested in distinguishing insertions and deletions of elements in either string with respect to the other. The moir[é]{} technique applies high-speed parallel processing of light to perform the string alignment.
In this approach, the four components of strings, namely $\{{\rm A,G,C,T}\}$, are encoded as $\{1000, 0100, 0010, 0001\}$, respectively. Based on this coding, the strings are spatially coded into images where each component corresponds to four narrow stripes, with one bright stripe as $``1"$ and three dark stripes as $``0"$ (see Fig. \[graphical\]). The coded images are then overlapped with a small relative angle; using this technique, segments of the second string that correlate with various shifts of the first one can evidently be distinguished. Consecutive matched elements appear as a bright line in the observed pattern of the overlapped images. ![Graphical patterns for DNA bases[]{data-label="graphical"}](Positional_codes.pdf){width="2.7in"} ![Spatial code patterns for (a) $S_2$, (b) subsequent shifts of the initial string $S_1$, (c) output pattern obtained by overlapping (a) and (b), (d) output pattern obtained by overlapping $S_3$ and (b).[]{data-label="fig1"}](Figure_1.pdf){width="2.7in"} As an example, consider two strings, $S_1$ of length 40 and $S_2$ of length 20, and suppose we want to search for $S_2$ in $S_1$. Fig. \[fig1\](a) shows $S_{2}=\{ {\rm A C G T A T C C G T A C A G G T C G A A} \}$ encoded with the codes of Fig. \[graphical\], and each row in Fig. \[fig1\](b) shows a subsequent shift of the initial string $S_{1}=\{{\rm T C C G T A C G T A T C C G T A C A G G T C G A A T G C G T A C A T C G A C C T}\}$; for example, the first row shows $S_1(1\!:\!20)$, the second row shows $S_1(2\!:\!21)$, and so on up to the last row. Overlapping Figs. \[fig1\](a) and (b) results in the pattern shown in Fig. \[fig1\](c); the bright line in the fourth row illustrates that a correlation has occurred for a shift of 6, i.e., $S_2$ and $S_1(6\!:\!25)$ are matched.

  DNA bases   A        G        C        T
  ----------- -------- -------- -------- --------
  Type I      $1000$   $0100$   $0010$   $0001$
  Type II     $H00H$   $V0V0$   $0V0V$   $0HH0$

  : Corresponding codes for polarized spatial patterns in Figs.
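The shift-and-overlap matching described above can be simulated directly; the following is a minimal sketch, not the authors' implementation. The row sums at the end anticipate the cylindrical-lens readout discussed in the experimental section, which collapses each row to a single detector value.

```python
# Illustrative sketch of the spatially coded matching: each base maps
# to a 4-slot stripe pattern, row k of the second image holds
# S1(k : k+len(S2)), and overlapping (elementwise AND) with the S2
# pattern leaves a fully bright row only at a matching shift.
import numpy as np

CODE = {"A": [1, 0, 0, 0], "G": [0, 1, 0, 0],
        "C": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def encode(seq):
    return np.array([bit for base in seq for bit in CODE[base]])

def overlap_pattern(s1, s2):
    rows = [encode(s1[k:k + len(s2)]) for k in range(len(s1) - len(s2) + 1)]
    return np.array(rows) * encode(s2)          # elementwise AND

S1 = "TCCGTACGTATCCGTACAGGTCGAATGCGTACATCGACCT"
S2 = "ACGTATCCGTACAGGTCGAA"
pattern = overlap_pattern(S1, S2)
row_sums = pattern.sum(axis=1)                  # cylindrical-lens readout
match_shifts = [k + 1 for k, s in enumerate(row_sums) if s == len(S2)]
print(match_shifts)                             # [6]: a match at shift 6
```

A fully matched row sums to 20 bright slots (one per base), while a random row averages a quarter of that, which is the origin of the SNR estimates given later.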
\[type1\] and \[type2\].[]{data-label="T1"} \
$H$: Horizontal, $V$: Vertical ![Spatial code patterns of (a) $S_1$, (b) $S_2$, and (c) corresponding correlation for type I.](Sim_Type1.pdf){width="3.4in"} \[type1\] ![Spatial code patterns of (a) $S_1$, (b) $S_3$, and (c) corresponding correlation for type II.](Sim_Type2.pdf){width="3.4in"} \[type2\] Insertions and deletions of elements lead to vertical shifts in parts of the bright line in the overlapping pattern. Each break point indicates a location where an insertion or deletion has occurred. Positive and negative vertical shifts correspond to the insertion and deletion of elements, respectively. As an example, consider the string $S_{3}=\{ {\rm A C G T A T\,\textbf{AG}\,C C G T A C A T C G A A} \}$ generated by inserting “AG” between the sixth and seventh elements of $S_{2}$ and deleting the fourteenth and fifteenth elements of $S_{2}$. Figure \[fig1\](d) depicts the output pattern obtained by overlapping the pattern of $S_{3}$ with Fig. \[fig1\](b).

Proposed Methods
================

In this section, we propose several practically feasible moir[é]{} patterns for string data alignment applications. The wave nature of light provides enough degrees of freedom, i.e., amplitude, phase, and polarization manipulation, for sequence data processing. The first coding approach is based on correlation, in which a sequence is simply compared symbol by symbol with another sequence. In DNA sequence data processing, each symbol denotes a DNA base. When two symbols are compared, identical symbols generate a bright spot; hence, a correlated set produces a bright line. This line is fragmented in the case of insertion or deletion, where the vertical distance between the fragments identifies the number of deleted or inserted elements at that location.
In this method, the SNR can easily be calculated: for two independent and identically distributed sequences, the probability of a random symbol match, and hence the number of bright spots relative to a full match, is $0.25$, leading to a $6$ dB SNR. Note that the system's SNR is proportional to the ratio of the bright-line intensity to the average intensity of the other rows. Another coding technique is based on concatenating two subsequent elements, for example $S(i:i+1)$ and $S(i+1:i+2)$, into a group. Subsequent groups share a common element, which ensures an easier detection procedure for insertions and deletions. Coding sequences in overlapped pairs not only increases the SNR but also makes correlated elements more distinguishable, even in the presence of insertions and deletions. In this method, a $12$ dB SNR can be expected, since the probability of a random match for a word of two symbols is $0.0625$.

  DNA bases    Type III     Type IV
  ------------ ------------ --------------------
  ${\rm AA}$   $H0000000$   $1000000000000000$
  ${\rm GA}$   $0H000000$   $0100000000000000$
  ${\rm CA}$   $00H00000$   $0010000000000000$
  ${\rm TA}$   $000H0000$   $0001000000000000$
  ${\rm AG}$   $0000H000$   $0000100000000000$
  ${\rm GG}$   $00000H00$   $0000010000000000$
  ${\rm CG}$   $000000H0$   $0000001000000000$
  ${\rm TG}$   $0000000H$   $0000000100000000$
  ${\rm AC}$   $V0000000$   $0000000010000000$
  ${\rm GC}$   $0V000000$   $0000000001000000$
  ${\rm CC}$   $00V00000$   $0000000000100000$
  ${\rm TC}$   $000V0000$   $0000000000010000$
  ${\rm AT}$   $0000V000$   $0000000000001000$
  ${\rm GT}$   $00000V00$   $0000000000000100$
  ${\rm CT}$   $000000V0$   $0000000000000010$
  ${\rm TT}$   $0000000V$   $0000000000000001$

  : Corresponding codes for spatial patterns in Figs.
\[type3\] and \[type4\].[]{data-label="T2"}

                            Type I                      Type II                       Type III                      Type IV
  ------------------------- --------------------------- ---------------------------- ---------------------------- ---------------------------
  Processing Gain           $N/4$                       $N/4$                        $N/8$                        $N/16$
  SLM Modulation Capacity   Intensity or Polarization   Intensity and Polarization   Intensity and Polarization   Intensity or Polarization
  SNR [$\rm (dB)$]{}        $6.8854$                    $6.4648$                     $12.2260$                    $12.0715$

  : Comparison of the proposed coding types.[]{data-label="T3"}

Bar Pattern
-----------

We examined two different sets of symbols in a bar moir[é]{} pattern. While the first employs pulse position modulation (PPM), the second comprises a set of four orthogonal codes using both intensity and polarization (see Table \[T1\]). Since there is no useful information in shifts that are not an integer multiple of the symbol length, different rows are shifted by an integer multiple of the four slots that form a symbol. This is far more efficient than horizontally tilting the second pattern and is consequently compatible with the finite resolution of the SLM. Simulation results are depicted in Figs. \[type1\] and \[type2\]. Comparing the results, it is clear that type II increases the intensity of both noise and signal but does not improve the SNR: measuring symbol-by-symbol correlation, the SNR does not exceed $6$ dB. In the second approach, the codes in Table \[T2\] are applied, which means that for the tilted pattern different rows are shifted by $8k$ in type III (the word length is eight here) and by $16k$ in type IV, where $k$ is a positive integer. Figs. \[type3\] and \[type4\] illustrate the simulation results. As can be seen, the horizontal straight line is more vivid in types III and IV, since the probability of a random match for a word of two symbols is $0.0625$; therefore, we can expect an SNR of about $12$ dB. Type III requires an SLM with independent intensity and polarization modulation, while type IV requires only intensity or polarization modulation.
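The quoted SNR figures follow directly from the random-match probabilities. A quick illustrative check, assuming the SNR is defined as the ratio of the matched-row intensity to the average off-row intensity:

```python
# For i.i.d. random sequences, the expected brightness of a
# non-matching row is p times that of the fully matched row, where p
# is the random-match probability per symbol (0.25) or per two-symbol
# word (0.0625); the ideal SNR is then 10*log10(1/p).
import math

def expected_snr_db(p_random_match):
    return 10 * math.log10(1 / p_random_match)

print(round(expected_snr_db(0.25), 2))    # 6.02 dB (types I, II)
print(round(expected_snr_db(0.0625), 2))  # 12.04 dB (types III, IV)
```

The values reported in the comparison table, measured for a finite random sequence of length 48, fluctuate around these ideal figures.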
Polarization modulation can easily be converted to intensity modulation via a polarizer. Since the word length of type IV is twice that of type III, for an equal number of SLM surface pixels the processing gain (the number of DNA bases the setup can compare in each run) of type III is twice that of type IV. Types III and IV offer better detection capability in the face of insertions and deletions. Note that each insertion or deletion changes two words; in the case of $n$ consecutive deletions or insertions, $n+1$ words differ from the initial pattern. Moreover, if we increase the word length to code the DNA bases in groups of length $L$, then $L-1$ elements need to be overlapped in order to detect insertions and deletions. When misalignment and other types of error are addressed, the maximum performance of such a system can be achieved. In this case, the number of SLM pixels required to compare two sequences of length $N$ follows Table \[T3\], which also reports the SNR values of the different types for a random sequence of length $48$.

Circular Pattern
----------------

![Spatial code patterns of (a) $S_1$, (b) $S_3$, and (c) corresponding correlation for type III.](Sim_Type3.pdf){width="3.4in"} \[type3\] ![Spatial code patterns of (a) $S_1$, (b) $S_3$, and (c) corresponding correlation for type IV.](Sim_Type4.pdf){width="3.4in"} \[type4\] Optical alignment can be quite problematic when implementing bar patterns. In correlating two bar patterns, the dimensional precision required is about $d/N$, where $d$ is the transverse length of a pixel and $N$ is the total number of vertical pixels on the SLM surface. Circular moir[é]{} patterns, on the other hand, are substantially easier to adjust in experimental setups, since only the centers of the circles have to be aligned. Moreover, they are insensitive to rotation and to divergence during free-space propagation. Although the circular pattern can only process sparse sequence data, it reduces the optical alignment complexity when the transceivers are distant.
![Design of a sector for the circular moir[é]{} pattern. $\Delta r_{1}$ is chosen such that $r_{0} \Delta \theta \Delta r_{0}=r_{1} \Delta \theta \Delta r_{1}$. []{data-label="sector"}](Sector.pdf){width="1.8in"} ![Images of the first, second, and output patterns obtained from simulation ((a), (b), and (c), respectively) and experiment ((d), (e), and (f), respectively) in the case of exact matching. ](EXP3){width="3.4in"} \[Circular\] Our proposed approach is based on encoding the strings into circular images. In this method, narrow sectors are applied instead of rectangular stripes, as shown in Fig. \[Circular\](a). To realize the shifted versions of the other string, we use a curved pattern, as depicted in Fig. \[Circular\](b). Each curved sector in this pattern is designed such that the areas of its segments at different radii are approximately equal. Defining $r_{0}$ and $\Delta r_{0}$ as the initial values for the first segment of the curved sector, we have (see Fig. \[sector\]): $$r_{0} \Delta \theta \Delta r_{0}=(r_{0} +\Delta r_{0})\Delta \theta \Delta r_{1}.$$ From the above equation, $\Delta r_{1}$ is given by $$\Delta r_{1}=\frac{r_{0}\Delta r_{0}}{r_{0} +\Delta r_{0}}.$$ Furthermore, it is obvious from Fig. \[sector\] that $r_{1}=r_{0} +\Delta r_{0}$. Hence, we can define the following recursive relations to obtain the $\Delta r_{i}$'s and $r_{i}$'s: $$\begin{aligned} \Delta r_{i}&=\frac{r_{i-1}\Delta r_{i-1}}{r_{i-1} +\Delta r_{i-1}},\nonumber\\ r_{i}&=r_{i-1} +\Delta r_{i-1},\end{aligned}$$ respectively. Fig. \[Circular\](c) depicts the simulated and experimental patterns after overlapping the two images of Figs. \[Circular\](a) and (b). As can be seen, a bright ring appears at the intersection of the matched elements.

Experimental Setup and Results
==============================

In this paper, the optical architecture is implemented by two separate programmable reflective SLMs.
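The equal-area recursion above can be checked with a minimal numerical sketch (illustrative only; the starting values $r_0=1$ and $\Delta r_0=0.1$ are assumptions):

```python
# Each successive ring width dr_i is chosen so that every segment
# r_i * dtheta * dr_i of a curved sector has the same area; dtheta is
# constant and factors out of the comparison.
def ring_radii(r0, dr0, n):
    radii, widths = [r0], [dr0]
    for _ in range(n - 1):
        r, dr = radii[-1], widths[-1]
        widths.append(r * dr / (r + dr))  # dr_i = r_{i-1} dr_{i-1} / (r_{i-1} + dr_{i-1})
        radii.append(r + dr)              # r_i  = r_{i-1} + dr_{i-1}
    return radii, widths

radii, widths = ring_radii(r0=1.0, dr0=0.1, n=10)
areas = [r * dr for r, dr in zip(radii, widths)]
# every segment area equals r0 * dr0, as required by the construction
```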
The pixel pitch of the liquid-crystal display of each SLM is $20~\rm{ \mu m}$, and the pixel count is $1280\times768$. The more elements a pattern comprises, the lower the resolution achieved in the output plane. Further details about the equipment can be found in Table \[T4\]. Fig. \[Setup\] shows the architecture of the multiplier realizing the proposed optical processing method for genome analysis based on the spatially coded technique. The output light of a spatially coherent source, such as a laser diode, or a non-coherent source, such as an LED, is collimated and then impinges on the first SLM, which contains $S_1$. A linear polarizer in front of the first SLM sets the incoming polarization state. Since the laser emits elliptically polarized light, the intensity depends on the states of both the SLM and the first polarizer. To remove this ambiguity, the intensity of the light leaving the first polarizer has to be set to be independent of the angle. The light reflected from the first SLM meets the second SLM, which implements $S_2$. Since the horizontal tilt angle is small ($<5^\circ$) for a reflective SLM, the two SLMs would have to be placed at a large distance to realize an appropriate setup. Consequently, the high-resolution pattern on $\rm{SLM_1}$ would be degraded, as it is convolved with the free-space Green's function. Using a lens at a distance of twice the focal length ($2f$) between the two SLMs makes it possible to obtain the exact sharp pattern of $S_1$ on $\rm{SLM_2}$. Moreover, fine adjustment of the first polarizer and the analyzer ensures maximum contrast in the plane of $\rm{SLM_2}$. Each SLM consists of rectangular pixels, where each pixel corresponds to a programmed binary element.
A pixel with binary element “$1$” allows light to reflect with the same impinging polarization, ideally without any attenuation, corresponding to a white stripe, while a pixel with binary element “$0$” rotates the incoming light polarization by $90^\circ$, corresponding to a black stripe. ![(a) Schematic block diagram and (b) experimental setup for the proposed optical sequence data processing.](Experiment_Assembled.pdf){width="3.4in"} \[Setup\] ![Image of (a) $S_1$, (b) $S_2$, (c) output pattern achieved via simulation. Photograph of (d) $S_1$, (e) $S_2$, (f) output pattern obtained from experiment.](EXP1){width="3.4in"} \[EXP1\] ![Image of (a) $S_1$, (b) $S_3$, (c) output pattern achieved via simulation. Photograph of (d) $S_1$, (e) $S_3$, (f) output pattern obtained from experiment.](EXP2){width="3.4in"} \[EXP2\] ![Transformed output patterns of Figs. \[EXP1\](f) and \[EXP2\](f) on the display using the cylindrical lens.](Cylindrical.pdf){width="3.4in"} \[Cylindrical\]

  Equipment                   Description
  --------------------------- -----------------------------------------------
  Reflective SLM              Holoeye, LC R-720 and LC R-2500
  Biconvex lens               Thorlabs, $f=~75~{\rm and}~100$ mm, $d=40$ mm
  Laser diode                 $1$ mW, Green ($532$ nm), Polarized
  CCD camera                  Tevicom
  Achromatic objective lens   Thorlabs, $NA=0.25$
  Cylindrical lens            $f=75$ mm, $h=50.8$ mm, $ l=53 $ mm
  Holder                      Standa
  Analyzer and polarizer      Wire grid

  : Optical architecture characterization[]{data-label="T4"}

A two-dimensional array of photodetectors can be employed to capture the output pattern, after which digital processing can be performed by a host computer to extract precise matches. Alternatively, the output pattern can be analyzed by visual inspection or with a CCD camera. To verify our proposed method, we first show bar-pattern alignment between two simulated DNA sequences; the circular pattern is then demonstrated as our newly proposed encoding. The one-dimensional strings to be aligned are illustrated in Figs.
\[EXP1\] and \[EXP2\], in which $S_1$, $S_2$, and $S_3$ were introduced earlier. To read out the string alignment more directly, a cylindrical lens can be employed between the third polarizer and the output display. It is well known that such a lens transforms a plane wave into an ultra-thin line. As a result, each horizontal bright line in the output pattern right behind the lens is mapped to a luminous point on the display, which enables us to use a simple one-dimensional array of photodetectors to detect the occurrence of exact matching and the number of deleted or inserted elements. Figs. \[Cylindrical\](a) and \[Cylindrical\](b) respectively illustrate the transformed versions of the output patterns of Figs. \[EXP1\](f) and \[EXP2\](f) at the focal plane of the cylindrical lens. Additionally, the simulated and experimental results for the circular patterns presented in Fig. \[Circular\] are in good agreement.

Conclusion
==========

In conclusion, a simple and practical method based on a spatially coded moir[é]{} matching technique has been proposed for string alignment processing. Easy interpretation and inherent parallelism with almost real-time processing are the main features of our approach, which is compatible with digital devices. The processing gain and SNR of the proposed patterns, i.e., the bar and circular patterns, have been calculated numerically to show the effectiveness of our method. Moreover, a preprocessing stage which remarkably decreases the post-processing time needed for the interpretation of the output pattern has been introduced. The capability of our proposed method in DNA sequence matching has been shown via simulation. Finally, experimental results verify the performance of the method in genomics processing applications based on optical computing.

J. M.
Kinser, “Mining DNA data in an efficient 2D optical architecture,” in *2000 International Topical Meeting on Optics in Computing (OC2000)*. International Society for Optics and Photonics, 2000, pp. 104–110.

J. Shendure and H. Ji, “Next-generation DNA sequencing,” *Nature Biotechnology*, vol. 26, no. 10, pp. 1135–1145, 2008.

A. C. Rajan, M. R. Rezapour, J. Yun, Y. Cho, W. J. Cho, S. K. Min, G. Lee, and K. S. Kim, “Two dimensional molecular electronics spectroscopy for molecular fingerprinting, DNA sequencing, and cancerous DNA recognition,” *ACS Nano*, vol. 8, no. 2, pp. 1827–1833, 2014.

J. Eid, A. Fehr, J. Gray, K. Luong, J. Lyle, G. Otto, P. Peluso, D. Rank, P. Baybayan, B. Bettman *et al.*, “Real-time DNA sequencing from single polymerase molecules,” *Science*, vol. 323, no. 5910, pp. 133–138, 2009.

J. M. Rothberg, W. Hinz, T. M. Rearick, J. Schultz, W. Mileski, M. Davey, J. H. Leamon, K. Johnson, M. J. Milgrew, M. Edwards *et al.*, “An integrated semiconductor device enabling non-optical genome sequencing,” *Nature*, vol. 475, no. 7356, pp. 348–352, 2011.

S. K. Min, W. Y. Kim, Y. Cho, and K. S. Kim, “Fast DNA sequencing with a graphene-based nanochannel device,” *Nature Nanotechnology*, vol. 6, no. 3, pp. 162–165, 2011.

M. L. Metzker, “Sequencing technologies—the next generation,” *Nature Reviews Genetics*, vol. 11, no. 1, pp. 31–46, 2010.

Z. Ning, A. J. Cox, and J. C. Mullikin, “SSAHA: a fast search method for large DNA databases,” *Genome Research*, vol. 11, no. 10, pp. 1725–1729, 2001.

W. J. Kent, “BLAT—the BLAST-like alignment tool,” *Genome Research*, vol. 12, no. 4, pp. 656–664, 2002.

J. Tanida, “String data alignment by a spatial coding and moir[é]{} technique,” *Optics Letters*, vol. 24, no. 23, pp. 1681–1683, 1999.

J. Tanida and K. Nitta, “String data matching based on a moir[é]{} technique using 1D spatial coded patterns,” in *2000 International Topical Meeting on Optics in Computing (OC2000)*. International Society for Optics and Photonics, 2000, pp. 16–23.

E. R. Mardis, “Next-generation DNA sequencing methods,” *Annu. Rev. Genomics Hum. Genet.*, vol. 9, pp. 387–402, 2008.

J. L. Merkling, “Sequence matching in holographically stored genetic strings,” Ph.D. dissertation, Texas Tech University, 2005.

J. Tanida, K. Nitta, and A. Yahata, “Spatially coded moir[é]{} matching technique for genome information visualization,” in *Photonics Asia 2002*. International Society for Optics and Photonics, 2002, pp. 26–33.

K. Niita, H. Togo, A. Yahata, and J. Tanida, “Genome information analysis using spatial coded moir[é]{} technique,” in *Lasers and Electro-Optics, 2001. CLEO/Pacific Rim 2001. The 4th Pacific Rim Conference on*, vol. 2. IEEE, 2001, pp. II–II.

I. Amidror, *The Theory of the Moir[é]{} Phenomenon*. Springer, 2000.

E. Gabrielyan, “The basics of line moir[é]{} patterns and optical speedup,” *arXiv preprint physics/0703098*, 2007.
--- author: - 'J. Tuziemski' - 'J. K. Korbicz' title: Dynamical Objectivity in Quantum Brownian Motion ---

Introduction
============

Reconciliation of quantum theory with the classical world of everyday experience has been one of the central problems in our understanding of Nature [@Bohr; @decoh], touching such deep questions as whether there is any ’reality’ out there [@Fine]. One of its aspects is how to explain the objective character of our world in terms of fragile quantum systems, inevitably disturbed by measurements. As the quantum state is to date our most fundamental description of Nature, it is natural to look for an explanation at this level. Indeed, specific quantum state structures—*spectrum broadcast structures (SBS)* [@object; @sfera]—have recently been identified as responsible for the perceived objectivity, suggesting that the latter is, in fact, a property of quantum states. Building on the *quantum Darwinism* idea [@ZurekNature; @decoh]—a realistic form of decoherence theory [@decoh] where the system of interest $S$ interacts with multiple environments $E_1,\dots,E_N$ and observers acquire information about $S$ through them, it has been shown in [@object] (see also [@generic]), in a model- and dynamics-independent way, that the only states, in a certain sense, that encode objective states of the system are precisely the SBS: $$\begin{aligned} \label{br2} &&\varrho_{S:fE}=\sum_i p_i {| x_i \rangle}{\langle x_i |}\otimes\varrho^{E_1}_i\otimes\cdots\otimes\varrho^{E_{fN}}_i,\\ &&\varrho^{E_k}_i\varrho^{E_k}_{i'\neq i}=0,\end{aligned}$$ where $fE$ is the observed portion of the environment $E$, $\{{| x_i \rangle}\}$ is a pointer basis [@ZurekPRD], the $p_i$ are pointer probabilities, and $\varrho^{E_1}_i,\dots,\varrho^{E_{fN}}_i$ are states of $E_1,\dots,E_{fN}$ with orthogonal supports. As is easy to see from (\[br2\]), by properly measuring their portions of the environment (projecting on the supports of the $\varrho^{E_k}_i$), all observers will obtain the same result $i$ without disturbing either the system $S$ or each other.
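As an illustrative numerical check (not part of the original paper), one can construct a toy SBS for a qubit system with two single-qubit environment fragments and verify the two operational signatures just described: all observers obtain the same outcome, and a non-selective measurement on one fragment leaves the state undisturbed. All dimensions and probabilities here are arbitrary choices.

```python
# Toy spectrum broadcast structure:
#   rho = sum_i p_i |x_i><x_i| (x) rho^E1_i (x) rho^E2_i
# with orthogonally supported environment states.
import numpy as np
from functools import reduce

def kron(*ops):
    return reduce(np.kron, ops)

ket = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
proj = [np.outer(k, k) for k in ket]
p = [0.3, 0.7]
I = np.eye(2)

# SBS state on S (x) E1 (x) E2 with pointer basis |x_i> = |i>
rho = sum(p[i] * kron(proj[i], proj[i], proj[i]) for i in range(2))

# (1) Observers of E1 and E2 agree with probability 1
agree = sum(np.trace(rho @ kron(I, proj[j], proj[j])).real for j in range(2))

# (2) A non-selective measurement on E1 leaves the state untouched
rho_after = sum(kron(I, proj[j], I) @ rho @ kron(I, proj[j], I) for j in range(2))

print(np.isclose(agree, 1.0), np.allclose(rho_after, rho))
```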
Since “seeing the same by many” without disturbance arguably defines a form of objectivity [@ZurekNature; @object], the states ${| x_i \rangle}$ thus become objective in this sense. Our approach is of course connected to the earlier studies based on information redundancy [@ZurekNature], but here we show it directly at the fundamental level of states, rather than using information-theoretical conditions, which so far are known only to be necessary [@object]. The process of formation of an SBS [@sfera] is a weaker form [@my] of quantum state broadcasting [@broadcasting; @CQ]. A question now arises whether such structures are indeed formed in realistic models of decoherence. Recently [@sfera], their formation was shown in the emblematic model of decoherence with scattering-type interactions: a small dielectric sphere illuminated by photons; however, the resulting broadcast structure, and hence the objective states, were static (they described a fixed position), as the central system had no self-dynamics. In this work we study a fully dynamical model, where both the system and the environment have their own dynamics, and report the formation of objectively existing states of motion for the class of harmonic interactions, fundamental to all of physics. In one of the universal models of decoherence—Quantum Brownian Motion (QBM) [@decoh; @Ullersma; @Petruccione], which describes a central oscillator $S$ linearly coupled to a bath $E$ of oscillators—we show the formation, in the massive-central-system limit, of novel dynamical spectrum broadcast structures (\[br2\]) with time-evolving pointer states ${| x_i(t) \rangle}$. Due to the developed correlations, information about this evolution is redundantly encoded in the environment (in time-evolving, mutually orthogonal states $\varrho^{E_k}_i(t)$), even if the environment is noisy, and in this sense it becomes objective [@ZurekNature; @object].
We model the noise as thermal noise (with an extension to arbitrary single-mode Gaussian noise) and numerically study the effect as a function of temperature, showing a certain robustness to noise. Surprisingly, despite QBM being probably the most studied model of decoherence for decades [@decoh; @Ullersma; @Petruccione], these state structures have not been noticed before (the previous studies [@qbm_Zurek; @Augusto] used information-theoretical conditions, so far known only to be necessary, with their sufficiency being open [@object], and the environment was pure). Moreover, in contrast to the standard approaches [@decoh; @Petruccione; @qbm_Zurek], we do not use the continuous approximation of the environment, keeping it discrete and thus deriving objectivity in a more fundamental setup.

The model {#model}
=========

The central system $S$ is a harmonic oscillator of mass $M$ and frequency $\Omega$, linearly coupled to the environment $E$—a bath of $N$ oscillators, each of mass $m_k$ and frequency $\omega_k$, $k=1,\dots, N$. The total Hamiltonian is [@decoh; @Petruccione]: $$\begin{aligned} \label{H} &&\hat H=\frac{\hat P^2}{2M}+\frac{M\Omega^2\hat X^2}{2}+\sum_{k=1}^N\left(\frac{\hat p_k^2}{2m_k}+\frac{m_k\omega_k^2\hat x_k^2}{2}\right)+\nonumber\\ &&+\hat X\sum_{k=1}^N C_k\hat x_k,\end{aligned}$$ in units $\hbar=1$; $\hat X, \hat P$ are the system's variables, $\hat x_k, \hat p_k$ describe the $k$-th environmental oscillator, and the $C_k$ are coupling constants. We denote the system's self-Hamiltonian by $\hat H_S$ and the $k$-th environmental one by $\hat H_k$. Our central interest is the information transfer from the system to the environment. We will assume [@qbm_Zurek] that the central system is very massive, so that it is effectively macroscopic, and will neglect all back-reaction of the environment (the non-dissipative regime). We note that this is exactly the opposite regime to the one used in the more familiar Born-Markov approximation and quantum master equation approaches to QBM [@decoh].
Unlike in the usual approaches [@qbm_Zurek; @Petruccione], we also do not pass here to the continuous limit and to a continuous spectral density function, working all the time with the discrete environment. To make the decoherence possible, we assume a random distribution of $\omega_k$’s (cf. [@Zurek_spins]). This choice plays the role of a spectral density, but we keep it discrete. Furthermore, we will work in the off-resonant regime: $$\label{offres} \omega_k\ll\Omega\ \text{ or }\ \omega_k\gg\Omega\quad \text{for all} \ k,$$ so that, as will become clear later, a single environmental oscillator alone will not decohere the central system [@qbm_Zurek; @Augusto; @Petruccione]. Albeit possible, that would be a somewhat trivial situation, as we are interested here in a regime where a single environment carries a vanishingly small amount of information about the system [@comm]. We will thus study collective effects and, following [@object; @sfera], we will group the environments into macro-fractions—fragments scaling with the total number of oscillators $N$, and study their information content. The dynamics {#dynamics} ============ Although the exact solution of the model is possible as the Hamiltonian (\[H\]) is quadratic [@Ullersma], for the purpose of this study we will use the approximate method of Refs. [@qbm_Zurek; @Augusto], taking advantage of the assumed high mass of the central system (a type of non-adiabatic Born-Oppenheimer approximation with classical trajectories; see e.g. [@NBO]). In this approximation, the system $S$ evolves according to its self-Hamiltonian $\hat H_S$, with this evolution further approximated using classical trajectories $X(t;X_0)$, while the environment is driven along each of these trajectories.
The resulting state is: $$\label{final} {| \Psi_{S:E} \rangle}=\int dX_0 \phi_0(X_0) e^{-i\hat H_St}|X_0\rangle\otimes \hat U_{E}(X(t;X_0)){| \psi_0 \rangle},$$ where $\hat U_{E}(X(t;X_0))$ is the evolution generated by $\hat H_{E}(X)\equiv\sum_k(\hat H_k+C_kX\hat x_k)$ for the trajectory $X(t;X_0)$, and ${| \phi_0 \rangle}$, ${| \psi_0 \rangle}$ are the initial states of $S$ and $E$ respectively. Formally, (\[final\]) is obtained by a controlled-unitary evolution [@sfera]: $$\label{USE} \hat U_{S:E}(t)=\int dX_0 e^{-i\hat H_St}{| X_0 \rangle}{\langle X_0 |}\otimes \hat U_{E}(X(t;X_0)),$$ acting on the initial state ${| \phi_0 \rangle}{| \psi_0 \rangle}$. Since $\hat H_S$ is quadratic, the trajectory approximation is actually exact (the semi-classical propagator is exact). For simplicity, we will limit ourselves to trajectories obtained when the system is initially in the squeezed vacuum state (cf. [@qbm_Zurek; @Augusto]): ${| \phi_0 \rangle}=\hat S(r){| 0 \rangle}$, where $\hat S(r)\equiv e^{r(\hat a ^2-\hat a^{\dagger 2})/2}$. Especially interesting is a highly momentum-squeezed state, due to its large coherences in the position. We may then assume that the initial velocity of each trajectory is zero, so that $X(t;X_0)=X_0\cos(\Omega t)$. The analysis of high initial position squeezing, for which the trajectories are of the form $X(t;X_0)=X_0\sin(\Omega t)$, will be analogous.
We solve for $\hat U_{E}(X(t;X_0))$ using $\hat U_{E}(X(t;X_0))=\lim_{n\to\infty} \big(\prod_{r=1}^n \exp[-i\hat H_{E}(t_r)\Delta t]\big)$, $\Delta t\equiv t/n$, $t_r\equiv r\Delta t$ and obtain: $$\begin{aligned} &&\hat U_{E}(X(t;X_0))=\bigotimes_{k=1}^N \hat U_k(X_0;t),\label{Ue}\\ &&\hat U_k(X_0;t)\equiv e^{i\zeta_k(t)X_0^2}e^{-i\hat H_k t}\hat D\left(\alpha_k(t)X_0\right),\label{UI}\end{aligned}$$ so that (\[USE\]) has the following form: $$\begin{aligned} \label{USE2} \hat U_{S:E}(t)=&&e^{-i\hat H_St}\otimes e^{-i\sum_k\hat H_k t}\times\\ &&\times\int dX_0 {| X_0 \rangle}{\langle X_0 |}\otimes \bigotimes_{k=1}^N e^{i\zeta_k(t)X_0^2}\hat D\left(\alpha_k(t)X_0\right).\nonumber\end{aligned}$$ Here $\hat D(\alpha)\equiv e^{\alpha \hat a^\dagger-\alpha^*\hat a}$ is the displacement operator [@Perelomov], $\hat a^\dagger,\hat a$ are the creation and annihilation operators, $\zeta_k(t)$ is a dynamical phase (as we will show, irrelevant for our calculations), and: $$\begin{aligned} \alpha_k(t)\equiv -\frac{C_k}{2\sqrt{2m_k\omega_k}}\left[\frac{e^{i(\omega_k+\Omega)t}-1}{\omega_k+\Omega}+ \frac{e^{i(\omega_k-\Omega)t}-1}{\omega_k-\Omega}\right]\label{ak}\end{aligned}$$ for the momentum squeezing and: $$\begin{aligned} \alpha_k(t)\equiv -\frac{C_k}{2i\sqrt{2m_k\omega_k}}\left[\frac{e^{i(\omega_k+\Omega)t}-1}{\omega_k+\Omega}- \frac{e^{i(\omega_k-\Omega)t}-1}{\omega_k-\Omega}\right]\label{ak_p}\end{aligned}$$ for the position squeezing. Dynamical Spectrum Broadcast Structure ====================================== The formation of SBS (\[br2\]) is equivalent to: (i) decoherence and (ii) perfect distinguishability of the post-interaction environmental states [@sfera].
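As a sanity check, the modulus squared of the amplitude (\[ak\]) can be compared numerically with the closed form quoted later in (\[aT\]). A minimal sketch in Python, with illustrative dimensionless parameters (not the physical values used in the numerical section):

```python
import numpy as np

# Illustrative dimensionless parameters (assumed, for the cross-check only)
C, m, omega, Omega = 1.0, 1.0, 4.0, 0.3

def alpha_momentum(t):
    """alpha_k(t) of Eq. (ak), momentum-squeezed case."""
    return -C / (2.0 * np.sqrt(2.0 * m * omega)) * (
        (np.exp(1j * (omega + Omega) * t) - 1.0) / (omega + Omega)
        + (np.exp(1j * (omega - Omega) * t) - 1.0) / (omega - Omega))

def alpha_sq_closed(t):
    """The closed form of |alpha_k(t)|^2 quoted later as Eq. (aT)."""
    pref = C ** 2 * omega / (2.0 * m * (omega ** 2 - Omega ** 2) ** 2)
    return pref * ((np.cos(omega * t) - np.cos(Omega * t)) ** 2
                   + (np.sin(omega * t) - (Omega / omega) * np.sin(Omega * t)) ** 2)

t = np.linspace(0.0, 50.0, 5001)
# the two expressions agree identically in t
assert np.allclose(np.abs(alpha_momentum(t)) ** 2, alpha_sq_closed(t))
```

The agreement is exact (up to floating-point error), since (\[aT\]) is just the algebraic expansion of $|\alpha_k(t)|^2$ from (\[ak\]).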
We study the evolved $S:E$ state under the approximations described in the previous Section and after tracing over a fraction $(1-f)E$, $f\in(0,1)$, of the environment that passes unobserved and is necessary for the decoherence: $\varrho_{S:fE}(t)\equiv tr_{(1-f)E}\varrho_{S:E}(t)$, $\varrho_{S:E}(t)\equiv \hat U_{S:E}(t)({| \phi_0 \rangle}{\langle \phi_0 |}\otimes\bigotimes_k\varrho_{0k})\hat U_{S:E}(t)^\dagger$. We assume the environment to be initially in a thermal state, so that all $\varrho_{0k}$’s are thermal states with the same temperature $T$ (later we will generalize to arbitrary single-mode Gaussian states). Although (\[USE\]) is formally written with a continuous distribution of $X_0$, it in fact stands for a limit of finite divisions $\{\Delta_i\}$ of the real line $\mathbb R$, with ${| X_0 \rangle} {\langle X_0 |}$ approximated by orthogonal projectors $\hat \Pi_{\Delta}$ on the intervals $\Delta$ (see e.g. [@GalindoPascual]). From (\[USE\]-\[UI\]) we obtain: $$\begin{aligned} &&\varrho_{S:fE}(t)=\sum_\Delta e^{-i\hat H_{S}t}\hat\Pi_\Delta{| \phi_0 \rangle}{\langle \phi_0 |}\hat\Pi_\Delta e^{i\hat H_{S}t}\otimes\bigotimes_{k=1}^{fN} \varrho_k(X_\Delta;t)\nonumber\\ && +\sum_{\Delta\neq\Delta'} \Gamma_{X_\Delta,X_{\Delta'}}(t)\, e^{-i\hat H_St}\hat\Pi_\Delta{| \phi_0 \rangle}{\langle \phi_0 |}\hat\Pi_{\Delta'}e^{i\hat H_St}\otimes \label{mama}\\ &&\otimes\bigotimes_{k=1}^{fN}\hat U_k(X_\Delta;t) \varrho_{0k}\hat U_k(X_{\Delta'};t)^\dagger,\nonumber\end{aligned}$$ where $fN$ denotes the number of observed oscillators, $X_\Delta$ is some position within $\Delta$, and: $$\begin{aligned} \label{rI} &&\varrho_k(X;t)\equiv \hat U_k(X;t) \varrho_{0k}\hat U_k(X;t)^\dagger,\\ &&\Gamma_{X,X'}(t)\equiv \prod_{k\in (1-f)E}tr\left[\hat U_k(X;t) \varrho_{0k}\hat U_k(X';t)^\dagger\right],\label{G}\end{aligned}$$ the latter being the decoherence factor due to the traced fraction $(1-f)E$ of the environment (for compactness we denote the system’s initial position by $X$ rather than $X_0$). It governs the vanishing of the off-diagonal part in (\[mama\]) in the trace-norm [@sfera].
A closed formula for $|\Gamma_{X,X'}(t)|$ for general initial states $\varrho_{0k}$ is possible, using the fact [@Prep] that one can always write $\varrho_{0k}=(1/\pi)\int d^2\alpha P_k(\alpha) {| \alpha \rangle}{\langle \alpha |}$, where ${| \alpha \rangle}$ are the usual coherent states [@Perelomov] and $P_k(\alpha)$ is in general a distributional Glauber-Sudarshan $P$-representation: $$\begin{aligned} &&\left| \Gamma_{X,X'}(t)\right|=\prod_{k\in (1-f)E}e^{-\frac{|\alpha_k(t)|^2}{2}(X-X')^2}\times\nonumber\\ &&\left|\int \frac{dqdp}{\pi}P_k(q,p)e^{2i(X-X')\left[q\text{Im}\alpha_k(t)-p\text{Re}\alpha_k(t)\right]}\right|\end{aligned}$$ (phases $\zeta_k(t)$, cf. (\[UI\]), cancel due to the modulus). Here: $$\begin{aligned} |\alpha_k(t)|^2&=&\frac{C_k^2\omega_k}{2m_k(\omega_k^2-\Omega^2)^2} \bigg[\left(\cos\omega_kt-\cos\Omega t\right)^2\nonumber\\ &&+\left(\sin\omega_kt-\frac{\Omega}{\omega_k}\sin\Omega t\right)^2\bigg]\label{aT}\end{aligned}$$ for an initial momentum squeezed state of $S$ (cf. (\[ak\])). For thermal states at temperature $T$, $P_k(q,p)=(1/\bar n_k)e^{-(q^2+p^2)/\bar n_k}$, $\bar n_k=1/(e^{\beta\omega_k}-1)$, $\beta\equiv1/k_BT$, and the corresponding decoherence factor is given by [@Petruccione]: $$\label{GT} \left| \Gamma_{X,X'}(t)\right|=\prod_{k\in(1-f)E}\exp\left[-\frac{(X-X')^2}{2}|\alpha_k(t)|^2\,\text{cth}\left(\frac{\beta\omega_k}{2}\right)\right],$$ where $\text{cth}(\cdot)$ is the hyperbolic cotangent. From (\[aT\]) it is clear that bands near the resonant mode $\omega_k\approx \Omega$ would be enough to effectively decohere the system [@qbm_Zurek; @Augusto]. But here we want to study the opposite, more subtle, regime where a single mode has a very small influence on the system’s coherence. This motivates the condition (\[offres\]). Due to the discrete and random $\omega_k$’s, $\left| \Gamma_{X,X'}(t)\right|$ is in our study an almost periodic function of time [@ap]. We analyze it later. Next, we turn to the diagonal part in (\[mama\]), reverting to the continuum limit.
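The suppression of the off-diagonal part by (\[GT\]) can be illustrated with a short numerical sketch (dimensionless, illustrative parameters, with $C_k=m_k=1$; since every thermal factor is at most one, tracing out more oscillators can only suppress $|\Gamma_{X,X'}(t)|$ further):

```python
import numpy as np

rng = np.random.default_rng(0)
Omega, beta, dX = 0.3, 1.0, 1.0           # illustrative, dimensionless values
omegas = rng.uniform(3.0, 6.0, 300)       # off-resonant band, cf. Eq. (offres)

def alpha_sq(w, t):
    # |alpha_k(t)|^2 of Eq. (aT), with C_k = m_k = 1 for simplicity
    return w / (2.0 * (w ** 2 - Omega ** 2) ** 2) * (
        (np.cos(w * t) - np.cos(Omega * t)) ** 2
        + (np.sin(w * t) - (Omega / w) * np.sin(Omega * t)) ** 2)

def gamma(n, t):
    """|Gamma_{X,X'}(t)| of Eq. (GT) for the first n traced-out oscillators."""
    cth = 1.0 / np.tanh(beta * omegas[:n] / 2.0)
    return np.exp(-0.5 * dX ** 2 * np.sum(alpha_sq(omegas[:n], t) * cth))

# each thermal factor is <= 1, so a larger traced fraction decoheres at least as strongly
assert 0.0 <= gamma(300, 10.0) <= gamma(30, 10.0) <= 1.0
```

Only the qualitative monotonicity in the size of the traced fraction is asserted; the physical parameter values of the next section would require restoring the units and masses $m_k$.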
We group the observed environment $fE$ into $\mathcal M$ macro-fractions of an equal size of $fN/\mathcal M$ oscillators each [@object; @sfera] and show that there is a regime where the states of each macro-fraction (cf. (\[rI\])) $\varrho_{mac}(X;t)\equiv\bigotimes_{k\in mac}\varrho_k(X;t)$ become perfectly distinguishable for different $X$ ($k\in mac$ means $k$ running through the oscillators in a given macro-fraction $mac$). We use the generalized overlap [@Fuchs]: $$\label{B} B(\varrho_1,\varrho_2)\equiv tr\sqrt{\sqrt{\varrho_1}\,\varrho_2\sqrt{\varrho_1}}$$ as the most convenient measure of distinguishability (cf. (\[br2\])): $\varrho_1$ and $\varrho_2$ are perfectly distinguishable, $\varrho_1\varrho_2=0$, if and only if $B(\varrho_1,\varrho_2)=0$. A calculation for thermal $\varrho_{0k}$’s gives (see Appendix \[genoverlp\]): $$\label{BT} B^{mac}_{X,X'}(t)=\prod_{k\in mac}\exp\left[-\frac{(X-X')^2}{2}|\alpha_k(t)|^2\,\text{th}\left(\frac{\beta\omega_k}{2}\right)\right],$$ where $B^{mac}_{X,X'}(t)\equiv B[\varrho_{mac}(X;t),\varrho_{mac}(X';t)]$ measures the distinguishability of the system’s initial positions $X$, $X'$ as recorded into macro-fractions. Note, however, that the states $\varrho_{mac}(X;t)$ depend not only on $X$, but on the whole classical motion through (\[USE\]). From (\[GT\],\[BT\]), $\lim_{T\to\infty}|\Gamma_{X,X'}(t)|=0$, i.e. hot environments decohere the central system better, but as $\lim_{T\to\infty}B^{mac}_{X,X'}(t)=1$ they are unable to discriminate its positions, irrespective of the observed macro-fraction size—hot environments are too noisy (the initial states $\varrho_{0k}$ are too close to the maximally mixed state) to store any information (cf. (\[rI\])). Note that the factor $\text{th}(\beta\omega_k/2)$, appearing in (\[BT\]) and, inverted, in (\[GT\]), is nothing else but the purity $tr(\varrho_{0k}^2)$. Numerical analysis ================== We first analyze the case when the system $S$ is initially in a momentum squeezed state. Both $|\Gamma_{X,X'}(t)|$ and $B^{mac}_{X,X'}(t)$ depend on the same almost periodic function of time (\[aT\]), too complicated for an immediate analytical study.
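The single-oscillator factor of (\[BT\]) can be cross-checked independently of the Appendix, by computing the generalized overlap (\[B\]) between a thermal state and its displaced copy directly in a truncated Fock basis. A sketch (the cutoff and parameter values are illustrative assumptions):

```python
import numpy as np

N, nbar, eta = 60, 0.5, 0.3                          # Fock cutoff, mean photon number, displacement
a = np.diag(np.sqrt(np.arange(1, N)), 1)             # annihilation operator, truncated Fock basis
rho = np.diag((nbar / (nbar + 1.0)) ** np.arange(N)) / (nbar + 1.0)   # thermal state

# displacement D(eta) = exp(eta a^dag - eta^* a), via eigendecomposition of the Hermitian i*(...)
K = eta * a.conj().T - np.conj(eta) * a
w_, V = np.linalg.eigh(1j * K)
D = V @ np.diag(np.exp(-1j * w_)) @ V.conj().T

sq = np.sqrt(rho)                                    # rho is diagonal with nonnegative entries
M = sq @ (D @ rho @ D.conj().T) @ sq                 # Hermitian, positive semi-definite
B_num = np.sum(np.sqrt(np.clip(np.linalg.eigvalsh(M), 0.0, None)))

# closed form: B = exp(-|eta|^2 th(beta*omega/2)/2), with th(beta*omega/2) = 1/(2*nbar + 1)
B_formula = np.exp(-abs(eta) ** 2 / (2.0 * (2.0 * nbar + 1.0)))
assert abs(B_num - B_formula) < 1e-4
```

Here $\eta$ plays the role of $\eta_t=\alpha_k(t)(X-X')$ from the Appendix; the truncation error is negligible for these small values of $\bar n$ and $\eta$.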
In this work we analyze it numerically. We set: $M=10^{-5}\,$kg, $\Omega=3 \times 10^{8}\,$s$^{-1}$, $\omega_k$’s independently, identically and uniformly distributed in the interval $3\times 10^{9}\dots 6 \times 10^{9}\,$s$^{-1}$ to satisfy (\[offres\]), and $|X-X'|=10^{-9}\,$m. We assume that $C_k$ depend only on the masses: $C_k\equiv 2\sqrt{(M m_k \tilde\gamma_0 )/\pi}$, where $\tilde\gamma_0 =0.33\times 10^{18}$ s$^{-4}$ is a constant. We assume a symmetric situation: The size of the traced macro-fraction $(1-f)E$ in (\[GT\]) is the same as the size of the observed one $mac$ in (\[BT\]). Intuitively, for large enough macro-fractions for a given $T$, $|\Gamma_{X,X'}(t)|$ and $B^{mac}_{X,X'}(t)$ should decay rapidly and have small typical fluctuations, due to the large number of random phases in (\[aT\]), indicating decoherence and perfect distinguishability. This is confirmed in Fig. \[time\]. From Figs. \[time\]b,d we see that for 30 oscillators both functions decay rapidly, while for 10 oscillators they do not—the macro-fraction is too small for the given $T$. We further analyze, Fig. \[av\], the time averages $\left\langle |\Gamma_{X,X'}| \right\rangle = (1/\tau) \int^{\tau}_{0} dt |\Gamma_{X,X'}(t)|$, $\left\langle B^{mac}_{X,X'}\right\rangle = (1/\tau) \int^{\tau}_{0} dt B^{mac}_{X,X'}(t)$ as functions of the temperature $T$ with $\tau$ taken large ($\sim 1\,$s): Since both functions are non-negative, vanishing of their time averages is a good indicator of the functions having small typical fluctuations above zero. From Fig. \[av\]a one sees that, in the chosen parameter range, there is no formation of the broadcast state for a macro-fraction of $10$ oscillators: While $\left\langle |\Gamma_{X,X'}| \right\rangle\approx 0$ (the lower trace) for $T\approx 10^{-1}\,$K, $\left\langle B^{mac}_{X,X'}\right\rangle\approx 0.6$ (the upper trace). The state decoheres, but at too high a temperature to store a perfect record of the system’s position.
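The opposite temperature trends of (\[GT\]) and (\[BT\]) (hotter environments decohere better but record worse) follow from the monotonicity of $\text{cth}$ and $\text{th}$ in $T$ and can be reproduced with a small numerical sketch. Dimensionless, illustrative parameters are assumed ($C_k=m_k=1$, and the same frequency set used for the traced and the observed fraction, in the spirit of the symmetric situation above):

```python
import numpy as np

rng = np.random.default_rng(1)
Omega, dX = 0.3, 1.0                      # illustrative, dimensionless values
omegas = rng.uniform(3.0, 6.0, 30)        # one macro-fraction of 30 off-resonant oscillators

def alpha_sq(w, t):
    # |alpha_k(t)|^2 of Eq. (aT), with C_k = m_k = 1
    return w / (2.0 * (w ** 2 - Omega ** 2) ** 2) * (
        (np.cos(w * t) - np.cos(Omega * t)) ** 2
        + (np.sin(w * t) - (Omega / w) * np.sin(Omega * t)) ** 2)

def time_averages(beta):
    """Time averages of |Gamma| (Eq. GT) and B^mac (Eq. BT) at inverse temperature beta."""
    th = np.tanh(beta * omegas / 2.0)
    ts = np.linspace(0.0, 200.0, 2001)
    G = [np.exp(-0.5 * dX ** 2 * np.sum(alpha_sq(omegas, t) / th)) for t in ts]
    B = [np.exp(-0.5 * dX ** 2 * np.sum(alpha_sq(omegas, t) * th)) for t in ts]
    return np.mean(G), np.mean(B)

G_cold, B_cold = time_averages(beta=5.0)   # low temperature
G_hot, B_hot = time_averages(beta=0.05)    # high temperature
assert G_hot <= G_cold    # hotter environments decohere better...
assert B_hot >= B_cold    # ...but store a worse record of the position
```

The asserted inequalities hold pointwise in time, not only on average, since raising $T$ increases every $\text{cth}$ factor and decreases every $\text{th}$ factor.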
From (\[mama\]), the post-interaction partial state is then of a so-called Classical-Quantum (CQ) type [@CQ]. However, upon increasing the size to $30$ oscillators, both traces become practically zero up to $T\approx 10^{-2}\,$K, as one sees from Fig. \[av\]b (cf. Fig. \[time\]b,d). This serves as numerical evidence of the formation of the spectrum broadcast structure (\[br2\]), and hence objectivisation [@object], in the Quantum Brownian Motion model with a massive central system, initiated in a highly momentum-squeezed state, i.e. possessing large coherences in the position. This is our main result. The situation with initial position squeezing, for which the trajectories are given by $X(t;X_0)=X_0\sin(\Omega t)$, is quite different. Under exactly the same conditions as above there is neither decoherence nor orthogonalization for macro-fractions of either 10 or 30 oscillators, as Fig. \[time\_p\] shows. Actually, the plots suggest that both functions are periodic in time (even when increasing the macro-fraction size to 100), so there is a periodic revival of coherence. This in general agrees with the findings of [@Augusto].
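The weaker record for position squeezing can be traced back to (\[ak\_p\]): in the off-resonant regime the two terms there nearly cancel, whereas in (\[ak\]) they add up. A numerical sketch comparing the time-averaged squared amplitudes (illustrative dimensionless parameters):

```python
import numpy as np

C, m, Omega = 1.0, 1.0, 0.3               # illustrative, dimensionless values
t = np.linspace(0.0, 500.0, 50001)

averaged = {}
for omega in (3.0, 4.5, 6.0):             # off-resonant band, cf. Eq. (offres)
    A = (np.exp(1j * (omega + Omega) * t) - 1.0) / (omega + Omega)
    B = (np.exp(1j * (omega - Omega) * t) - 1.0) / (omega - Omega)
    pref = C / (2.0 * np.sqrt(2.0 * m * omega))
    mom = np.mean(np.abs(pref * (A + B)) ** 2)   # Eq. (ak):   the terms add up
    pos = np.mean(np.abs(pref * (A - B)) ** 2)   # Eq. (ak_p): the terms nearly cancel
    averaged[omega] = (mom, pos)

# the position-squeezed coupling is weaker on time average for every off-resonant mode
assert all(pos < mom for mom, pos in averaged.values())
```

Averaged over long times, the cross term contributes $+4/(\omega_k^2-\Omega^2)$ to the momentum-squeezed case and the opposite sign to the position-squeezed one, which is what the assertion probes.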
Because the system now has its own dynamics, the pointers ${| X(t) \rangle}$ are now states of motion—they evolve on a time-scale $t_S\sim 2\pi/\Omega$, rather than being static as in [@sfera], and a time-dependent SBS is formed with a reference to these evolving pointers. For the example studied in the previous Section, the respective time-scales are $t_S\sim 2\times10^{-8}s$ and from Fig. \[time\]b,d $t_{SBS}\sim 2\times 10^{-10}s$ so that the SBS is formed two orders of magnitude faster than the intrinsic system evolution. Thanks to it, all the observers will measure the same initial position (= the oscillation amplitude) $X_0$, leaving the (by now decohered) system undisturbed in its state of motion. But the traces of this motion are present in the environment not only through $X_0$—each state $\varrho_{mac}(X_0;t)$ depends on the whole trajectory $X(t;X_0)$ (cf. (\[final\])). The intuitive picture is that while the system rotates on its intrinsic timescale, the environment follows this movement and past the transient period a spectrum broadcast structure is being continuously formed, leading to a perception of objective position at each moment of time. Of course due to the neglected back reaction on the system, the structure (\[DB\]) is only a first approximation to this situation, as e.g. there is no dynamical production of coherences in the system’s position. The next logical step would be to include the back reaction. General Gaussian Initial States =============================== We recall [@gaus] that an arbitrary single-mode Gaussian state can be parametrized as follows: $ \varrho=e^{i\psi \hat a^\dagger \hat a}\hat D(\gamma)\hat S(\xi)\varrho_T\hat S(\xi)^\dagger \hat D(\gamma)^\dagger e^{-i\psi \hat a^\dagger \hat a}, $ where $\hat S(\xi)\equiv e^{(\xi^*\hat a ^2-\xi \hat a^{\dagger 2})/2}$, $\xi\equiv re^{i\theta}$, and $\varrho_T$ is some thermal state. 
Parametrizing each $\varrho_{0k}$ as above leads to the same expressions (\[GT\],\[BT\]) but with $\alpha_k(t)$ (cf. (\[ak\])) substituted by: $ \tilde\alpha_k(t)\equiv \text{ch} r\left[e^{-i\psi} \alpha_k(t) - e^{i(\psi+\theta)} \alpha_k(t)^*\text{th} r\right]. $ Introducing squeezing increases the temperature range where a dynamical SBS can be formed, by increasing the informational capacity of the environment; e.g. for $r=5$, the temperature range is increased up to $T=1\,$K [@Photonics]. Concluding remarks ================== Our findings generally agree with those of [@qbm_Zurek; @Augusto] in that there is a parameter range in QBM such that objectivity appears, but it has been obtained with a deeper analysis directly on quantum states, uncovering previously unnoticed dynamical spectrum broadcast structures. Our method, although developed here in a specific model, is in fact much more universal and can be generalized to test other decoherence models for a presence of dynamical forms of objectivity: One checks if states of the type (\[DB\]) are formed during the evolution. One immediate generalization is to allow for other trapping potentials than harmonic and other couplings than linear (see e.g. [@Pietro]). Another is to study finite-dimensional systems, e.g. spins [@spins], but a far more challenging generalization would be an application to quantum fields, leading to objective dynamical classical fields. Finally, a possible connection between Markovianity/non-Markovianity of the evolution and a formation of broadcast structures can also be studied [@Galve]. The generalized overlap $B^{mac}_{X,X'}(t)$ for thermal environment states {#genoverlp} ========================================================================== We calculate: $$\label{Bm} B^{mac}_{X,X'}(t)\equiv B\left[\varrho_{mac}(X;t),\varrho_{mac}(X';t)\right],$$ for $\varrho_{mac}(X;t)\equiv\bigotimes_{k\in mac} \hat U_k(X;t)\varrho_{0k}\hat U_k(X;t)^\dagger$ and $\varrho_{0k}$ thermal.
The above distinguishability measure [@Fuchs] factorizes over tensor products, $B\big(\bigotimes_k\varrho_k,\bigotimes_k\varrho'_k\big)=\prod_k B(\varrho_k,\varrho'_k)$, so that it is enough to calculate it for a single environmental oscillator. Dropping the explicit dependence on $k$ and denoting a single-system overlap by $B^{mic}_{X,X'}(t)$ we obtain: $B^{mic}_{X,X'}(t)= tr\sqrt{\sqrt{\varrho_0}\hat U(X';t)^\dagger\hat U(X;t)\varrho_0\hat U(X;t)^\dagger\hat U(X';t)\sqrt{\varrho_0}}$, where we have pulled the extreme left and right unitaries out of both square roots and used the cyclic property of the trace to cancel them out. Thus, modulo phase factors: $\hat U(X';t)^\dagger\hat U(X;t) \simeq\hat D\left(\alpha(t)(X-X')\right)\equiv\hat D(\eta_t)$. Next, assuming all the $\varrho_{0k}$ are thermal with the same temperature, we use the $P$-representation for the middle $\varrho_0$ under the square root in $B^{mic}_{X,X'}(t)$: $\varrho_0=\int d^2\gamma /(\pi\bar n)\exp\left(-|\gamma|^2/\bar n\right){| \gamma \rangle}{\langle \gamma |}$, where $\bar n=1/(e^{\beta\omega}-1)$, $\beta\equiv1/k_BT$. Denoting the Hermitian operator under the square root in $B^{mic}_{X,X'}(t)$ by $\hat A_t$, we obtain: $\hat A_t =\int d^2 \gamma/ (\pi\bar n)e^{-|\gamma|^2/\bar n}\sqrt{\varrho_0}{| \gamma+\eta_t \rangle}{\langle \gamma+\eta_t |}\sqrt{\varrho_0}$.
To perform the square roots above we use the Fock representation: $\varrho_0=\sum_n\left( \bar{n}^n/(\bar n +1)^{n+1} \right){| n \rangle}{\langle n |}$, so that: $$\begin{aligned} \hat A_t&=&\int\frac{d^2\gamma}{\pi\bar n}e^{-\frac{|\gamma|^2}{\bar n}}\sum_{m,n}\sqrt{\frac{\bar n^{m+n}}{(\bar n +1)^{m+n+2}}}\times\nonumber\\ & \times& \langle n|\gamma+\eta_t\rangle\langle\gamma+\eta_t|m\rangle {| n \rangle}{\langle m |} \label{A2}\end{aligned}$$ and the scalar products above read: $\langle n|\gamma+\eta_t\rangle=\exp \left(- |\gamma+\eta_t|^2/2 \right)\left( (\gamma+\eta_t)^n/\sqrt{n!}\right).$ The strategy is now to use this relation and rewrite each sum in (\[A2\]) as a coherent state but with a rescaled argument, and then try to rewrite (\[A2\]) as a single thermal state (with a different mean photon number than $\varrho_0$). To this end we note that: $$\begin{aligned} &&e^{-\frac{1}{2}|\gamma+\eta_t|^2}\sum_n\left(\frac{\bar n}{\bar n +1}\right)^{\frac{n}{2}}\frac{(\gamma+\eta_t)^n}{\sqrt{n!}}{| n \rangle}=\\&&e^{-\frac{1}{2}\frac{|\gamma+\eta_t|^2}{\bar n+1}}\left|\sqrt{\frac{\bar n}{\bar n +1}}(\gamma+\eta_t)\right\rangle.\end{aligned}$$ Substituting this into (\[A2\]) and reordering gives: $$\begin{aligned} &&\hat A_t =\frac{1}{\bar n +1}e^{-\frac{|\eta_t|^2}{1+2\bar n}}\int\frac{d^2\gamma}{\pi\bar n}e^{-\frac{1+2\bar n}{\bar n(\bar n +1)} \left|\gamma+\frac{\bar n}{1+2\bar n}\eta_t\right|^2}\times\nonumber\\ &&\times \left|\sqrt{\frac{\bar n}{\bar n +1}}(\gamma+\eta_t)\right\rangle\left\langle\sqrt{\frac{\bar n}{\bar n +1}}(\gamma+\eta_t)\right|.\label{A3}\end{aligned}$$ Note that since we are interested in $tr\sqrt{\hat A_t}$ rather than $\hat A_t$ itself, there is a freedom of rotating $\hat A_t$ by a unitary operator, in particular by a displacement. We now find such a displacement as to turn (\[A3\]) into the thermal form. 
Comparing the exponential under the integral in (\[A3\]) with the thermal form, we see that the argument of the subsequent coherent states should be proportional to $\gamma+\left(\bar n\right)/\left(1+2\bar n\right) \eta_t$. Simple algebra gives: $$\begin{aligned} &&\left|\sqrt{\frac{\bar n}{\bar n +1}}(\gamma+\eta_t)\right\rangle\simeq\\ &&\hat D\left(\sqrt{\frac{\bar n}{\bar n +1}}\frac{\bar n+1}{1+2\bar n}\eta_t\right)\left|\sqrt{\frac{\bar n}{\bar n +1}}\left(\gamma+\frac{\bar n}{1+2\bar n}\eta_t\right)\right\rangle\nonumber,\end{aligned}$$ where we have omitted the irrelevant phase factor as we are interested in the projector on the above state. Inserting the above relation into (\[A3\]), dropping the displacements, and changing the integration variable: $\gamma \to \sqrt{\bar n/\left(\bar n +1\right)}\left(\gamma+\left(1+2\bar n\right)\eta_t\right)$ gives: $$B^{mic}_{X,X'}(t)=e^{-\frac{|\eta_t|^2}{2+4\bar n}}\frac{1}{\sqrt{1+2\bar n}}\; tr\sqrt{\varrho_{th}\left(\frac{\bar n^2}{1+2\bar n}\right)},$$ where $\varrho_{th}(\bar n)$ is a thermal state with the mean photon number $\bar n$. We use the Fock expansion for $\varrho_{th}\left(\bar n^2/(1+2\bar n)\right)$: $$\begin{aligned} &&B^{mic}_{X,X'}(t)=e^{-\frac{|\eta_t|^2}{2+4\bar n}}\frac{1}{\sqrt{1+2\bar n}}\times\\ &&\times\left(1+\frac{\bar n^2}{1+2\bar n}\right)^{-\frac{1}{2}} \sum_n \left(\frac{\bar n^2/(1+2\bar n)}{1+\bar n^2/(1+2\bar n)}\right)^{\frac{n}{2}}\\ &&=\exp\left[-\frac{(X-X')^2}{2}|\alpha(t)|^2\text{th}\left(\frac{\beta\omega}{2}\right)\right],\end{aligned}$$ where we have used the definition of $\eta_t$ and $\bar n=1/(e^{\beta\omega}-1)$. Coming back to the generalized overlap for macro-fraction states (\[Bm\]) with the help of (\[B\]), we finally obtain the desired result (\[BT\]). We would like to thank R. Horodecki, P. Horodecki, J. Wehr, H. Gomonay, M. Lewenstein, P. Massignan and A. Lampo for discussions. JT was supported by the ERC Advanced Grant QOLAPS; JKK and JT acknowledge the financial support of National Science Centre project Maestro DEC-2011/02/A/ST2/00305.
It is analogous to e.g. the large wavelength condition in the illuminated sphere model, where photon wavelengths are much larger than the separation of the possible positions of the sphere; see e.g. [@sfera] and E. Joos and H. D. Zeh, Z. Phys. B - Cond. Matt. [**59**]{}, 223 (1985). A. S. Besicovitch, *Almost Periodic Functions*, Dover (1954). In preparation. arXiv:1412.3316 (2014).
--- address: - 'Anton Petrunin, Math. Dept., PSU, University Park, PA 16802, USA.' - 'petrunin@math.psu.edu' author: - Anton Petrunin title: Area minimizing polyhedral surfaces are saddle --- Preface {#preface .unnumbered} ======= At age fifteen I had to solve the following problem: [Problem]{} Consider all quadrangles $\square axby$ in the plane with fixed sides $|a-x|$, $|a-y|$, $|b-x|$ and $|b-y|$. Note that the value $$\alpha=\measuredangle axb+\measuredangle ayb$$ describes the quadrangle $\square axby$ up to congruence; let $A(\alpha)$ be the area of the quadrangle for given $\alpha$. Show that $A(\alpha)$ increases in $\alpha$ for $\alpha\le \pi$ and decreases in $\alpha$ for $\alpha\ge\pi$.[^1] The problem was not especially hard, beautiful or interesting. But a voice in my head said “one day it will be useful” — a strange warning that turned out to be true. Ten years later I was finishing graduate school. I was trying to prove something about minimal surfaces in Hadamard spaces (not important what it is). As I simplified the problem further and further, I eventually saw the problem above. It proved what I wanted and made me happy for a few days. Later, I generalized the statement yet further and it ended up in my paper [@petrunin]. The technique used in the original proof turned out to be redundant. On the other hand, the argument was simple and beautiful, so I decided that it was worth sharing. In addition, I have noticed recent closely related activity, see for example [@bobenko], but as far as I can see the idea below has not been noticed. Introduction {#introduction .unnumbered} ============ The following question is a simplified version of the one mentioned in the preface; still it contains all of its interesting features. Let $\DD$ be a simplicial complex homeomorphic to the disc in the plane (think of a convex polygon with fixed triangulation).
A piecewise linear map $F$ from $\DD$ to the Euclidean space will be called a *polyhedral disc*; that is, a map $F\:\DD\to \RR^3$ is a polyhedral disc if the restriction of $F$ to any triangle of $\DD$ is linear. Intuitively the polyhedral disc is a disc in $\RR^3$ glued from triangles with possible self-intersections. With slight abuse of notation, we make no distinction between vertices, edges, and triangles of $\DD$ and their $F$-images. We say that a vertex or an edge of $F$ is *interior* if it does not lie in the boundary $\partial\DD$. The *area* of a polyhedral disc $F$ is defined as the sum of the areas of all its triangles. A polyhedral disc is called *saddle* if one can not cut a hat from it by a plane. More precisely, we say that a plane $\Pi$ cuts an edge $[ab]$ if the endpoints $a$ and $b$ lie on opposite sides of $\Pi$. Then the polyhedral disc $F$ is called saddle if for any interior vertex $a$ of $F$ there is no plane which cuts each edge coming from $a$. Fix a positive integer $n$. Consider a class of polyhedral discs $F$ with the same boundary curve $F(\partial\DD)$ and with the total number of triangles at most $n$. A disc $F$ is called *area minimizing* if it has the minimal area in this class. [Theorem]{} Any area minimizing polyhedral disc in Euclidean space is saddle. Before we get into the proof, let us discuss an example. Assume we change the definition of area minimizing polyhedral disc a bit; instead of giving the upper bound for the number of triangles, we fix one triangulation. In this case, the conclusion of the theorem does not hold. [Figure: the tent (left) and a disc with smaller area (right).] A counterexample is shown on the left. This tent forms a polyhedral disc made from 12 triangles; it is mirror symmetric in each vertical plane which contains one white and the black vertex on the top. The black vertex is the only interior vertex of the disc; it can be cut by a plane from the rest, so this disc is not saddle. The disc on the left minimizes the area for the given triangulation.
The disc on the right has smaller area and a smaller number of triangles (there are 10 of them). In fact, the disc on the right is area minimizing for $n=10$; it has to be saddle according to the theorem, but it is also saddle for a trivial reason — it has no interior vertices. Proof {#proof .unnumbered} ===== Let $F$ be an area minimizing polyhedral disc. Without loss of generality, we can assume that one can not reduce the number of triangles in $F$ while keeping the area the same. In this case, $F$ satisfies the so-called *no triangle condition*; i.e., if three vertices of $F$, say $a$, $b$ and $c$, are pairwise joined by edges then $\triangle abc$ is a triangle of $F$. Indeed, if this is not the case, exchange the domain bounded by these three edges by $\triangle abc$; this procedure will not increase the area of $F$ and it will drop the number of triangles in the triangulation. *We can assume that the sum of all 4 angles which are adjacent to any interior edge of $F$ is at least $\pi$. Moreover if this sum is $\pi$, then the two adjacent triangles together form a flat convex quadrilateral.* Assume the contrary; i.e., there is an edge $[ab]$ in $F$ with two adjacent triangles $\triangle abx$ and $\triangle aby$ such that $$\measuredangle abx+\measuredangle aby+\measuredangle bax+\measuredangle bay<\pi.\leqno\bigstar$$ Let us cut these two triangles from $F$ and glue $\triangle axy$ and $\triangle bxy$ instead. This way we get a new polyhedral disc, say $H$, with a new triangulation. [Figure: the flip of the edge $[ab]$.] The construction of $H$ from $F$ will be called *flip of the edge $[ab]$*. Note that in performing the flip we always get a genuine triangulation; this follows since the original triangulation satisfies the no triangle condition.
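The area comparison behind the flip can also be checked numerically: for an explicit configuration satisfying condition $\bigstar$, the flipped pair of triangles has smaller total area. A sketch in Python (the coordinates are an arbitrary example chosen for illustration, not taken from the paper):

```python
import numpy as np

def tri_area(p, q, r):
    """Area of the triangle pqr in R^3."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def angle(p, q, r):
    """Angle at the vertex p in the triangle pqr."""
    u, v = q - p, r - p
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
x, y = np.array([0.5, 0.01, 0.0]), np.array([0.5, 0.0, 0.01])

# condition (star): the four angles adjacent to the edge [ab] sum to less than pi
four_angles = angle(b, a, x) + angle(b, a, y) + angle(a, b, x) + angle(a, b, y)
assert four_angles < np.pi

area_before = tri_area(a, b, x) + tri_area(a, b, y)   # triangles adjacent to [ab]
area_after = tri_area(a, x, y) + tri_area(b, x, y)    # after the flip, adjacent to [xy]
assert area_after < area_before                       # the flip strictly decreases the area
```

In this example $x$ and $y$ lie close to the segment $[ab]$, which is exactly how $\bigstar$ can be satisfied: the angles $\measuredangle axb$ and $\measuredangle ayb$ are then close to $\pi$.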
Let us show that $$\area F\ge \area H.\leqno\spadesuit$$ To do this, we construct two quadrilaterals $\square a'x'b'y'$ and $\square a''x''b''y''$ in the plane such that the diagonal $[a'b']$ divides $\square a'x'b'y'$, the diagonal $[x''y'']$ divides $\square a''x''b''y''$ and $$\begin{aligned} \triangle abx&\cong\triangle a'b'x', & \triangle axy&\cong\triangle a''x''y'', \\ \triangle aby&\cong\triangle a'b'y', & \triangle bxy&\cong\triangle b''x''y''.\end{aligned}$$ [Figure: the two quadrilaterals $\square a'x'b'y'$ and $\square a''x''b''y''$.] Note that $$\begin{aligned} \measuredangle x'a'y'+\measuredangle x'b'y' &=\measuredangle xab+\measuredangle yab+\measuredangle xba+\measuredangle yba\ge \\ &\ge \measuredangle xay+\measuredangle xby = \\ &=\measuredangle x''a''y''+\measuredangle x''b''y''. \end{aligned} \leqno\clubsuit$$ Applying $\bigstar$ and the Problem, we get $$\area (\square a'x'b'y')\ge \area (\square a''x''b''y''),$$ or equivalently, $$\area(\triangle abx)+\area(\triangle aby) \ge \area(\triangle axy)+\area(\triangle bxy).\leqno\diamondsuit$$ Hence $\spadesuit$ follows. Note that we have equality in $\diamondsuit$ if and only if we have equality in $\clubsuit$. Further, equality in $\clubsuit$ holds if and only if the quadrilateral $\square axby$ is flat and convex. Therefore, if the disc $F$ is in general position; i.e., no 4 vertices of the disc lie in one plane, then the Claim follows. Further, we can assume that the triangulation of $F$ is chosen in such a way that there is an approximation of $F$ by discs in general position such that no flip decreases its area. Hence the Claim follows in the general case. Now assume the disc $F$ is not saddle. In this case we can move one of its interior vertices, say $a$, so that all the edges coming from $a$ become shorter. To do this, choose a plane which cuts each edge coming from $a$, and move $a$ toward the plane along the segment perpendicular to the plane, say with unit speed.
Let us denote by $a(t)$ the position of $a$ after time $t$ and let $F_t$ be the obtained polyhedral disc. In general this deformation may not decrease the area. However it does decrease the area for the discs which satisfy the statement in the Claim. Indeed, note that the area of $F$ is completely determined by the triangulation and the lengths of its interior edges. Assume $\ell_1(t),\dots,\ell_k(t)$ are the lengths of the edges coming from $a(t)$. Then $$\area F_t=A(\ell_1(t),\dots,\ell_k(t)).$$ Applying the Problem again, we get that $$\frac{\partial A}{\partial\ell_i}\ge 0 \leqno\heartsuit$$ for each $i$. Thus, $t\mapsto\area F_t$ is decreasing for small $t$ if for at least one $i$ the inequality $\heartsuit$ is strict. Finally note that if for each $i$ we get equality in $\heartsuit$, then the sum of 4 adjacent angles at each edge from $a$ is exactly $\pi$. Therefore, from the second statement in the Claim, all the edges coming from $a$ lie in one plane. These edges can not point into a fixed open half-plane, simply because an angle of a triangle can not be bigger than $\pi$. In particular, there is no plane which cuts the edges coming from $a$, a contradiction. I wish to thank my adviser, Stephanie Alexander. [1]{} Petrunin, A. Metric minimizing surfaces. *Electron. Res. Announc. Amer. Math. Soc.* **5** (1999), 47–54. Bobenko, A.; Suris, Y. *Discrete differential geometry.* Graduate Studies in Mathematics, Vol. 98, American Mathematical Society, Providence, RI, 2008. [^1]: In particular the area of a quadrilateral with fixed side lengths is maximal when it is inscribed into a circle.
--- abstract: | The optical/near-infrared (OIR) region of the spectra of low-mass X-ray binaries appears to lie at the intersection of a variety of different emission processes. In this paper we present quasi-simultaneous OIR–X-ray observations of 33 XBs in an attempt to estimate the contributions of various emission processes in these sources, as a function of X-ray state and luminosity. A global correlation is found between OIR and X-ray luminosity for low-mass black hole candidate XBs (BHXBs) in the hard X-ray state, of the form $L_{\rm OIR}\propto L_{\rm X}^{0.6}$. This correlation holds over 8 orders of magnitude in $L_{\rm X}$ and includes data from BHXBs in quiescence and at large distances (LMC and M31). A similar correlation is found in low-mass neutron star XBs (NSXBs) in the hard state. For BHXBs in the soft state, all the near-infrared (NIR) and some of the optical emission is suppressed below the correlation, a behaviour indicative of the jet switching off/on in transition to/from the soft state. We compare these relations to theoretical models of a number of emission processes. We find that X-ray reprocessing in the disc and emission from the jets both predict a slope close to 0.6 for BHXBs, and both contribute to the OIR in BHXBs in the hard state, the jets producing $\sim90$ percent of the NIR emission at high luminosities. X-ray reprocessing dominates the OIR in NSXBs in the hard state, with possible contributions from the jets (only at high luminosity) and the viscously heated disc. We also show that the optically thick jet spectrum of BHXBs extends to near the $K$-band. OIR spectral energy distributions of 15 BHXBs help us to confirm these interpretations. We present a prediction of the $L_{\rm OIR}$–$L_{\rm X}$ behaviour of a BHXB outburst that enters the soft state, where the peak $L_{\rm OIR}$ in the hard state rise is greater than in the hard state decline (the well known hysteretical behaviour). 
In addition, it is possible to estimate the X-ray, OIR and radio luminosity and the mass accretion rate in the hard state quasi-simultaneously, from observations of just one of these wavebands, since they are all linked through correlations. Finally, we have discovered that the nature of the compact object, the mass of the companion and the distance/reddening can be constrained by quasi-simultaneous OIR and X-ray luminosities. author: - | D. M. Russell$^{1}$[^1], R. P. Fender$^{1}$, R. I. Hynes$^{2}$, C. Brocksopp$^{3}$, J. Homan$^{4}$, P. G. Jonker$^{5,6,7}$, M. M. Buxton$^{8}$\ $^1$School of Physics & Astronomy, University of Southampton, Highfield, Southampton, SO17 1BJ, UK\ $^2$Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803, USA\ $^3$Mullard Space Science Laboratory, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK\ $^4$Massachusetts Institute of Technology, Kavli Institute for Astrophysics and Space Research, 70 Vassar Street, Cambridge, MA 02139, USA\ $^5$SRON National Institute for Space Research, Sorbonnelaan 2, 3584 CA Utrecht, The Netherlands\ $^6$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 83, Cambridge, MA 02138, USA\ $^7$Astronomical Institute, Utrecht University, P.O. Box 80000, 3508 TA, Utrecht, The Netherlands\ $^8$Astronomy Department, Yale University, P.O. Box 208101, New Haven, CT 06520-8101, USA\ title: 'Global optical/infrared–X-ray correlations in X-ray binaries: quantifying disc and jet contributions' --- accretion, accretion discs, black hole physics, ISM: jets and outflows, X-rays: binaries Introduction ============ X-ray binaries are binary systems in which a compact object – either a black hole or a neutron star – accretes matter from a companion star. The origin of the emission from these sources is known in some wavebands and not so well established in others. Radio emission is produced by the synchrotron process in collimated outflows [see e.g.
@hjelet95], whereas X-ray emission could originate e.g. directly from the hot inner accretion disc, from a Comptonising corona, from an advective flow, or from the jets [@pout98; @naraet98; @mccoet00; @market01; @brocet04]. The optical/near-infrared (optical/NIR; OIR) is perhaps the waveband for which the dominating emission processes in XBs are least known. OIR emission has been extensively studied in outburst and quiescence (for reviews see @vanpet95 and @charco06; see also @chenet97). The complex variety of spectral, timing and luminosity properties of the OIR emission indicates that many processes may be contributing, each depending on a number of factors. In high mass X-ray binaries (HMXBs), the OIR light is largely dominated by the massive companion star in the system [@vandet72; @trevet80] with occasional additional contributions, for example from the reprocessing of X-rays. For low-mass neutron star X-ray binaries (NSXBs), there is strong evidence for a central X-ray source illuminating a disc that reprocesses the light to OIR wavelengths [see e.g. @mcclet79]. The optical emission of a low-mass black hole candidate X-ray binary (BHXB) is generally thought to arise in the outer accretion disc as the result of X-ray reprocessing [@cunn76; @vrtiet90], much like the NSXBs [e.g. @vanpet95]. Indeed, timing and spectral analysis in many cases has led to this conclusion [e.g. @wagnet91; @callet95; @obriet02; @hyneet02a; @hyne05]. However, reprocessed X-rays are often misleadingly *assumed* to dominate the OIR light in BHXBs. Some observations of BHXBs point towards alternative physical processes contributing (and sometimes dominating) the OIR emission [e.g. @homaet05]. Intrinsic thermal emission from the viscously heated outer accretion disc is expected to contribute significant light in the optical, through UV to X-ray wavelengths [@shaksu73; @franet02]. OIR behaviour has revealed this process to play a role in some BHXBs [e.g. 
@kuul98; @soriet99; @brocet01a; @brocet01b; @homaet05]. Thermal emission from the companion star is observed in some low-mass XBs in quiescence [e.g. @oke77; @bail92; @oroset96; @greeet01; @mikoet05]. In the last decade evidence has been mounting for the flat optically thick spectrum of the jets to extend from the radio to the OIR regime (@hanet92 [@fend01; @corbet02; @market03; @chatet03]; @brocet04 [@buxtet04; @homaet05]; for a review see @fend06). Behaviour that is not consistent with intrinsic disc or reprocessed emission has in the past been attributed to e.g. magnetic loop reconnection in the outer disc [e.g. @zuriet03], emission from a magnetically dominated compact corona [e.g. @merlet00] or emission from an advective region [e.g. @shahet03]. In BHXBs (both transient and persistent), properties of the emission in all wavebands are often related to changes in the X-ray spectrum. The two main X-ray spectral states are the *hard* (or *low/hard*) state, which is characterised by a hard power-law spectrum and strong variability and the *soft* (or *high/soft*; *thermal–dominant*) state, where a thermal spectrum dominates with a power-law contribution (see @mcclet06 for a review of X-ray states; see also @homaet01 [@fendet04; @homabe05]). The low luminosity ‘quiescent’ state is likely to be an extension to the hard state (@nara96 [@esinet97; @mcclet03; @fendet03]; @fendet04 [@gallet06]) but currently this is not universally accepted. Here, we treat quiescence as an extension to the hard state but we also show how our results differ when the quiescent data are removed. We hereafter class ‘optical’ and ‘NIR’ emission as that seen in the $BVRI$ ($\sim4400-7900\AA$) and $JHK$ ($\sim1.25-2.22\mu m$) wavebands, respectively. Towards a Unified Model for the OIR Behaviour in BHXBs ------------------------------------------------------ Power-law correlations between OIR and X-ray luminosities are naturally expected from a number of emission processes. 
[@vanpet94] showed that the optical luminosity of an X-ray reprocessing accretion disc varies as $L_{\rm OPT}\propto T^2 \propto L_{\rm X}^{0.5}a$, where $T$ is the temperature and $a$ is the orbital separation of the system, and that this correlation has been observed in a selection of low-mass XBs. $L_{\rm OIR}$–$L_{\rm X}$ correlations are also expected when the OIR originates in the viscously heated disc as both X-ray and OIR are linked through the mass accretion rate (see Section 3.2). In addition, OIR–X-ray correlations can be predicted if the OIR emission originates in the jets. Models of steady, compact jets demonstrate that the total jet power is related to the radio luminosity as $L_{\rm radio}\propto L_{\rm jet}^{1.4}$ [@blanko79; @falcbi96; @market01]. It was shown that the jet power is linearly proportional to the mass accretion rate in NSXBs and BHXBs in the hard state [@falcbi96; @miglfe06; @kordet06] and the X-ray luminosity scales as $L_{\rm X}\propto\dot{m}$ and $L_{\rm X}\propto\dot{m}^{2}$ for radiatively efficient and inefficient objects, respectively [e.g. @shaksu73; @narayi95; @kordet06]. The accretion in hard state BHXBs is found to be *radiatively inefficient* (the majority of the liberated gravitational potential is carried in the flow and not radiated locally), where jet-dominated states can exist, whereas in NSXBs, the accretion is *radiatively efficient*, and jet-dominated states are unlikely to exist [see also @fendet03]. We therefore have:

BHXBs: $L_{\rm radio}\propto L_{\rm jet}^{1.4}\propto \dot{m}^{1.4}\propto L_{\rm X}^{0.7}$

NSXBs: $L_{\rm radio}\propto L_{\rm jet}^{1.4}\propto \dot{m}^{1.4}\propto L_{\rm X}^{1.4}$

The correlation for BHXBs has been observed [@corbet03; @gallet03] and very recently, [-@miglfe06] have applied this technique to NSXBs and found $L_{\rm radio}\propto L_{\rm X}^{\ge 1.4}$, which is also consistent with the above NSXB model.
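As a quick numerical check (a sketch, not part of the original analysis), the predicted radio–X-ray slope follows directly from the two exponents in the scaling relations above: the jet–radio exponent of 1.4 and the accretion-rate exponent of the X-ray luminosity.

```python
def radio_xray_slope(xray_mdot_exponent, jet_radio_exponent=1.4):
    """Predicted slope b in L_radio ∝ L_X**b, assuming total jet power
    scales linearly with the accretion rate mdot, L_radio ∝ L_jet**1.4,
    and L_X ∝ mdot**xray_mdot_exponent (the assumptions stated in the text)."""
    return jet_radio_exponent / xray_mdot_exponent

# Radiatively inefficient accretion (hard-state BHXBs): L_X ∝ mdot**2 -> slope 0.7
bhxb_slope = radio_xray_slope(2.0)
# Radiatively efficient accretion (NSXBs): L_X ∝ mdot -> slope 1.4
nsxb_slope = radio_xray_slope(1.0)
```

If the optically thick jet spectrum is flat out to the OIR, the same slopes carry over to the $L_{\rm OIR}$–$L_{\rm X}$ relations discussed next.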
If the optically thick jet spectrum is indeed flat from the radio regime to OIR, we can expect the following correlations:

BHXBs: $L_{\rm OIR}\propto L_{\rm radio}\propto L_{\rm X}^{0.7}$

NSXBs: $L_{\rm OIR}\propto L_{\rm radio}\propto L_{\rm X}^{1.4}$

[@homaet05] discovered a correlation between the quasi-simultaneous NIR (which was shown to originate in the jets) and X-ray fluxes for GX 339–4 in the hard state, with a slope $F_{\rm NIR}\propto F_{\rm X}^{0.53\pm 0.02}$ (3–100 keV). To date, no other sources have been tested for jet OIR emission using OIR–X-ray correlations. It is now becoming clear that this simple but profitable technique of analysing the dependence of OIR and X-ray luminosities over many orders of magnitude may prove fruitful for the understanding of the emission mechanisms involved. The unification of jet–X-ray state activity is now underway; a steady jet exists in the hard state, which is accelerated as the X-ray spectrum softens, and is finally quenched as it passes the ‘jet line’ into the soft state [@fendet04]. A unification (if one exists) of the origins of OIR light from BHXBs in different spectral and luminosity states is desired to understand the behaviour of these systems. Furthermore, a measure of the level of OIR emission from jets may be used to constrain jet power estimates.

Methodology & Results
=====================

For this work, we have collected OIR and X-ray data from a large number of BHXBs, NSXBs and HMXBs in order to find relations that may help determine the processes responsible for the OIR light in these systems. We apply the technique of testing the dependency of OIR luminosity with X-ray luminosity for the three types of XB and between different X-ray states in BHXBs, and attempt to identify the dominant emission mechanisms for BHXBs at a given luminosity and X-ray state.
A literature search for quasi-simultaneous (no more than $\sim$ 1 day between observations; for sources with outbursts $\leq$1 month in length we only use data with separations of $\leq$0.1 times the outburst length) X-ray and OIR fluxes from BHXBs was conducted. Where possible, tabulated fluxes or magnitudes were noted. In some cases we obtained data directly from the authors. We also made use of the *DEXTER* applet provided by *NASA ADS* to extract data from light curves where the data themselves were unattainable. For each source, the best estimates of its distance, optical extinction $A_{\rm V}$ and HI absorption column $N_{\rm H}$ were sought. Table 1 lists the properties of each BHXB for which data were obtained. We used non-simultaneous OIR–X-ray luminosities only in quiescence for some sources, and for these we have included errors that encompass all observed values of the quiescent flux in one of the two wavebands. ![image](fig-bhxbs.ps){height="22cm" width="17cm"} ![image](fig-nshmxbs.ps){height="22cm" width="17cm"} ----------------- --------------------- --------------------- --------------------- ------------------ -------------------------------- ------------------------- -------------- ------------ Source Distance Period $M_{\rm co}$ $M_{\rm cs}$ $A_{\rm V}$, $N_{\rm H}$ / $q_{\rm cs}$ $\Delta t$ / Fluxes - = alternative / kpc / hours / $M_\odot$ / $M_\odot$ 10$^{21} cm^{-2}$ (band, days data name (ref) (ref) (ref) (ref) (refs) ref) references (I) (II) (III) (IV) (V) (VI) (VII) (VIII) (IX) M31 r2–70 784$\pm$30 192$^{+290}_{-120}$ - - 1.0$\pm$0.3, - 2.0 12 (1) (12) 1.8$\pm$0.5 (12) GRO J0422+32 2.49$\pm$0.30 5.09 3.97$\pm$0.95 0.46$\pm$0.31 0.74$\pm$0.09, 0.76$^{+0.04}_{-0.46}$ 0.5 2,41,42 = V518 Per (2) (5) (15) (15) 1.6$\pm$0.4 (2,23) (R, 2) LMC X–3 50$\pm$10 $\sim$40.8 $\sim$9–10 $\sim$4–8 0.19$\pm$0.03, - 1.0 43,44 (3,4) (13) (16) (16) 0.32$^{+0.31}_{-0.07}$ (24–26) A0620–00 1.2$\pm$0.4 7.75 11.0$\pm$1.9 0.74$\pm$0.13 1.17$\pm$0.08, 
0.58$^{+0.25}_{-0.22}$ 1.0 33,45 = V616 Mon (5) (5) (17) (17) 2.4$^{+1.1}_{-1.0}$ (5,27) (V, 33) XTE J1118+480 1.71$\pm$0.05 4.08 6.8$\pm$0.4 0.28$\pm$0.05 0.053$^{+0.027}_{-0.016}$, 0.55$^{+0.15}_{-0.28}$ 1.0 6,41–43, = KV UMa (6) (5) (15) (15) 0.11$\pm$0.04 (6) (R, 34) 46–49 GRS 1124–68 5.5$\pm$1.0 10.4 6.0$^{+1.5}_{-1.0}$ 0.80$\pm$0.11 0.9$\pm$0.1 (5,28), 0.55$\pm$0.05 - 28,50–52 = GU Mus (5) (5) (18) (18) 1.58$^{+0.42}_{-0.58}$ &(B–V, 35) GS 1354–64 $\geq$27 (7) 61.1 $>$7.83$\pm$0.50 $>$1.02$\pm$0.06 2.60$\pm$0.31, - 1.0 43,53 = BW Cir ($\sim$33$\pm$6) (7) (15) (15) 37.2$^{+14}_{-7}$ (7,29) 4U 1543–47 7.5$\pm$0.5 26.8 9.4$\pm$1.0 2.45$\pm$0.15 1.55$\pm$0.15, 0.68$^{+ 0.11}_{-0.07}$ 1.0 43,54,55 = IL Lup (5) (5) (15) (15) 4.3$\pm$0.2 (5) (R, 36) XTE J1550–564 5.3$\pm$2.3 37.0 10.6$\pm$1.0 1.30$\pm$0.43 2.5$\pm$0.6, 0.7$\pm$0.1 1.0 19,56,57 = V381 Nor (5) (5) (19) (19) 8.7$\pm$2.1 (5) (V, 19) GRO J1655–40 3.2$\pm$0.2 62.9 7.02$\pm$0.22 2.35$\pm$0.14 3.7$\pm$0.3 (5), $\sim$1.0 1.0 20,43, = Nova Sco 1994 (5) (5) (20) (20) 6.66$\pm$0.57 (20) (B–K, 37) 58–60 GX 339–4 8$^{+7.0}_{-1.0}$ 42.1 $\sim$5.8 $\sim$0.52 3.9$\pm$0.5, $\leq$0.3 1.0 43,60–64 = V821 Ara (8) (5) (21) (21) 6$^{+0.9}_{-1.7}$ (5,30) (B–K, 38) GRO J1719–24 2.4$\pm$0.4 14.7 $\sim$4.9 $\sim$1.6 2.8$\pm$0.6, - 0.5 41 = Nova Oph 1993 (9) (14) (9) (9) 4$^{+0.0}_{-2.6}$ (9,31,32) XTE J1720–318 8$^{+7}_{-5}$ - - - 6.9$\pm$0.1, - 1.0 43,65 = INTEGRAL1 51 (10) 12.4$\pm$0.2 (10$^{\ast}$) XTE J1859+226 6.3$\pm$1.7 9.17 5–12 0.68–1.12 1.80$\pm$0.37, 0.59$\pm$0.04 1.0 22,39,43, = V406 Vul (5) (5) (22) (22) 8$\pm$2 (5) (R, 39) 66,67 GRS 1915+105 9.0$\pm$3.0 816 14.0$\pm$4.4 0.81$\pm$0.53 19.6$\pm$1.7, - 1.0 68 = V1487 Aql (11) (5) (15) (15) 35$\pm$3 (11) V404 Cyg 4.0$^{+2.0}_{-1.2}$ 155.28 10.0$\pm$2.0 0.65$\pm$0.25 3.65$\pm$0.35 0.87$\pm$0.03 0.5 69–71 = GS 2023+338 (5) (5) (15) (15) 6.98$\pm$0.76 (5,27) (R, 40) ----------------- --------------------- --------------------- --------------------- ------------------ 
-------------------------------- ------------------------- -------------- ------------ Columns give: (I) source names; (II) distance estimate; (III) orbital period of the system; (IV) mass of the compact object (black hole here) in solar units; (V) mass of the companion star in solar units; (VI) interstellar reddening in $V$-band, and interstellar HI absorption column ($^{\ast}A_{\rm V}$ is estimated here from the relation $N_{\rm H} = 1.79 \times 10^{21} cm^{-2}A_{\rm V}$; @predet95); (VII) the companion star OIR luminosity contribution in quiescence; (VIII) The maximum time separation, $\Delta$t, between the OIR and X-ray observations defined as quasi-simultaneous; (IX) References for the quasi-simultaneous OIR and X-ray fluxes collected. References: see caption of Table 3. ------------------ --------------------- ------------ ---------------- ---------------- -------------------------------- --------------------- -------------- ------------ Source Distance Period $M_{\rm co}$ $M_{\rm cs}$ $A_{\rm V}$, $N_{\rm H}$ / $q_{\rm cs}$ $\Delta t$ / Fluxes - = alternative / kpc / hours / $M_\odot$ / $M_\odot$ 10$^{21} cm^{-2}$ (band, days data name (ref) (ref) (ref) (ref) (refs) ref) references (I) (II) (III) (IV) (V) (VI) (VII) (VIII) (IX) IGR J00291+5934 4$^{+3}_{-0}$ 2.457 1.4 0.039–0.160 1.56$\pm$0.22 - 1.0 97,98 (72) (77) (72) (72) 2.8$\pm$0.4 (77$^{\ast}$) 4U 0614+09 3.0$^{+0.0}_{-2.5}$ 0.25–0.33 1.4 $\leq$1.9 (80) 1.41$\pm$0.17 (78) - 0.5 99,100 = V1055 Ori (73) (78) (80) ($\sim$1.45) 2.99$\pm$0.01 (87) CXOU 132619.7 5.0$\pm$0.5 - - - 0.34$\pm$0.03 (88) 0.8$\pm$0.1 - 74,88 –472910.8 (74) 0.9$\pm$0.1 (74) (B, 88) Cen X–4 1.2$\pm$0.3 15.1 1.3$\pm$0.8 0.31$\pm$0.27 0.31$\pm$0.16 (89) 0.75$\pm$0.05 (R), 0.5 79,95, = V822 Cen (75) (79) (15) (85) 0.55$\pm$0.16 (90) 101,102 4U 1608–52 3.3$\pm$0.5 $\sim$12.9 $\sim$1.4 $\sim$0.32 4.65$^{+3.25}_{-0.18}$ - 1.0 43,81 = QX Nor (5) (5) (81) (81) 15$\pm$5 (14,81,91) Sco X–1 2.8$\pm$0.3 18.9 1.4 0.42 0.70$\pm$0.23 - 1.0 43,103 = 
V818 Sco (5) (5) (82) (82) 1.25$\pm$0.41 (14$^{\dagger}$) SAX J1808.4–3658 2.5$\pm$0.1 2.0 $\geq$1.7 (83) 0.05–0.10 0.68$^{+0.37}_{-0.15}$ (86) 0.0$^{+0.3}_{-0.0}$ 1.0 86,104, = XTE J1808–369 (76) (5) ($\sim$1.7) (86) 0.11$\pm$0.03 (92) (V, 96) 105 Aql X–1 5.15$\pm$0.75 19.0 $\sim$1.4 $\sim$0.6 1.55$\pm$0.31 (93) - 0.5 43, = V1333 Aql (5) (5) (84) (84) 4.0$^{+3.8}_{-3.2}$ (94) 106–112 ------------------ --------------------- ------------ ---------------- ---------------- -------------------------------- --------------------- -------------- ------------ Columns give: (I) source names; (II) distance estimate; (III) orbital period of the system; (IV) mass of the compact object (neutron star here) in solar units; (V) mass of the companion star in solar units; (VI) interstellar reddening in $V$-band, and interstellar HI absorption column ($^{\ast}A_{\rm V}$ and $^{\dagger}N_{\rm H}$ are estimated here from the relation $N_{\rm H} = 1.79 \times 10^{21} cm^{-2}A_{\rm V}$; @predet95); (VII) the companion star OIR luminosity contribution in quiescence; (VIII) The maximum time separation, $\Delta t$, between the OIR and X-ray observations defined as quasi-simultaneous; (IX) References for the quasi-simultaneous OIR and X-ray fluxes collected. References: see caption of Table 3. 
[lllllllll]{} Source = alternative name&Compact&Distance &$A_{\rm V}$ (ref)&$N_{\rm H}$ / 10$^{21} cm^{-2}$&$\Delta t$&Fluxes -\ &object &/ kpc (ref)& &(ref) &/ days &data references\ (I)&(II)&(III)&(IV)&(V)&(VI)&(VII)\ ------------------------------------------------------------------------ SMC X–3 &NS &58.1$\pm$5.6 (113) &1.5$\pm$0.7 (120) &2.9$\pm$1.4 (120) &-&130,131\ ------------------------------------------------------------------------ CI Cam = XTE J0421+560 &unknown&5$^{+3}_{-4}$ (114)&3.2$\pm$1.2 (121) &5$\pm$2 (124) &1.0&43,124,132\ ------------------------------------------------------------------------ LMC X–4 &NS &50$\pm$10 (3,4) &0.31$\pm$0.06 (122$^{\ast}$) &0.55$\pm$0.10 (122) &-&43,133\ ------------------------------------------------------------------------ A0535+26 = HDE 245770 &NS &2$^{+0.4}_{-0.7}$ (115) &2.3$\pm$0.5 (115)&11.8$\pm$1.5 (125) &1.0&125\ ------------------------------------------------------------------------ GX 301–2 = 4U 1223–62 &NS &5.3$\pm$0.1 (116) &5.9$\pm$0.6 (116) &20$\pm$10 (126) &-&116,126\ ------------------------------------------------------------------------ V4641 Sgr = SAX 1819.3–2525&BH &9.6$\pm$2.4 (5)&1.0$\pm$0.3 (5) &2.3$\pm$0.1 (127) &0.2&43,134,135\ ------------------------------------------------------------------------ KS 1947+300 = GRO J1948+32 &NS &10$\pm$2 (117) &3.38$\pm$0.16 (117) &34$\pm$30 (117) &1.0&43,117\ ------------------------------------------------------------------------ Cyg X–1 = HD 226868 &BH &2.1$\pm$0.1 (118) &2.95$\pm$0.21 (123) &6.21$\pm$0.22 (128)&1.0&43,136–138\ ------------------------------------------------------------------------ Cyg X–3 = V1521 Cyg &unknown&10.3$\pm$2.3 (119) &20$\pm$5 (119) &85$\pm$1 (129) &1.0&43, 139\ Columns give: (I) source names; (II) BH = black hole, NS = neutron star; (III) distance estimate; (IV) interstellar reddening in $V$-band ($^{\ast}$$A_{\rm V}$ is estimated here from the relation $N_{\rm H} = 1.79 \times 10^{21} cm^{-2}A_{\rm V}$; @predet95); 
(V) interstellar HI absorption column; (VI) The maximum time separation, $\Delta$t, between the OIR and X-ray observations defined as quasi-simultaneous; (VII) References for the quasi-simultaneous OIR and X-ray fluxes collected. References for Tables 1, 2 & 3: (1) [-@stanga98]; (2) [@geliet03]; (3) [-@boydet00]; (4) [-@kova00]; (5) [-@jonket04]; (6) [@chatet03]; (7) [-@casaet04]; (8) [-@zdziet04]; (9) [-@dellet94]; (10) [-@cadoet04]; (11) [-@chapco04]; (12) [-@willet05]; (13) [-@hutcet03]; (14) [-@liuet01]; (15) [-@rittet03]; (16) [-@cowlet83]; (17) [-@geliet01]; (18) [@esinet00]; (19) [@oroset02]; (20) [-@hyneet98]; (21) [-@hyneet03]; (22) [@hyneet02a]; (23) [-@shraet97]; (24) [-@soriet01]; (25) [-@wilmet01]; (26) [-@wanget05]; (27) [-@konget02]; (28) [-@ebiset94]; (29) [-@kitaet90]; (30) [-@zdziet98]; (31) [-@tana93]; (32) [-@hyne05]; (33) [-@mcclet95]; (34) [@torret04]; (35) [-@oroset96]; (36) [-@oroset98]; (37) [@greeet01]; (38) [-@shahet01]; (39) [-@zuriet02]; (40) [-@casaet93]; (41) [@brocet04]; (42) [-@garcet01]; (43) *RXTE* ASM; (44) [@brocet01a]; (45) [@kuul98]; (46) [-@mcclet03]; (47) [-@kiziet05]; (48) [-@uemuet00]; (49) [-@hyneet05]; (50) [-@kinget96]; (51) [-@dellet98]; (52) [-@sutaet02]; (53) [-@brocet01b]; (54) [@buxtet04]; (55) [-@kaleet05]; (56) [-@hameet03]; (57) [@jainet01b]; (58) [-@market05]; (59) [-@torret05]; (60) [-@chatet02]; (61) [@corbet02]; (62) [@homaet05]; (63) [-@kuulet04]; (64) [-@israet04]; (65) [-@nagaet03]; (66) [-@tomset03]; (67) [-@haswet00]; (68) [-@fendpo00]; (69) [-@hyneet04]; (70) [-@zycket99]; (71) [@hanet92]; (72) [-@gallma05]; (73) [-@branet02]; (74) [-@rutlet02]; (75) [-@chevet89]; (76) [-@intzet01]; (77) [-@shawet05]; (78) [-@neleet04]; (79) [-@campet04]; (80) [-@vanset00]; (81) [-@wachet02]; (82) [-@steeet02]; (83) [-@campet02]; (84) [-@welset00]; (85) [-@torret02]; (86) [-@wanget01]; (87) [-@schu99]; (88) [-@hagget04]; (89) [-@blaiet84]; (90) [-@rutlet01]; (91) [-@grinli78]; (92) [-@campet05]; (93) [-@chevet99]; (94) 
[-@campet03]; (95) [-@shahet93]; (96) [-@burdet03]; (97) [-@torret06]; (98) [-@steeet04]; (99) [-@machet90]; (100) Russell et al. (in preparation); (101) [-@kaluet80]; (102) [-@caniet80]; (103) [-@mcnaet03]; (104) [-@campst04]; (105) [-@homeet01]; (106) [-@charet80]; (107) [-@jainet99]; (108) [-@jainet00]; (109) [-@maitet03]; (110) [-@maitet04a]; (111) [-@maitet04b]; (112) [-@maitet05]; (113) [-@cole98]; (114) [-@miodet04]; (115) [-@steeet98]; (116) [-@kapeet95]; (117) [-@neguet03]; (118) [-@masset95]; (119) [-@vanket96]; (120) [-@lequet92]; (121) [-@hyneet02b]; (122) [-@naiket03]; (123) [-@wuet82]; (124) [-@orlaet00]; (125) [-@orlaet04]; (126) [-@mukhet04]; (127) [-@dicket90]; (128) [-@schuet02]; (129) [-@staret03]; (130) [-@claret78]; (131) [-@cowlet04]; (132) [-@claret00]; (133) [-@heemet89]; (134) [-@katoet99]; (135) [-@buxtet05b]; (136) [-@bochet98]; (137) [-@brocet99b]; (138) [-@brocet99a]; (139) [-@kochet02] The X-ray unabsorbed 2-10 keV flux was calculated for all X-ray data[^2]. We made note of the X-ray state of each source on each observation, as defined by the analysis by authors in the literature. We were unable to apply strict definitions to the spectral states due to the differing nature (e.g. X-ray energy ranges and variability analysis) of each data set. We have therefore used the judgement of the authors to determine the spectral states; a method likely to be much more accurate than any conditions that could be imposed by us. We assume a power-law with a spectral index $\alpha=-$0.6 (photon index $\Gamma=1.6$) where $F_{\nu}\propto \nu^{\alpha}$, when the source is in the hard state, and a blackbody at a temperature of 1 keV for soft state data. These models for the X-ray spectrum are the same as those adopted by [@gallet03], and altering the values in the models to other reasonable approximations does not significantly change the estimated X-ray luminosities. 
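The assumed hard-state spectral model (photon index $\Gamma=1.6$) reduces the 2–10 keV energy flux to a one-line integral. The sketch below is illustrative only: in the actual analysis the count-rate-to-flux conversion is done with *Web-PIMMS*, and the normalisation here is an arbitrary placeholder.

```python
def powerlaw_energy_flux(norm, gamma, e_lo=2.0, e_hi=10.0):
    """Energy flux of a photon power law N(E) = norm * E**(-gamma)
    (photons cm^-2 s^-1 keV^-1), integrated over [e_lo, e_hi] keV.
    Result is in keV cm^-2 s^-1; the closed form is valid for gamma != 2."""
    a = 2.0 - gamma
    return norm * (e_hi ** a - e_lo ** a) / a

# Hard-state model assumed in the text: Gamma = 1.6 (spectral index alpha = -0.6).
# norm=1.0 is a placeholder; a real value comes from the instrument conversion.
flux_2_10 = powerlaw_energy_flux(norm=1.0, gamma=1.6)
```

Changing the model within reasonable bounds (as noted above) shifts this integral only modestly, which is why the derived X-ray luminosities are insensitive to the exact spectral assumptions.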
For GRS 1915+105, we use data from the radio-bright plateau state, which is approximately analogous to the hard state [e.g. @fendpo00]. The *NASA* tool *Web-PIMMS* was used to convert from instrument X-ray counts per unit time (e.g. day-averaged $RXTE$ ASM counts s$^{-1}$). [@brocet04] also provide a table of approximate instrument counts–flux conversion factors used in this work. For the OIR luminosities, data were collected from the optical $B$-band at 440 nm to the near-infrared $K$-band at 2220 nm. OIR absorbed fluxes were de-reddened using the best-known value of the extinction $A_{\rm V}$ to each source and the dependence of extinction with wavelength given by [-@cardet89]. For OIR data at fluxes low enough for the companion star to significantly contribute (i.e. *quiescence* for most low-mass XBs), the data were discarded if the fractional contribution of the companion had not been estimated in the OIR band of the data. The fractional contribution of the companion (column 7 of Table 1) was subtracted from the low-flux OIR data in order to acquire the flux from all other emission processes in the system. This contribution is not well constrained in many cases due to the uncertain spectral type of the companion star [e.g. @haswet02]. We have therefore propagated the errors associated with this into the errors of the OIR luminosities for all quiescent data. In sources for which the companion is comparatively bright [e.g. GRO J1655–40; @hyneet98], its contribution has been subtracted in outburst in addition to quiescence. The intrinsic (de-reddened) OIR and X-ray luminosities were then calculated given the best-known estimate of the distance to each source (Table 1). We adopted the approximation $L_{\rm OIR}\approx \nu F_{\nu,{\rm OIR}}$ to estimate the OIR luminosity (we are approximating the spectral range of each filter to the central wavelength of its waveband).
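The de-reddening and $L_{\rm OIR}\approx\nu F_{\nu}$ steps described above can be sketched as follows. This is illustrative only: the zero-point flux density `f0_jy` depends on the photometric system, and the band extinction $A_\lambda$ (derived in practice from $A_{\rm V}$ via the extinction law) is taken here as a given input.

```python
import math

KPC_CM = 3.086e21   # centimetres per kiloparsec
C_CM_S = 2.998e10   # speed of light, cm/s

def oir_luminosity(mag, a_lambda, d_kpc, wavelength_nm, f0_jy):
    """De-redden an apparent magnitude, convert to a flux density, and
    scale by 4*pi*d^2 to a monochromatic luminosity nu*L_nu in erg/s
    (each filter approximated by its central wavelength, as in the text)."""
    f_nu_jy = f0_jy * 10.0 ** (-0.4 * (mag - a_lambda))  # de-reddened flux density
    f_nu = f_nu_jy * 1.0e-23                             # Jy -> erg s^-1 cm^-2 Hz^-1
    nu = C_CM_S / (wavelength_nm * 1.0e-7)               # central frequency, Hz
    d_cm = d_kpc * KPC_CM
    return 4.0 * math.pi * d_cm ** 2 * nu * f_nu
```

For example, a $V$-band point would use the $V$ central wavelength (~550 nm) and the corresponding zero-point; the numbers in any real application are band-dependent.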
The errors associated with the luminosities are propagated from the errors quoted in the original data. Where no errors are quoted, we apply a conservative error of 30%. Errors associated with estimates of the distance, extinction $A_{\rm V}$ and HI absorption column $N_{\rm H}$ were sought (Table 1). Where these limits are not directly quoted in the reference, we used the most conservative estimates implied from the text. These errors were not propagated into the errors associated with the luminosities we derive because the resulting plots would be dominated by error bars; however, we also show one error bar for each data set, representing the average total systematic 1$\sigma$ errors associated with each luminosity data point. In addition, we searched the literature for quasi-simultaneous OIR and X-ray fluxes from a number of NSXBs in a hard X-ray state [mostly atoll sources in the ‘island state’, defined by the X-ray colour–colour diagram; see e.g. @hasiva89; @miglfe06], and HMXBs in both the hard and soft X-ray state. The same methodology was adopted in calculating the intrinsic OIR and X-ray luminosities of the NSXBs and the HMXBs. Tables 2 and 3 list the properties of the NSXBs and HMXBs, respectively. A literature search for OIR Spectral Energy Distributions (SEDs) of BHXBs was also conducted in order to shed further light on the nature of the emission. Fluxes were used when two or more OIR wavebands were quasi-simultaneous. No X-ray fluxes were required for the SEDs; however, the X-ray state of the source on each date was noted. Where the companion star significantly contributes to the emission, the estimated wavelength-dependent contribution of the companion was subtracted (adopting the quoted contributions given in the papers from which the data were acquired).

Results
-------

Quasi-simultaneous OIR and X-ray luminosities are plotted for 15 BHXBs in the hard state (Fig. 1$a$), 9 BHXBs in the soft state (Fig. 1$b$), 8 NSXBs in the hard state (Fig.
2$a$) and 9 HMXBs (Fig. 2$b$). We have classed LMC X–3 as a low-mass XB (BHXB) because [@brocet01a] found from 6 years of observations of this source that its optical emission is dominated by long-term variations rather than by the bright companion star in the system, due to its persistent nature (and we are using data from their paper). For BHXBs in the hard state, a strong $L_{\rm OIR}$–$L_{\rm X}$ correlation exists over 8 orders of magnitude in X-ray luminosity. The data from all 15 individual sources lie close to this correlation but deviations in the slopes and normalisations of individual sources are present. The slope of the global correlation is $\beta$=0.61$\pm$0.02, where $L_{\rm OIR} \propto L_{\rm X}^{\beta}$ (we do not take into account the $L_{\rm OIR}$ and $L_{\rm X}$ error bars in calculating $\beta$). To calculate correlations we use the package *gnuplot* and apply equal weighting to each data point. The relations will be biased towards sources with more data points, but we argue that all data points are equally important because we have used a maximum of one data point per day per waveband per source, and the sources tend to vary on timescales $\geq$ days. In the soft state, the optical data ($BVRI$-bands) lie close to the hard state correlation for BHXBs (Fig. 1$b$), and the NIR data ($JHK$-bands) lie below the correlation. A correlation is also found for the NSXBs (Fig. 2$a$) over 7 orders of magnitude in X-ray luminosity. Its slope, $L_{\rm OIR} \propto L_{\rm X}^{0.63\pm0.04}$, is remarkably similar to the OIR–X-ray correlation of the BHXBs, but with a lower normalisation. Taking the two correlations, we find that at a given X-ray luminosity (2–10 keV), the OIR luminosity of a BHXB in the hard state is on average 19.4 times larger (1.29 dex) than that of an NSXB in the hard state in the X-ray range of the data.
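The equal-weight fitting described above amounts to ordinary least squares in log–log space. A minimal sketch with synthetic data (numpy standing in for *gnuplot*; the sample size and scatter are invented for illustration), together with the normalisation offset implied by the central values of the two hard-state fits quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hard-state BHXB points spanning ~8 dex in L_X, scattered
# about the global correlation reported in the text (illustration only).
log_lx = rng.uniform(30.0, 38.0, 300)
log_loir = 13.2 + 0.61 * log_lx + rng.normal(0.0, 0.3, 300)

# Equal weighting for every data point, as in the text.
beta, log_norm = np.polyfit(log_lx, log_loir, 1)

def bhxb_nsxb_oir_ratio(lg_lx):
    """Hard-state BHXB-to-NSXB OIR luminosity ratio at log10(L_X), using the
    central values of the two non-quiescent fits quoted in the text
    (uncertainties on the slopes and normalisations are ignored here)."""
    return 10.0 ** ((13.2 + 0.60 * lg_lx) - (9.7 + 0.66 * lg_lx))
```

At mid-range X-ray luminosities this ratio comes out around a factor of ~20, in line with the average offset of 19.4 (1.29 dex) quoted above.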
If we neglect the data from sources in quiescence ($L_{\rm X}<10^{33.5}$ erg s$^{-1}$), the hard state fits are $L_{\rm OIR} = 10^{13.2\pm 0.8} L_{\rm X}^{0.60\pm0.02}$ for BHXBs and $L_{\rm OIR} = 10^{9.7\pm 1.8} L_{\rm X}^{0.66\pm0.05}$ for NSXBs. These are very similar to those with quiescent data included. The similarity of the fits strengthens the case for the quiescent state being an extension to the hard state. For the analysis in the following Sections, we use the fits with quiescent data included, as they are similar enough either way. It is also interesting that the fit to the BHXB quiescent data alone has a slope $L_{\rm OIR} \propto L_{\rm X}^{0.67\pm 0.14}$ (there are only 4 quiescent NSXB data points). From Fig. 2$b$ it is clear that the OIR luminosity of HMXBs is typically orders of magnitude larger than that of LMXBs, and does not appear to correlate with $L_{\rm X}$. This is consistent with the high-mass companion star dominating the OIR emission. The large range in OIR luminosities between sources is likely to be due to the differing masses and spectral types of the companions. Since this interpretation agrees with the evidence in the literature, we will not discuss the HMXBs in further detail (although they are mentioned in Section 3.5). There are clear advantages and disadvantages of these OIR–X-ray compilations compared to the $L_{\rm radio}$–$L_{\rm X}$ approach of [@gallet03]. The hard state OIR–X-ray correlation in BHXBs includes data from many BHXBs in low-luminosity *quiescent* states (7 sources with $10^{30}<L_{\rm X}<10^{33.5}$ erg s$^{-1}$), which was not possible for the radio–X-ray correlation due to radio detector limits [but see @gallet06]. The BHXB sample also includes sources in the LMC and M31, which are too distant (and hence faint) to observe at radio wavelengths. However, unlike the radio–X-ray comparison, HMXBs cannot be included in these OIR–X-ray plots as the OIR emission is dominated by the companion.
In addition, two sources situated in the Galactic plane (1E 1740.7–2942 and GRS 1758–258) were included in the radio–X-ray correlation but are not observable at OIR wavelengths due to the high levels of extinction towards them.

![image](fig-seds.ps){height="22cm" width="17cm"}

| Source | X-ray state | Dates (MJD) | X-ray state | Dates (MJD) | X-ray state | Dates (MJD) | References |
|---|---|---|---|---|---|---|---|
| GRO J0422+32 | hard | | | | | | 1 - 4 |
| LMC X–3 | hard | 50151, 50683 | soft | 50324, 51038 | unknown | 46804 | 5 - 7 |
| A0620–00 | hard | 42792, 43097-100, 42859 | soft | 42650-2, 42703 | | | 8 - 11 |
| XTE J1118+480 | hard | | | | | | 12, 13$^2$ |
| GRS 1124–68 | hard | 48453, 48622 | soft | 48367-71 | | | 14 - 16 |
| GS 1354–64 | hard | 50778, 50782, 50851, 50888 | | | | | 17 |
| 4U 1543–47 | hard | 52486, 52490, 52495, 52501 | soft | 52454, 52469 | | | 18 |
| XTE J1550–564 | hard | 51260, 51630, 51652, 51717 | soft | 51210, 51660, 51682 | | | 19, 20 |
| GRO J1655–40 | hard | 53422 | soft | | | | 21 - 23 |
| GX 339–4 | hard | | intermediate | 52406 | | | 23 - 25 |
| GRO J1719–24 | hard | | | | | | 23, 26 - 29 |
| XTE J1720–318 | hard | 52782 | soft | 52659, 52685, 52713 | | | 30 |
| XTE J1859+226 | hard | 51605-8 | soft | | | | 31, 32 |
| GS 2000+25$^1$ | soft | 47360 | unknown | 47482 | | | 33, 34 |
| V404 Cyg | hard | 47684, 47718-27 | soft | 47678 | | | 35 - 38 |

$^1$ This source (alternative name: QZ Vul) is not tabulated in Table 1; its distance and reddening are 2.7$\pm$0.7 kpc and $A_{\rm V}\sim$ 3.5, respectively [@jonket04]. $^2$ This previously unpublished data was obtained with the Liverpool Telescope and the United Kingdom Infrared Telescope. See Brocksopp et al. (in preparation) for the data reduction recipe used. References: (1) [@bartet94]; (2) [-@goraet96]; (3) [-@castet97]; (4) [-@hyneha99]; (5) [@brocet01a]; (6) [-@trevet87]; (7) [-@trevet88]; (8) [-@robeet76]; (9) [-@okeet77]; (10) [@oke77]; (11) [-@kleiet76]; (12) [@chatet03]; (13) this paper (see Brocksopp et al. in preparation); (14) [@kinget96]; (15) [@dellet98]; (16) [@bail92]; (17) [@brocet01b]; (18) [@buxtet04]; (19) [@jainet01a]; (20) [@jainet01b]; (21) [-@buxtet05a]; (22) [@hyneet98]; (23) [@chatet02]; (24) [@corbet02]; (25) [@homaet05]; (26) [@sekiwy93]; (27) [@allejo93]; (28) [@allegi93]; (29) [@brocet04]; (30) [@nagaet03]; (31) [@haswet00]; (32) [@hyneet02a]; (33) [@charet81]; (34) [-@chevil90]; (35) [-@szkoet89]; (36) [-@gehret89]; (37) [@casaet91]; (38) [@hanet92]

In Section 3 we attempt to interpret the relations found between OIR and X-ray luminosities in terms of the most likely dominant emission processes. OIR SEDs were collected from 15 BHXBs over a range of luminosities and X-ray states and are presented in Fig. 3; the dates of observations and references are given in Table 4. The SEDs are interpreted in Section 3.3. In Sections 3.4 and 3.5 we discuss additional patterns, applications and implications of the empirical relations. The results and interpretations are summarised in Section 4.

![image](fig-radio.ps){height="15.5cm"}

Interpretation & Discussion
===========================

Jet Suppression in the Soft State
---------------------------------

The significant drop in some of the OIR data in the soft state for BHXBs compared to the hard state (Fig. 1$b$) is not due to changes in the X-ray luminosity during state transition: although the X-ray spectrum changes significantly during transition, the bolometric (and 2–10 keV) X-ray luminosity does not [e.g. @zhanet97; @kordet06]. The jet component at radio wavelengths is known to decrease/increase in transition to/from the soft X-ray state, due to the quenching of the jets in the soft state [e.g. @gallet03]. In Fig. 4 we split the OIR data from BHXBs into three wavebands: $BVR$-bands, $I$-band and $JHK$-bands, and plot the monochromatic OIR luminosity ($L_{\nu}$; i.e.
flux density scaled for distance) against $L_{\rm X}$, overplotted with the radio data $L_{\rm \nu ,radio}$–$L_{\rm X}$ for BHXBs from [@gallet03; @gallet06]. We see that the normalisations in $L_{\nu}$ (as well as the power-law slopes) for the radio–X-ray and OIR–X-ray hard state correlations are similar to one another for BHXBs: at a given $L_{\rm X}$, the radio and OIR monochromatic luminosities are approximately equal, implying a flat spectrum from radio to OIR wavelengths for all BHXBs in the hard state. We return to the hard state interpretation in Section 3.2. Fig. 4 also shows a clear suppression of all the $JHK$ data in the soft state, and little, if any, suppression of the $BVR$ data. The $I$-band data appear to sit between the two groups. All 9 BHXBs for which we have soft state data are consistent with this behaviour, suggesting it is ubiquitous in BHXBs. We interpret this as the NIR wavebands being quenched as the jet is switched off, as is observed at radio wavelengths. The NIR appears to be quenched at a higher X-ray luminosity than the radio, but this may simply be due to an upper $L_{\rm X}$ limit adopted by [@gallet03] when compiling their radio–X-ray data. The optical wavebands in the soft state lie close to the OIR hard state correlation. Most of the optical data lie below the centre of the hard state correlation, with the exception of V404 Cyg, whose optical luminosity is enhanced in the soft state. It is still debated whether V404 Cyg entered the soft state, and the data here were taken close to the supposed state transition. The $I$-band appears to be the “pivot” point, already shown by [@corbet02] and [@homaet05] to be where the continuum of the optically thin jet meets that of (possibly) the disc. The NIR quenching in the soft state implies that this waveband is dominated by the jets in luminous hard states, just before/after transition to/from the soft state.
The optical data are not quenched, suggesting that a different process dominates at these wavelengths. An alternative interpretation could be that the disc dominates the OIR in both the hard and soft states, but changes temperature during state transition, shifting the blackbody from (e.g.) the OIR in the hard state to the optical–UV in the soft state. This would have the effect of reducing the NIR in the soft state but maintaining the optical, as is observed. We argue that this is not the case because there is evidence in many BHXBs for two spectral components of OIR emission, the redder of which is quenched in the soft state [Section 3.3; @jainet01b; @buxtet04; @homaet05]. The jets should contribute negligible OIR flux in the soft state, so we can estimate the OIR contribution of the jets at high luminosity in the hard state from the level of soft state quenching. The mean offset of the OIR soft state data from the hard state fit is 0.30$\pm$0.32 dex, 0.71$\pm$0.21 dex and 1.10$\pm$0.26 dex in $L_{\rm\nu,OIR}$ for the $BVR$, $I$ and $JHK$-bands, respectively. This corresponds to fractional jet components of 50$^{+26}_{-50}$, 81$^{+7}_{-13}$ and 92$\pm$5 percent in the $BVR$, $I$ and $JHK$-bands, respectively. It is clear that the spectrum of the jet extends, with spectral index $\alpha\sim 0$, from the radio regime to the NIR in BHXBs. The position of the turnover from optically thick to optically thin emission must be close to the NIR waveband for the radio–NIR spectrum to appear flat [unless the optically thick spectrum is highly inverted, which is not seen in the radio spectrum most of the time; see e.g. @fend01]. The OIR spectrum in the hard state is flatter than in the soft state, as is expected if the jet component is present in the former and not in the latter; this is explored further in Section 3.3.
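The fractional jet components above follow directly from the soft-state offsets: if the jet contributes nothing in the soft state, the fraction of the hard-state flux it supplied is $1 - 10^{-\Delta}$, where $\Delta$ is the quenching in dex. A minimal sketch reproducing the central values:

```python
def jet_fraction(offset_dex):
    """Fraction of hard-state flux supplied by the jet, inferred from the
    drop (in dex) of soft-state data below the hard-state correlation,
    assuming the jet contributes nothing in the soft state."""
    return 1.0 - 10.0 ** (-offset_dex)

# Central offsets quoted in the text for the BVR, I and JHK bands:
for band, offset in [("BVR", 0.30), ("I", 0.71), ("JHK", 1.10)]:
    print(band, round(100 * jet_fraction(offset)))  # -> 50, 81, 92 percent
```

Propagating the quoted offset uncertainties through the same expression gives the asymmetric error ranges in the text.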
Modelling Disc and Jet Contributions in the Hard State
------------------------------------------------------

![image](fig-models.ps){height="15.5cm"}

We now attempt to interpret the empirical hard state correlations in terms of the three most cited OIR emission processes: X-ray reprocessing in the disc, the viscously heated disc and jet emission. We adopt the theoretical relation between optical and X-ray luminosities of [@vanpet94] for X-ray reprocessing (Section 1.1). @vanpet94 normalised their relation to a sample of systems, some of which are BHXBs and some NSXBs. Here, we normalise the relation to our sample of NSXBs (the optical $BVRI$-bands only), which is much larger than the sample of @vanpet94. There is more concrete evidence that X-ray reprocessing dominates the optical regime in NSXBs than in BHXBs (see Section 1). We also take into account a dependency of the relation on system mass that was neglected by @vanpet94, but is needed here to compare NSXBs with BHXBs. From Kepler’s third law and equation 5 of @vanpet94 (and the discussion that follows), we have: $$\begin{aligned} L_{\rm OPT}\propto L_{\rm X}^{1/2}a \propto L_{\rm X}^{1/2}(M_{\rm co}+M_{\rm cs})^{1/3}P^{2/3}\end{aligned}$$ In the left panel of Fig. 5 we plot the monochromatic luminosity (optical and NIR) versus $L_{\rm X}^{1/2}a$ for each BHXB and NSXB data point in the hard state (adopting the values of $P$, $M_{\rm co}$ and $M_{\rm cs}$ for each source from Tables 1 and 2). The solid line represents the power-law fit to the optical NSXB data, fixing the slope at unity ($L_{\rm OPT}\propto L_{\rm X}^{1/2}a$). We expect a lower normalisation for the NIR data due to the shape of the OIR spectrum, which may have a spectral index $0.5\leq\alpha\leq2.0$ [a conservative range based on theoretical and empirical results of X-ray reprocessing; see e.g. @hyne05].
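The orbital separation $a$ entering the reprocessing relation above follows from Kepler's third law, $a = (G(M_{\rm co}+M_{\rm cs})P^2/4\pi^2)^{1/3}$. A minimal sketch (the masses and period below are illustrative values for a V404 Cyg-like system, not taken from Tables 1 and 2):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
DAY = 86400.0       # seconds per day

def separation(m_co_msun, m_cs_msun, period_days):
    """Orbital separation a = (G(M_co+M_cs)P^2 / 4 pi^2)^(1/3), in metres."""
    m_tot = (m_co_msun + m_cs_msun) * M_SUN
    p = period_days * DAY
    return (G * m_tot * p**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# Illustrative V404 Cyg-like values: 12 + 0.7 Msun, P = 6.47 d
a = separation(12.0, 0.7, 6.47)
print(a)  # roughly 2.4e10 m, i.e. a few tens of solar radii
# The reprocessing model then scales as L_OPT ∝ L_X**0.5 * a
```

The weak $(M_{\rm co}+M_{\rm cs})^{1/3}$ dependence is the mass term that @vanpet94 neglected but that matters when comparing NSXBs with BHXBs.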
The upper and lower dotted lines indicate the expected correlations of the NIR data (approximating the optical to the $V$-band centred at 550 nm and the NIR to the $H$-band at 1660 nm) if $\alpha =0.5$ and $2.0$, respectively. The dashed line shows the fit to the optical BHXB data (fixing the slope at $L_{\rm OPT}\propto L_{\rm X}^{1/2}a$), which appears to be elevated by $\sim 1$ order of magnitude in $L_{\rm \nu,OIR}$ compared to the optical NSXB data. Similarly, the NIR data of both BHXBs and NSXBs lie above the expected correlation for emission from X-ray reprocessing. The implications of these fits are discussed below. For the jets, we take the models described in Section 1.1 where the spectrum is flat from radio to OIR, and normalise them using the empirical $L_{\rm radio}$–$L_{\rm X}$ relations found by [@gallet03] and [@miglfe06] for BHXBs and NSXBs, respectively. In the right panel of Fig. 5 we plot $L_{\rm \nu ,OIR}$ versus $L_{\rm X}$ for hard state BHXBs and NSXBs. The same relation is expected between optical and NIR jet emission when plotting monochromatic luminosity because we are assuming the jet spectrum is flat; $\alpha \sim 0$. We find that the jet models can approximately describe the optical and NIR data of the BHXBs. The NIR data from NSXBs lie above the expected jet correlation but possess a similar slope, whereas the slope of the optical NSXB data is very different to that predicted from jet emission. The models for OIR emission from a viscously heated disc are described as follows. For the simplest viscously heated steady-state disc, there are two limiting regimes [@franet02]. For $h\nu \ll kT$, we expect $L_{\rm OIR}\propto \dot{m}^{1/4}$. This is simply the Rayleigh–Jeans limit and will only apply well into the IR. For $h\nu \gg kT$, the relationship is steeper, $L_{\rm OIR}\propto \dot{m}^{2/3}$.
For typical disc edge temperatures of 8,000–12,000 K, the expected power-law slopes ($L_{\rm OIR}\propto \dot{m}^{\gamma}$) are calculated to vary from $\gamma\sim0.3$ in the $K$-band, to $\gamma\sim0.5$ in $V$, and $\gamma\sim0.6$ in the UV. Using the calculations linking $L_{\rm X}$ and $\dot{m}$ in Section 1.1, this corresponds to expected correlations of the form $L_{\rm OIR}\propto L_{\rm X}^{\beta}$, where 0.15$\le\beta\le$0.25 for BHXBs and 0.30$\le\beta\le$0.50 for NSXBs.

| Sample | X-ray reprocessing model | $\mid \beta_{\rm data}-\beta_{\rm model}\mid$ | $n_{\rm data}/n_{\rm model}$ | Jet model | $\mid \beta_{\rm data}-\beta_{\rm model}\mid$ | $n_{\rm data}/n_{\rm model}$ | Viscous disc model | $\mid \beta_{\rm data}-\beta_{\rm model}\mid$ |
|---|---|---|---|---|---|---|---|---|
| BHs: $L_{\rm \nu,OPT}$ | $n L_{\rm X}^{0.5}a$ | 0.05$\pm$0.03 | 9.3$\pm$0.4 | $L_{\rm X}^{0.7}$ | 0.11$\pm$0.02 | 1.05$\pm$0.07 | $L_{\rm X}^{0.25}$ | 0.34$\pm$0.02 |
| BHs: $L_{\rm \nu,NIR}$ | $(\frac{\nu_{NIR}}{\nu_{OPT}})^\alpha n L_{\rm X}^{0.5}a$ | 0.06$\pm$0.03 | 15.5–81.3$^\dagger$ | $L_{\rm X}^{0.7}$ | 0.09$\pm$0.04 | 1.78$\pm$0.16 | $L_{\rm X}^{0.17}$ | 0.44$\pm$0.04 |
| NSs: $L_{\rm \nu,OPT}$ | $n L_{\rm X}^{0.5}a$ | 0.09$\pm$0.02 | 1.0$^\ast$ | $L_{\rm X}^{1.4}$ | 0.81$\pm$0.03 | 6.03$\pm$1.94 | $L_{\rm X}^{0.50}$ | 0.09$\pm$0.03 |
| NSs: $L_{\rm \nu,NIR}$ | $(\frac{\nu_{NIR}}{\nu_{OPT}})^\alpha n L_{\rm X}^{0.5}a$ | 0.05$\pm$0.03 | 3.2–16.6$^\dagger$ | $L_{\rm X}^{1.4}$ | 0.09$\pm$0.41 | 9.55$\pm$3.08 | $L_{\rm X}^{0.30}$ | 1.19$\pm$0.41 |

For $\mid \beta_{\rm data}-\beta_{\rm model}\mid$, $\beta$ and $n$ are free parameters; for $n_{\rm data}/n_{\rm model}$, $\beta$ is fixed at the value of the model and $n$ is a free parameter; $^\ast$ $n$ is defined by the fit to the optical NSXB data (see text); $^\dagger$ the range corresponds to an OIR spectral index $0.5\leq\alpha\leq 2.0$.
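The conversion from the disc slopes $\gamma$ to the observable slopes $\beta$ quoted in the text can be sketched as follows, assuming, as Section 1.1 implies, $L_{\rm X}\propto \dot{m}^{2}$ for (radiatively inefficient) BHXBs and $L_{\rm X}\propto \dot{m}$ for NSXBs:

```python
# Exponents q in L_X ∝ mdot**q: q = 2 for BHXBs (radiatively inefficient
# accretion) and q = 1 for NSXBs, so L_OIR ∝ mdot**gamma ∝ L_X**(gamma/q).
Q = {"BHXB": 2.0, "NSXB": 1.0}

def beta_from_gamma(gamma, accretor):
    """Predicted OIR-X-ray slope beta for a viscous-disc slope gamma."""
    return gamma / Q[accretor]

# gamma ~0.3 (K-band) to ~0.5 (V-band) for an 8,000-12,000 K disc edge:
for acc in ("BHXB", "NSXB"):
    print(acc, beta_from_gamma(0.3, acc), beta_from_gamma(0.5, acc))
# BHXBs -> beta = 0.15-0.25; NSXBs -> beta = 0.3-0.5
```

These are the $\beta$ ranges quoted above for the viscously heated disc model.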
A summary of the results of fitting these models to the observed data is provided in Table 5. It is clear that for BHXBs, the slopes $\beta$ of the observed relations can be explained by the X-ray reprocessing model or the jet model, but not by the viscous disc model (Homan et al. 2005 also ruled out a viscous disc origin for the OIR emission in the hard state of GX 339–4 because the mass accretion rate inferred from the luminosity is much higher than expected). This is also true for the NIR data of the NSXB sample; however, the slope of the NSXB optical data cannot be explained by the jet model and is accurately described by both the reprocessing and viscous models. At high luminosities in the NSXB sample, no OIR data points lie far below the jet relation, suggesting that this process may indeed play a significant OIR role here [as is seen in a few NSXBs; see @miglet06 and references therein]. The normalisation $n$ of the BHXB data is closer to the jet model than to the reprocessing model. Although this is consistent with the constraints derived in Section 3.1 (the jets are contributing $\sim90$ percent of the NIR luminosity here), an optical excess of $\sim$1 order of magnitude over the reprocessing model is not expected. The excess is unlikely to be fully explained by the jets because of the lack of optical quenching in the soft state (Section 3.1). In addition, the optical spectrum of most BHXBs is inconsistent with jet emission dominating (Section 3.3). Instead we suggest that OIR emission from reprocessing is enhanced for BHXBs at a given $L_{\rm X}$ due to the localisation of the source of X-rays. For example, the X-ray emitting region in BHXBs may have a larger scale height than in NSXBs and will therefore illuminate the disc more readily [e.g. @minifa04]. This may account for the high value of $n$ for the optical BHXB data, but still struggles to explain the even higher $n$ for the NIR BHXB data.
We note that there are deviations of individual sources from the correlations which may be caused by distance and reddening errors, or by differing emission process contributions due to the range of orbital separations or slight differences in the slope of the radio–OIR jet spectrum between sources (or other system parameters not considered). For example, XTE J1118+480 has a small disc and is known to produce significant optical jet emission [e.g. @malzet04; @hyneet06], whereas V404 Cyg possesses a large disc and is dominated by X-ray reprocessing in the disc [e.g. @wagnet91]. Other OIR emission mechanisms, for example disc OIR emission due to magnetic reconnection, have not been modelled here and could also contribute to the scatter in Fig. 5. These processes cannot be ruled out, but are unlikely to easily explain the observed correlations.

![Mean normalisation $n$ of each source, fixing the slope at $L_{\rm OIR}\propto L_{\rm X}^{0.6}$, versus orbital inclination.](fig-inclin.ps){width="6cm"}

In addition, the orbital inclination may affect the level of OIR emission, in particular from X-ray reprocessing in the disc. To explore this, we have plotted in Fig. 6 the average normalisation $n$ of the data for each source against the estimated orbital inclination, fixing the slope at $L_{\rm OIR}\propto L_{\rm X}^{0.6}$ for each source. The best estimate of the inclination of each system (where known) was obtained from [@rittet03], [@trevet88], [-@fomaet01], [@wanget01] and [@falaet05]. We find no evidence for a direct relation between $L_{\rm OIR}$ and the orbital inclination $i$, suggesting that the effect of inclination is subtle.

The Spectral Energy Distributions
---------------------------------

Although no clear patterns are visible from the SEDs in Fig. 3 on first inspection, closer analysis reveals support for many of the conclusions made so far.
Optically thin synchrotron emission is expected to produce an OIR spectral index $\alpha<$ 0, and this is seen in part of the SEDs of 10 out of 14 BHXBs in the hard state. In the soft state only 3 out of 10 BHXBs are observed to have $\alpha<$ 0, suggesting that a synchrotron component plays a larger role in the hard state than in the soft [$\alpha$ in some sources could be dominated by uncertainties in $A_{\rm V}$; e.g. the SEDs of XTE J1550–564 are red even in the soft state, which is likely due to an underestimated extinction; see @jonket04]. In comparison, $\alpha>$ 0 is seen in the SEDs of 10/14 BHXBs in the hard state and 10/10 in the soft state. These spectra are likely to have a thermal origin and agree with recent analysis of optical/UV SEDs of 6 BHXBs [@hyne05]. The SEDs generally appear redder in the hard state than in the soft, consistent with the NIR suppression observed in the soft state (Section 3.1). A clear suppression of the NIR, and not the optical bands, in the soft state is visible in the SEDs of GX 339–4, XTE J1550–564 and 4U 1543–47 [@homaet05; @jainet01b; @buxtet04]. In contrast, the value of $\alpha$ in XTE J1720–318 and XTE J1859+226 appears not to change between the hard and soft states. The SEDs show no substantial evidence for the turnover in the jet spectrum from optically thick ($\alpha\sim$ 0) to optically thin ($\alpha\sim-$0.6) synchrotron emission, as we may expect to see in the hard state [except in GX 339–4; see @corbet02]. This is consistent with the turnover lying close to the NIR (Section 3.1), possibly just redward of the $K$-band. In some systems, $\alpha$ is more negative at low luminosities. This effect could be the result of (a) a cooler accretion disc, or (b) the jets contributing more than the disc at low luminosities. Since the former process cannot explain the steeply negative SEDs of a few BHXBs at low luminosities, we suspect both processes may play a role.
Finally, the hard state NIR excess seen in GX 339–4 (@corbet02 [@homaet05]; interpreted as where the optically thin jet spectrum meets the blue thermal spectrum) is here also seen in LMC X–3 (tentatively), 4U 1543–47, XTE J1550–564 and V404 Cyg. In these sources, we interpret the NIR excess as originating in the optically thin part of the jet spectrum. It is interesting to note that OIR SEDs of a number of NSXBs show an IR excess that cannot be explained by thermal emission, and is likely to originate in the jets [@miglet06 and references therein].

The Rise and Fall of an Outburst
--------------------------------

During a transient outburst typical of BHXBs, the source will either remain in the hard state for the entire outburst [e.g. XTE J1118+480; GRO J1719–24; see also @brocet04] or make a transition into the soft (or intermediate or very high) state, before returning to the hard state and declining in luminosity. A hysteresis effect has been identified in BHXB transients, and in at least one NSXB transient: the hard to soft state transition always occurs at a higher X-ray luminosity than the transition back to the hard state (@maccco03 [@maccet03; @yuet03]; @fendet04 [@homabe05]). Here, we present a prediction of the OIR behaviour of an outburst that results from this hysteresis effect and the models in Section 3.2. The following sequence of events should occur for a BHXB outburst that enters the soft state:

- $L_{\rm OIR}$ and $L_{\rm X}$ increase in the hard state rise of the outburst, with jets and reprocessing contributing to the OIR.

- The source enters the soft state, quenching the jet component. $L_{\rm OIR}$ drops but $L_{\rm X}$ is maintained.

- $L_{\rm OIR}$ and $L_{\rm X}$ decrease before transition back to the hard state, when the jet component returns.

- $L_{\rm OIR}$ and $L_{\rm X}$ continue to decline, with jets and X-ray reprocessing contributing towards $L_{\rm OIR}$.
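The sequence above can be sketched as a toy $L_{\rm OIR}$–$L_{\rm X}$ track. All normalisations and slopes below are arbitrary illustrative values, not fits from this paper; the only ingredients taken from the text are the hysteresis luminosities and the quenching of the jet term outside the hard state:

```python
def loir(lx, state, jet_norm=1e13, repro_norm=3e12):
    """Toy hard/soft OIR luminosity: a jet term present only in the hard
    state plus a reprocessing term present in both (arbitrary values)."""
    jet = jet_norm * lx**0.6 if state == "hard" else 0.0
    reprocessing = repro_norm * lx**0.5
    return jet + reprocessing

def outburst_track():
    """(L_X, L_OIR) points tracing the loop of Fig. 7: hard rise to
    ~1e38 erg/s, soft excursion, return to hard at ~1e37 erg/s."""
    track = []
    for lx in (1e35, 1e36, 1e37, 1e38):            # hard-state rise
        track.append((lx, loir(lx, "hard")))
    for lx in (10**38.2, 1e38, 10**37.5, 1e37):    # soft state: jet quenched
        track.append((lx, loir(lx, "soft")))
    for lx in (1e37, 1e36, 1e35):                  # hard-state decline
        track.append((lx, loir(lx, "hard")))
    return track

print(len(outburst_track()))  # -> 11 points around the loop
```

At the soft-to-hard transition luminosity the track revisits the same $L_{\rm X}$ at two different $L_{\rm OIR}$ values, which is the loop seen in the data of Fig. 7.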
![Top panel: Schematic of the expected $L_{\rm OIR}$–$L_{\rm X}$ behaviour for a BHXB outburst that enters the soft state. Lower panel: As the top panel but with OIR data from the rise, soft state and decline (for sources that entered the soft state) stages of outbursts.](fig-minird.ps){width="8.48cm"}

This sequence is illustrated in the top panel of Fig. 7. In this schematic, we fix the hard-to-soft state transition at $L_{\rm X}\sim 10^{38}$ erg s$^{-1}$ and the soft-to-hard transition at $L_{\rm X}\sim 10^{37}$ erg s$^{-1}$. To test our hysteresis prediction, we split the hard state BHXB data into data from the rise of outbursts and data from the declines. We define rise and decline as before and after the peak X-ray (2–10 keV) luminosity of the outburst, respectively. We do not include data on the decline for sources that remained in the hard state throughout the outburst, as this is not what our prediction is testing. In the lower panel of Fig. 7 we plot these and the soft state data. Data from persistent sources (LMC X–3 and GRS 1915+105) and from sources in or near quiescence are not included. We find that the prediction is consistent with the data, with some inevitable scatter from errors as described in Section 3.2. The loop is larger in the NIR data, as expected because the jets contribute more in the NIR than in the optical regime. Other reasons for any deviations from the expected models are also discussed in Section 3.2.

Applications of the Correlations
--------------------------------

The existence of the OIR–X-ray correlations leads to a number of intriguing tools and uses for quasi-simultaneous multi-wavelength data. **The Mass Accretion Rate:** In Section 1.1 we show how $L_{\rm X}$ and $\dot{m}$ are thought to be linked for BHXBs and NSXBs.
Here, we can use the empirical hard state $L_{\rm OIR}$–$L_{\rm X}$ correlations to link the OIR luminosity to the mass accretion rate:

BHXBs: $L_{\rm OIR}\propto L_{\rm X}^{0.6}\propto \dot{m}^{1.2}$

NSXBs: $L_{\rm OIR}\propto L_{\rm X}^{0.6}\propto \dot{m}^{0.6}$

Essentially, $L_{\rm OIR}$ for BHXBs and NSXBs responds to $L_{\rm X}$ in the same way, but not to $\dot{m}$, since $L_{\rm X}$ varies with $\dot{m}$ differently for BHXBs and NSXBs. From equations (1) and (7) of [@kordet06] we can estimate accretion rates directly from $L_{\rm OIR}$ in the hard state:

BHXBs: $L_{\rm OIR}/{\rm erg~s^{-1}}\approx 5.3\times 10^{13}\,(\dot{m}/{\rm g~s^{-1}})^{1.2}$, or $\dot{m}/{\rm g~s^{-1}}\approx 3.7\times 10^{-12}\,(L_{\rm OIR}/{\rm erg~s^{-1}})^{0.8}$

NSXBs: $L_{\rm OIR}/{\rm erg~s^{-1}}\approx 3.2\times 10^{23}\,(\dot{m}/{\rm g~s^{-1}})^{0.6}$, or $\dot{m}/{\rm g~s^{-1}}\approx 6.7\times 10^{-40}\,(L_{\rm OIR}/{\rm erg~s^{-1}})^{1.7}$

Given the level of the scatter in the correlations, we expect these estimates to be accurate to $\sim$ one order of magnitude. The OIR luminosity is all that is required to estimate $\dot{m}$ for hard state objects; however, $\dot{m}$ would be more accurately measured from $L_{\rm X}$, where most of the energy is usually released. In addition, it is possible to estimate the X-ray, OIR and radio luminosities in the hard state quasi-simultaneously, given the value of just one, because they are all linked through correlations. **Parameters of an X-ray Binary:** Quasi-simultaneous OIR and X-ray luminosities can constrain the nature of the compact object (BHXB or NSXB), the mass of the companion (HMXB or LMXB) and the distance and reddening to an X-ray binary. This is possible because the data from BHXBs (in the hard and soft states), NSXBs and HMXBs lie in different areas of the $L_{\rm X}$–$L_{\rm OIR}$ diagram, with some areas of overlap.
If the distance and reddening towards a source are known (not necessarily at a high level of accuracy), its quasi-simultaneous OIR and X-ray fluxes could reveal the source to be any one of the above types of XB. In addition, if the nature of the compact object is known (BH or NS), but the distance and/or reddening is not, the fluxes can constrain these parameters. We stress that the total errors associated with the data (top left error bars in each panel of Figs 1 and 2) need to be considered to define the areas of overlap. Current techniques used to infer the nature of the compact object in XBs include X-ray timing analysis (e.g. thermonuclear instabilities on accreting neutron stars produce Type I X-ray bursts), the X-ray spectrum [the well-known X-ray states of BHXBs and tracks in the colour–colour diagrams of NSXBs; e.g. @mcclet06] and optical timing analysis in quiescence (the orbital period and radial velocity amplitude constrain the mass function). This new tool has the power to constrain the nature of the compact object requiring only $L_{\rm OIR}$, $L_{\rm X}$ and, at high luminosities, the X-ray state of the source at the time of observations. This tool may have many applications. X-ray all-sky monitors such as the *RXTE ASM* are continuously discovering new XBs which are subsequently identified at optical wavelengths. In addition, campaigns are underway to find extragalactic XBs, many of which have optical counterparts discovered with the $HST$ [e.g. @willet05]. It would also be interesting to see where ULXs and SMBHs [e.g. the X-ray and NIR flares seen from Sagittarius A\*; @yuanet04] lie in the $L_{\rm OIR}$–$L_{\rm X}$ diagram with the inclusion of a mass term, and whether this could be used to constrain the BH mass in these systems.
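Both applications above reduce to simple manipulations of the hard-state fits. A minimal sketch, using the fit coefficients quoted in Section 2 and the accretion-rate relations of this Section (a real application would also need the total error bars and the overlap regions discussed in the text):

```python
import math

# Hard-state fits from the text: log10(L_OIR) = logn + beta * log10(L_X)
HARD_STATE_FITS = {"BHXB": (13.2, 0.60), "NSXB": (9.7, 0.66)}

# L_OIR(mdot) relations quoted above (cgs units: erg/s and g/s)
MDOT_RELATIONS = {"BHXB": (5.3e13, 1.2), "NSXB": (3.2e23, 0.6)}

def mdot_from_loir(loir, accretor="BHXB"):
    """Accretion rate (g/s) from hard-state L_OIR (erg/s); ~1 dex accuracy."""
    norm, exp = MDOT_RELATIONS[accretor]
    return (loir / norm) ** (1.0 / exp)

def classify_hard_state(lx, loir):
    """Return the class whose hard-state relation lies closest (in dex)
    to the observed (L_X, L_OIR) point, plus the offsets for inspection."""
    offsets = {cls: abs(math.log10(loir) - (logn + beta * math.log10(lx)))
               for cls, (logn, beta) in HARD_STATE_FITS.items()}
    return min(offsets, key=offsets.get), offsets

# A point lying exactly on the BHXB relation at L_X = 1e36 erg/s:
cls, offsets = classify_hard_state(1e36, 10 ** (13.2 + 0.60 * 36))
print(cls)                          # -> BHXB (NSXB relation is ~1.3 dex away)
print(mdot_from_loir(10 ** 34.8))   # a few times 1e17 g/s
```

The same vertical-offset logic extends naturally to HMXBs, whose companion-dominated OIR luminosities occupy a separate region of the diagram.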
**The Power of the Jets:** The evidence in this paper supports that in the literature for a flat jet spectrum from radio to NIR wavelengths in the hard state, with the optically thick–optically thin turnover close to the $K$-band (see Sections 3.1 and 3.3). The power of the jets is sensitive to the position of the turnover since it is dominated by the higher energy photons. However, the most recent calculations of the jet power [e.g. @gallet05; @heingr05; @miglfe06] already assume the jet spectrum extends to the IR, so no extra calculations are required from this work.

Conclusions
===========

We have collected a wealth of OIR and X-ray fluxes from 33 XBs in order to identify the mechanisms responsible for the OIR emission. A strong correlation between quasi-simultaneous OIR and X-ray luminosities has been discovered for BHXBs in the hard state: $L_{\rm OIR}\propto L_{\rm X}^{0.61\pm0.02}$. This correlation holds over 8 orders of magnitude in $L_{\rm X}$ and includes data from BHXBs in quiescence and at large distances (LMC and M31), which were [until recently; see @gallet06] unattainable for radio–X-ray correlations in XBs. All the NIR (and some of the optical) BHXB luminosities are suppressed in the soft state; a behaviour indicative of synchrotron emission from the jets at high luminosities in the hard state. A similar correlation is found for NSXBs in the hard state: $L_{\rm OIR}\propto L_{\rm X}^{0.63\pm0.04}$, which holds over 7 orders of magnitude in $L_{\rm X}$. At a given X-ray luminosity, an NSXB is typically 20 times fainter in OIR than a BHXB. Comparing the hard state OIR data to the radio data of [@gallet03], we find that the radio–OIR jet spectrum in BHXBs is $\sim$ flat ($F_{\nu}=$ constant) at a given $L_{\rm X}$ [see also @fend01]. From this and OIR SEDs of 15 BHXBs, which show an IR excess in a number of systems, we deduce that the optically thick–optically thin turnover in the jet spectrum is likely to be close to the $K$-band for BHXBs.
In comparison, the turnover probably lies further into the IR for NSXBs [see @miglet06]. By comparing the observed OIR–X-ray relations with those expected from models of a number of emission processes, we are able to constrain the mean OIR contributions of these processes for XBs. Table 6 summarises the results. We find from the level of soft state quenching in BHXBs that the jets are contributing $\sim$90 percent of the NIR emission at high luminosities in the hard state. The optical BHXB data could have a jet contribution between zero and 76 percent, but the optical SEDs show a thermal spectrum, indicating that X-ray reprocessing in the disc dominates in this regime. In BHXBs, ambiguity arises from the fact that the slopes of the expected OIR–X-ray relations from the jets and from X-ray reprocessing are essentially indistinguishable. Emission from the viscously heated disc may contribute at low luminosities in BHXBs, but cannot account for the observed correlations. In the NSXBs, the correlations can be explained by the X-ray reprocessing model alone (in agreement with current thinking), but the jets may play a role at high luminosities, especially in the NIR. The exact contributions of the emission processes are likely to be sensitive to many individual parameters, such as the size of the accretion disc and the shape of the jet spectrum (e.g. the OIR luminosity of the jets is very sensitive to the slope of the optically thick jet spectrum).
| Sample | X-ray state | X-ray reprocessing | Jet emission | Viscous disc | Intrinsic companion |
|---|---|---|---|---|---|
| NSXBs; OPT | hard | $\surd$ | $\times$ | $\surd$ | $\times$ |
| NSXBs; NIR | hard | $\surd$ | $\surd$ | $\times$ | $\times$ |
| BHXBs; OPT | hard | $\surd$ | $\surd$ | $\times$ | $\times$ |
| BHXBs; NIR | hard | $\times$ | $\surd$ | $\times$ | $\times$ |
| BHXBs; OIR | soft | $\surd$ | $\times$ | $\surd$ | $\times$ |
| HMXBs; OIR | all | $\times$ | $\times$ | $\times$ | $\surd$ |

: The OIR emission processes that can describe the empirical OIR–X-ray relations and SEDs.

The SEDs show a non-thermal component ($\alpha < 0$) in most sources in the hard state, and thermal emission ($\alpha > 0$) is present, probably in all sources, in both hard and soft states. The soft state OIR emission could originate from a combination of X-ray reprocessing (as the OIR–X-ray relations suggest) and the viscous disc [e.g. @homaet05]. The SEDs of many BHXBs are redder at low luminosities, which seems to be due both to a cooler disc blackbody and to a higher fractional contribution from the jets. BHXBs may be jet-dominated at low luminosities (for more on jet-dominated states, see @fend01 [@falcet04]; @fendet04 [@gallet05; @kordet06]). We have also made a prediction of the ‘$L_{\rm OIR}$–$L_{\rm X}$ path’ of a BHXB for a typical outburst, based on a hysteresis effect whereby the hard state rise reaches a higher luminosity than the hard state decline, if the source enters the soft state. The data obtained in this paper appear to agree with the prediction, with some inevitable scatter. Since the X-ray, OIR and radio luminosities and the mass accretion rate are all linked through correlations in the hard state, it is possible to estimate the quasi-simultaneous values of all of these parameters, given the value of just one, by e.g. daily monitoring of the X-ray or OIR fluxes (which is currently being done for some sources by the *RXTE ASM* and by ground-based telescopes).
In addition, we have discovered a potentially powerful tool to complement current techniques, which can constrain the nature of the compact object, the mass of the companion and the distance/reddening towards an XB, given only the quasi-simultaneous X-ray and OIR luminosities. Data from BHXBs (in the hard and soft states), NSXBs and HMXBs lie in different areas of the $L_{\rm X}$–$L_{\rm OIR}$ diagram, with small areas of overlap. The tool is most useful for e.g. faint sources with poor timing analysis or at large distances, i.e. extragalactic XBs and new transients. Further work that could constrain the emission process contributions includes identifying linear polarimetry perpendicular to the jet axis (a diagnostic of optically thin synchrotron emission), emission line equivalent width analysis [in particular testing for the Baldwin effect; e.g. @mushfe84] and analysing $L_{\rm OIR}$–$L_{\rm X}$ relations in the intermediate and very high X-ray states and in ULXs and SMBHs.

*Acknowledgements*. We would like to thank Erik Kuulkers for providing the A0620-00 data [@kuul98] and Elena Gallo for the radio and X-ray data of [@gallet03]. We thank the referee for many useful comments that significantly improved this work. Results provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF at NASA’s GSFC. This paper uses data taken with UKIRT and the Liverpool Telescope, the latter of which is funded via EU, PPARC and JMU grants.

References
==========

Allen W., Gilmore A. C., 1993, IAUC, 5784 Allen W. H., Jones A. F., 1993, IAUC, 5774 Bailyn C. D., 1992, ApJ, 391, 298 Bartolini C., Guarnieri A., Piccioni A., Beskin G. M., Neizvestny S. I., 1994, ApJS, 92, 455 Blair W. P., Raymond J. C., Dupree A. K., Wu C.-C., Holm A. V., Swank J. H., 1984, ApJ, 278, 270 Blandford R. D., Konigl A., 1979, ApJ, 232, 34 Bochkarev N. G., Lyutyi V. M., 1998, AstL, 24, 277 Boyd, P. T., Smale, A. P., Homan, J., Jonker, P. G., van der Klis, M., Kuulkers, E.
2000, ApJ, 542, L127 Brandt, S., Castro-Tirado, A. J., Lund, N., Dremin, V., Lapshov, I., Syunyaev, R. 1992, A&A, 262, L15 Brocksopp C., et al., 1999a, MNRAS, 309, 1063 Brocksopp C., Tarasov A. E., Lyuty V. M., Roche P., 1999b, A&A, 343, 861 Brocksopp C., Groot P. J., Wilms J., 2001a, MNRAS, 328, 139 Brocksopp C., Jonker P. G., Fender R. P., Groot P. J., van der Klis M., Tingay S. J., 2001b, MNRAS, 323, 517 Brocksopp C., Bandyopadhyay R. M., Fender R. P. 2004, NewA, 9, 249 Burderi L., Di Salvo T., D’Antona F., Robba N. R., Testa V., 2003, A&A, 404, L43 Buxton M. M., Bailyn C. D., 2004, ApJ, 615, 880 Buxton M., Bailyn C., Maitra D., 2005a, ATel, 418 Buxton M., Bailyn C., Maitra D., 2005b, ATel, 542 Cadolle Bel M., et al., 2004, A&A, 426, 659 Callanan P. J., et al., 1995, ApJ, 441, 786 Campana S., et al., 2002, ApJ, 575, L15 Campana S., Stella L., 2003, ApJ, 597, 474 Campana S., Stella L., 2004, NuPhS, 132, 427 Campana S., Israel G. L., Stella L., Gastaldello F., Mereghetti S., 2004, ApJ, 601, 474 Campana S., Stella L., Israel G. L., Belloni T., Pagani C., Burrows D. N., Gehrels N., 2005, ATel, 529 Canizares C. R., McClintock J. E., Grindlay J. E., 1980, ApJ, 236, L55 Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245 Casares J., Charles P. A., Jones D. H. P., Rutten R. G. M., Callanan P. J., 1991, MNRAS, 250, 712 Casares J., Charles P. A., Naylor T., Pavlenko E. P., 1993, MNRAS, 265, 834 Casares J., Zurita C., Shahbaz T., Charles P. A., Fender R. P., 2004, ApJ, 613, L133 Castro-Tirado A. J., Ortiz J. L., Gallego, J., 1997, A&A, 322, 507 Chapuis C., Corbel S., 2004, A&A, 414, 659 Charles P. A., et al., 1980, ApJ, 237, 154 Charles P. A., Kidger M. R., Pavlenko E. P., Prokof’eva V. V., Callanan P. J., 1991, MNRAS, 249, 567 Charles P. A., Coe M. J., 2006, in Compact Stellar X-Ray Sources, eds. Lewin W. H. G., van der Klis M., Cambridge University Press, p. 215 Chaty S., Mirabel I. F., Goldoni P., Mereghetti S., Duc P.-A., Mart$\acute{i}$, J., Mignani R. 
P., 2002, MNRAS, 331, 1065 Chaty S., Haswell C. A., Malzac J., Hynes R. I., Shrader C. R., Cui W., 2003, MNRAS, 346, 689 Chen W., Shrader C. R., Livio M., 1997, ApJ, 491, 312 Chevalier C., Ilovaisky S. A., 1990, A&A, 238, 163 Chevalier C., Ilovaisky S. A., van Paradijs J., Pedersen H., van der Klis M., 1989, A&A, 210, 114 Chevalier C., Ilovaisky S. A., Leisy P., Patat F., 1999, A&A, 347, L51 Clark G., Doxsey R., Li F., Jernigan J. G., van Paradijs J., 1978, ApJ, 221, L37 Clark J. S., et al., 2000, A&A, 356, 50 Cole A. A., 1998, ApJ, 500, L137 Corbel S., Fender R. P., 2002, ApJ, 573, L35 Corbel S., Nowak M. A., Fender R. P., Tzioumis A. K., Markoff, S. 2003, A&A, 400, 1007 Cowley A. P., Schmidtke P. C., 2004, AJ, 128, 709 Cowley A. P., Crampton D., Hutchings J. B., Remillard R., Penfold J. E., 1983, ApJ, 272, 118 Cunningham C., 1976, ApJ, 208, 534 della Valle M., Mirabel I. F., Rodriguez L. F., 1994, A&A, 290, 803 della Valle M., Masetti N., Bianchini A., 1998, A&A, 329, 606 Dickey J. M., Lockman F. J., 1990, ARA&A, 28, 215 Ebisawa K., et al., 1994, PASJ, 46, 375 Esin A. A., McClintock J. E., Narayan R., 1997, ApJ, 489, 865 Esin A. A., Kuulkers E., McClintock J. E., Narayan R., 2000, ApJ, 532, 1069 Falanga M., et al., 2005, A&A, 444, 15 Falcke H., Biermann P. L., 1996, A&A, 308, 321 Falcke H., Körding E., Markoff S., 2004, A&A, 414, 895 Fender R. P., 2001, MNRAS, 322, 31 Fender R. P., 2006, in Compact Stellar X-Ray Sources, eds. Lewin W. H. G., van der Klis M., Cambridge University Press, p. 381 Fender R. P., Pooley G. G., 2000, MNRAS, 318, L1 Fender R. P., Gallo E., Jonker P. G., 2003, MNRAS, 343, L99 Fender R. P., Belloni T. M., Gallo E., 2004, MNRAS, 355, 1105 Fomalont E. B., Geldzahler B. J., Bradshaw C. F., 2001, ApJ, 558, 283 Frank J., King A., Raine D. J., 2002, Accretion Power in Astrophysics, 3rd edn. Cambridge Univ. Press, Cambridge Galloway D. K., Markwardt C. B., Morgan E. H., Chakrabarty D., Strohmayer T., E., 2005, ApJ, 622, L45 Gallo E., Fender R. 
P., Pooley G. G., 2003, MNRAS, 344, 60 Gallo E., Fender R. P., Kaiser C., Russell, D. M., Morganti R., Oosterloo T., Heinz S., 2005, Nature, 436, 819 Gallo E., et al., 2006, MNRAS, in press (astro-ph/0605376) Garcia M. R., McClintock J. E., Narayan R., Callanan P., Barret D., Murray S. S., 2001, ApJ, 553, L47 Gehrz R. D., Johnson J., Harrison T., 1989, IAUC, 4816 Gelino D. M., Harrison T. E., Orosz J. A., 2001, AJ, 122, 2668 Gelino D. M., Harrison T. E., 2003, ApJ, 599, 1254 Goranskii V. P., Karitskaya E. A., Kurochkin N. E., Trunkovskii E. M., 1996, AstL, 22, 371 Greene J., Bailyn C. D., Orosz J. A., 2001, ApJ, 554, 1290 Grindlay J. E., Liller W., 1978, ApJ, 220, L127 Haggard D., et al., 2004, ApJ, 613, 512 Hameury J.-M., et al., 2003, A&A, 399, 631 Han X., Hjellming R. M., 1992, ApJ, 400, 304 Hasinger G., van der Klis M., 1989, A&A, 225, 79 Haswell C. A., Chaty S., Cui W., Casares J. V., Hynes R. I., 2000, ATel, 55 Haswell C. A., Hynes R. I., King A. R., Schenker K., 2002, MNRAS, 332, 928 Heemskerk M. H. M., van Paradijs J., 1989, A&A, 223, 154 Heinz S., Grimm H. J., 2005, ApJ, 633, 384 Hjellming R. M., Han X., 1995, in Radio Properties of X-ray Binaries, eds, Lewin W. H. G., van Paradijs J., van den Heuvel E. P. J., Cambridge Univ. Press, Cambridge, p. 308 Homan J., Belloni T., 2005, Ap&SS, 300, 107 Homan J. et al., 2001, ApJS, 132, 377 Homan J., Buxton M., Markoff S., Bailyn C. D., Nespoli E., Belloni T., 2005, ApJ, 624, 295 Homer L., Charles P. A., Chakrabarty D., van Zyl L., 2001, MNRAS, 325, 1471 Hutchings J. B., Winter K., Cowley A. P., Schmidtke P. C., Crampton D., 2003, AJ, 126, 2368 Hynes R. I., Haswell C. A., 1999, MNRAS, 303, 101 Hynes R. I., et al., 1998, MNRAS, 300, 64 Hynes R. I., Haswell C. A., Chaty S., Shrader C. R., Cui W., 2002a, MNRAS, 331, 169 Hynes R. I., et al., 2002b, A&A, 392, 991 Hynes R. I., Steeghs D., Casares J., Charles P. A., O’Brien K., 2003a, ApJ, 583, L95 Hynes R. I., et al., 2004, ApJ, 611, L125 Hynes R. 
I., 2005, ApJ, 623, 1026 Hynes R. I., Gelino D. M., Pearson K. J., Robinson E. L., 2005, ATel, 393 Hynes R. I., et al., 2006, ApJ, submitted in’t Zand J. J. M., et al., 2001, A&A, 372, 916 Israel G. L., et al., 2004, ATel, 243 Jain R., Bailyn C., Garcia M., Rines K., Levine A., Espinoza J., Gonzalez D., 1999, ATel, 41 Jain R., et al., 2000, ATel, 59 Jain R. K., Bailyn C. D., Orosz J. A., McClintock J. E., Sobczak G. J., Remillard R. A., 2001a, ApJ, 546, 1086 Jain R. K., Bailyn C. D., Orosz J. A., McClintock J. E., Remillard R. A., 2001b, ApJ, 554, L181 Jonker P. G., Nelemans G., 2004, MNRAS, 354, 355 Kalemci E., et al., 2005, ApJ, 622, 508 Kaluzienski L. J., Holt S. S., Swank J. H., 1980, ApJ, 241, 779 Kaper L., Lamers H. J. G. L. M., Ruymaekers E., van den Heuvel E. P. J., Zuidervijk E. J., 1995, A&A, 300, 446 Kato T., Uemura M., Stubbings R., Watanabe T., Monard B., 1999, IBFS, 4777 King N. L., Harrison T. E., McNamara B. J., 1996, AJ, 111, 1675 Kitamoto S., Tsunemi H., Pedersen H., Ilovaisky S. A., van der Klis M., 1990, ApJ, 361, 590 Kiziloglu U., Balman S., Baykal A., Gogus E., Alpar A., Inam C., 2005, ATel, 386 Kleinmann S. G., Brecher K., Ingham W. H., 1976, ApJ, 207, 532 Koch-Miramond L., Ábrahám P., Fuchs Y., Bonnet-Bidaud J.-M., Claret A., 2002, A&A, 396, 877 Kong A. K. H., McClintock J. E., Garcia M. R., Murray S. S., Barret D., 2002, ApJ, 570, 277 Körding E., Fender R. P., Migliari S., 2006, MNRAS, submitted Kovács G., 2000, A&A, 363, L1 Kuulkers E., 1998, NewAR, 42, 1 Kuulkers E., et al., 2004, ATel, 240 Lequeux J., Maurice E., Prevot-Burnichon M.-L., Prevot L., Rocca-Volmerange B., 1982, A&A, 113, L15 Liu Q. Z., van Paradijs J., van den Heuvel E. P. J., 2001, A&A, 368, 1021 Maccarone T. J., Coppi P. S., 2003, MNRAS, 338, 189 Maccarone T. J., Gallo E., Fender R. 
P., 2003, MNRAS, 345, L19 Machin G., et al., 1990, MNRAS, 247, 205 Maitra D., Bailyn C., 2005, ATel, 450 Maitra D., Buxton M., Tourtellotte S., Bailyn C., 2003, ATel, 125 Maitra D., Bailyn C., Winnick R., Espinoza J., Gonzalez D., 2004a, ATel, 279 Maitra D., Bailyn C., Winnick R., Espinoza J., Gonzalez D., 2004b, ATel, 288 Malzac J., Merloni A., Fabian A. C., 2004, MNRAS, 351, 253 Markoff S., Nowak M., Corbel S., Fender R., Falcke H., 2003, A&A, 397, 645 Markoff S., Falcke H., Fender R., 2001, A&A, 372, L25 Markwardt C. B., Smith E., Swank J. H., 2005, ATel, 415 Massey P., Johnson K. E., Degioia-Eastwood K., 1995, ApJ, 454, 151 McClintock J. E., Canizares C. R., Cominsky L., Li F. K., Lewin W. H. G., van Paradijs J., Grindlay J. E., 1979, Nature, 279, 47 McClintock J. E., Horne K., Remillard R. A., 1995, ApJ, 442, 358 McClintock J. E., Narayan R., Garcia M. R., Orosz J. A., Remillard R. A., Murray S. S., 2003, ApJ, 593, 435 McClintock J. E., Remillard R. A., 2006, in Compact Stellar X-Ray Sources, eds. Lewin W. H. G., van der Klis M., Cambridge University Press, p. 157 McConnell M. L. et al., 2000, ApJ, 543, 928 McNamara B. J., et al., 2003, AJ, 125, 1437 Merloni A., Di Matteo T., Fabian A. C., 2000, MNRAS, 318, L15 Migliari S., Fender R. P., 2006, MNRAS, 366, 79 Migliari S., Tomsick J. A., Maccarone T. J., Gallo E., Fender R. P., Nelemans G., Russell, D. M., 2006, ApJ, 643, L41 Mikołajewska J., Rutkowski A., Gonçalves D. R., Szostek A., 2005, MNRAS, 362, L13 Miniutti G., Fabian A. C., 2004, MNRAS, 349, 1435 Mioduszewski A. J., Rupen M. P., 2004, ApJ, 2004, 615, 432 Mukherjee U., Paul B., 2004, A&A, 427, 567 Mushotzky R., Ferland G. J., 1984, ApJ, 278, 558 Nagata T., et al., 2003, PASJ, 55, L73 Naik S., Paul B., 2003, A&A, 401, 265 Narayan R., 1996, ApJ, 462, 136 Narayan R., Yi I, 1995, ApJ, 452, 710 Narayan R., Mahadevan R., Quataert, E., 1998, in Abramowicz M. A., Björnsson G., Pringle J. E., eds, Theory of Black Hole Accretion Disks, Cambridge Univ. 
Press, Cambridge, p. 148 Negueruela I., Israel G. L., Marco A., Norton A. J., Speziali R., 2003, A&A, 397, 739 Nelemans G., Jonker P. G., Marsh T. R., van der Klis M., 2004, MNRAS, 348, L7 O’Brien K., Horne K., Hynes R. I., Chen W., Haswell C. A., Still M. D., 2002, MNRAS, 334, 426 Oke J. B., 1977, ApJ, 217, 181 Oke J. B., Greenstein J. L., 1977, ApJ, 211, 872 Orlandini M., et al., 2000, A&A, 356, 163 Orlandini M., et al., 2004, NuPhS, 132, 476 Orosz J. A., Bailyn C. D., McClintock J. E., Remillard R. A., 1996, ApJ, 468, 380 Orosz J. A., Jain R. K., Bailyn C. D., McClintock J. E., Remillard R. A., 1998, ApJ, 499, 375 Orosz J. A., et al., 2002, ApJ, 568, 845 Poutanen J. 1998, in Theory of Black Hole Accretion Discs, Cambridge Contemporary Astrophysics, eds. Abramowicz M. A., Björnsson G., Pringle J. E., Cambridge Univ. Press, Cambridge, p. 100 Predehl P., Schmitt J. H. M. M., 1995, A&A, 293, 889 Ritter H., Kolb U., 2003, A&A, 404, 301 Robertson B. S. C., Warren P. R., Bywater R. A., 1976, IBVS, 1173, 1 Rutledge R. E., Bildsten L., Brown E. F., Pavlov G. G., Zavlin V. E., 2001, ApJ, 551, 921 Rutledge R. E., Bildsten L., Brown E. F., Pavlov G. G., Zavlin V. E., 2002, ApJ, 578, 405 Schulz N. S., 1999, ApJ, 511, 304 Schulz N. S., Cui W., Canizares C. R., Marshall H. L., Lee J. C., Miller J. M., Lewin W. H. G., 2002, ApJ, 565, 1141 Sekiguchi K., van Wyk F., 1993, IAUC, 5769 Shahbaz T., Naylor T., Charles P. A., 1993, MNRAS, 265, 655 Shahbaz T., Fender R., Charles P. A., 2001, A&A, 376, L17 Shahbaz T., et al., 2003, MNRAS, 346, 1116 Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337 Shaw S. E., et al., 2005, A&A, 432, L13 Shrader C. R., Wagner R. M., Charles P. A., Harlaftis E. T., Naylor T., 1997, ApJ, 487, 858 Soria R., Wu K., Johnston H. M., 1999, MNRAS, 310, 71 Soria R., Wu K., Page M. J., Sakelliou I., 2001, A&A, 365, L273 Stanek K. Z., Garnavich P. M., 1998, ApJ, 503, L131 Stark M. 
J., Saia M., 2003, ApJ, 587, L101 Steeghs D., Casares J., 2002, ApJ, 568, 273 Steeghs D., Blake C., Bloom J. S., Torres M. A. P., Jonker P. G., Starr D., 2004, ATel, 363 Steele I. A., Negueruela I., Coe M. J., Roche P., 1998, MNRAS, 297, L5 Sutaria F. K., et al., 2002, A&A, 391, 993 Szkody P. et al., 1989, IAUC, 4794 Tanaka Y., 1993, IAUC, 5877 Tomsick J. A., et al., 2003, ApJ, 597, L133 Torres M. A. P., Casares J., Mart$\acute{i}$nez-Pais I. G., Charles P. A., 2002, MNRAS, 334, 233 Torres M. A. P., Callanan P. J., Garcia M. R., Zhao P., Laycock S., Kong A. K. H., 2004, ApJ, 612, 1026 Torres M. A. P., Steeghs D., Jonker P., Martini P., 2005, ATel, 417 Torres M. A. P., et al., 2006, in preparation Treves A., et al., 1980, ApJ, 242, 1114 Treves A., Maraschi L., Tanzi E. G., Falomo R., Bouchet P., 1987, IAUC, 4309, 3 Treves A., Belloni T., Bouchet P., Chiappetti L., Falomo R., Maraschi L., Tanzi E. G., 1988, ApJ, 335, 142 Uemura M., et al., 2000, PASJ, 52 L, 15 van den Heuvel E. P. J., Heise J., 1972, Nature, 239, 67 van Kerkwijk M. H., Geballe T. R., King D. L., van der Klis M., van Paradijs J., 1996, A&A, 314, 521 van Paradijs J., McClintock J. E., 1994, A&A, 290, 133 van Paradijs J., McClintock J. E., 1995, in Optical and Ultraviolet Observations of X-ray Binaries, eds, Lewin W. H. G., van Paradijs J., van den Heuvel E. P. J., Cambridge Univ. Press, Cambridge, p. 58 van Straaten S., Ford E. C., van der Klis M., M$\acute{e}$ndez M., Kaaret P., 2000, ApJ, 540, 1049 Vrtilek S. D., Raymond J. C., Garcia M. R., Verbunt F., Hasinger G., Kurster M., 1990, A&A, 235, 162 Wachter S., Hoard D. W., Bailyn C. D., Corbel S., Kaaret P., 2002, ApJ, 568, 901 Wagner R. M., et al., 1991, ApJ, 378, 293 Wang Z., et al., 2001, ApJ, 563, L61 Wang Q. D., et al., 2005, ApJ, 635, 386 Welsh W. F., Robinson E. L., Young P., 2000, AJ, 120, 943 Williams B. F., Garcia M. R., McClintock J. E., Kong A. K. H., Primini F. A., Murray S. S., 2005, ApJ, 628, 382 Wilms J., Nowak M. 
A., Pottschmidt K., Heindl W. A., Dove J. B., Begelman M. C., 2001, MNRAS, 320, 327 Wu C.-C., Holm A. V., Eaton J. A., Milgrom M., Hammerschlag-Hensberge G., 1982, PASP, 94, 149 Yu W., Klein-Wolt M., Fender R., van der Klis M., 2003, ApJ, 589, L33 Yuan F., Quataert E., Narayan R., 2004, ApJ, 606, 894 Zdziarski A. A., Poutanen J., Mikołajewska J., Gierli$\acute{n}$ski M., Ebisawa K., Johnson W. N., 1998, MNRAS, 301, 435 Zdziarski A. A., Gierli$\acute{n}$ski M., Mikołajewska J., Wardzi$\acute{n}$ski G., Smith D. M., Alan Harmon B., Kitamoto S., 2004, MNRAS, 351, 791 Zhang S. N., Cui W., Harmon B. A., Paciesas W. S., Remillard R. E., van Paradijs J., 1997, ApJ, 477, L95 Zurita C., et al., 2002, MNRAS, 334, 999 Zurita C., Casares J., Shahbaz T., 2003, ApJ, 582, 369 Życki P. T., Done C., Smith D. A., 1999, MNRAS, 309, 561 [^1]: Email: davidr@phys.soton.ac.uk [^2]: This energy range was adopted to be consistent with that used for the radio–X-ray correlations of @gallet03 (2003; 2–11 keV) and @miglfe06 (2005; 2–10 keV).
--- author: - | Kevin Lai      Bernardo A. Huberman      Leslie Fine\ [{klai, bernardo.huberman, leslie.fine}@hp.com]{}\ \ HP Labs bibliography: - 'bibliographies/resource\_allocation.bib' - 'bibliographies/economics.bib' - 'bibliographies/networking.bib' - 'bibliographies/overlay.bib' - 'bibliographies/network\_performance.bib' - 'bibliographies/peer-to-peer.bib' - 'bibliographies/security.bib' - 'bibliographies/reputation.bib' - 'bibliographies/network\_architecture.bib' title: | **Tycoon: a Distributed Market-based\ Resource Allocation System** ---
---
abstract: |
    Perturbation analysis of Markov chains provides bounds on the effect that a change in a Markov transition matrix has on the corresponding stationary distribution. This paper compares and analyzes bounds found in the literature for finite and denumerable Markov chains and introduces new bounds based on series expansions. We discuss a series of examples to illustrate the applicability and numerical efficiency of the various bounds. Specifically, we address the question of how the bounds developed for finite Markov chains behave as the size of the system grows. In addition, we provide for the first time an analysis of the relative error of these bounds. For the case of a scaled perturbation we show that perturbation bounds can be used to analyze the stability of a stable Markov chain perturbed by an unstable chain.\
    [**Keywords:**]{} Markov chains, perturbation bounds, condition number, strong stability, series expansion, queueing\
    AMS Primary: 60J10; Secondary: 15A12; 15A18
author:
- |
    Karim Abbas\
    LAMOS, University of Bejaia, Algeria\
    Email: karabbas2003@yahoo.fr\
    and\
    Joost Berkhout\
    Vrije Universiteit Amsterdam\
    Department of Econometrics and Operations Research\
    The Netherlands\
    Email: j2.berkhout@vu.nl\
    and\
    Bernd Heidergott\
    Vrije Universiteit Amsterdam\
    Department of Econometrics and Operations Research & Tinbergen Institute\
    The Netherlands\
    Email: b.f.heidergott@vu.nl
date:
title: 'A Critical Account of Perturbation Analysis of Markov Chains[^1]'
---

Introduction
============

Perturbation analysis of Markov chains (PAMC) studies the effect a perturbation of a Markov transition matrix has on the stationary distribution of the chain. Consider a Markov chain with discrete state space $ S $, transition probability matrix $ P $, and unique stationary distribution $ \pi_P$. Furthermore, let $ R $ be an alternative Markov transition matrix on $ S $ with unique stationary distribution $ \pi_{R} $.
PAMC addresses the following question: what is the effect of switching from $ P $ to $ R $ on the stationary distribution of the chain? More formally, PAMC theory studies bounds of the type $$\label{opl} || \pi_{R}^\top - \pi_P^\top|| \leq { \Delta} ( R , P ) ,$$ where $ || \cdot || $ denotes a suitable vector norm (details will be provided later in the text), $ {\Delta } ( R , P ) $ is a scalar function of $ P $ and $ R $, and $ \top $ denotes the transpose[^2]. The study of the effect of perturbing a Markov transition matrix on its stationary distribution dates back to Schweitzer’s pioneering paper [@PA7]. To the best of our knowledge, the first paper putting this perturbation question into the framework of (\[opl\]) is [@MeyerErst]. Specifically, [@MeyerErst] proposed bounds of the form $$\label{Delta} {\Delta} ( R , P ) = \kappa || R -P || ,$$ for some appropriate matrix norm, where $ \kappa $ is the so-called [*condition number*]{}. While the condition number is typically applied to bounding the effect in terms of $ R - P $, Theorem 3.2 in [@mitrophanov] provides a condition number for $ \| R^m - P^m \|$. In the remainder of this article we will refer to any instance of the bound in (\[opl\]) with $ { \Delta} ( R , P ) $ as in (\[Delta\]) as a [*condition number bound*]{} (CNB). PAMC is a field of active research [@anisimov; @kirkland; @H2; @neumannxu; @Senata; @mitrophanov; @AE1; @AE2; @Neu1; @Neu2] and various CNBs have been proposed in the literature [@chomayer; @AE2]. As we will discuss later in more detail, an alternative type of bound, called the [*strong stability bound*]{} (SSB), can be derived via the strong stability method. The SSB bounds the weighted supremum norm of $ \pi_{R}^\top - \pi_P^\top $ by an expression that is non-linear as a function of $ || R - P || $. For early references see [@karta96; @kartashov86]; recent references are [@liu12; @lekadir; @Rab]. Perturbation bounds are of interest in a wide range of applications.
Examples arise in mathematical physics [@Szehr], climate modeling [@Chek], Bayesian statistics [@Andrieu; @Alquier], and bioinformatics [@AY2; @Pal]. Perturbation bounds have also been applied in robustness analysis of social networks and of Google’s PageRank algorithm [@extra]. A fruitful model for PAMC is that of a scaled perturbation. More specifically, let $ R , P $ be two Markov kernels defined on the same state space. Then the convex combination of both kernels $$\label{eq:convex} P ( \theta ) = ( 1 - \theta ) P + \theta R , \quad \theta \in [ 0 ,1 ] ,$$ is a well-defined Markov kernel. Note that $ P ( 0 ) = P $ and $ P ( 1 ) = R $. In perturbation analysis of $P(\theta)$ we are interested in the effect of changing $ \theta $ from $ 0 $ to some value $0 < \theta \leq 1$. By linearity of norms, $$\label{eq:deltanorm} || P ( \theta) - P || = \theta || R - P || ,$$ for $\theta \in [0,1]$. This allows one to scale the size of the perturbation via the control parameter $ \theta$. Letting $$\label{relerr} \eta ( R , P ) = \frac{ \Delta ( R , P ) - || \pi_R^\top - \pi_P^\top ||}{ || \pi_R^\top - \pi_P^\top ||}$$ denote the [*relative error of the perturbation bound $ {\Delta} ( R , P ) $*]{}, scaled perturbations, i.e., $ R = P ( \theta ) $, allow for analyzing the behavior of the relative error $ \eta ( \theta ) = \eta ( P ( \theta ) , P ) $ as $ \theta $ tends to zero. The analysis of scaled perturbations is of particular interest if $ P ( \theta ) $, for $ \theta \in [ 0 , 1] $, has a clear interpretation. We will illustrate this by a queueing model with denumerable state space and breakdowns, where $ \theta $ models the probability of a breakdown. An interesting observation is that in the parametrized model we establish conditions for stability of a mixture of a stable (no breakdowns) and an unstable (only breakdowns) Markov chain modeling a pure birth process.
More specifically, we apply PAMC techniques to provide a lower bound for the domain of stability of $ P ( \theta ) $. The contributions of the paper are the following:

- We provide a unified approach to PAMC for finite and denumerable Markov chains. Our analysis covers CNBs and SSB.

- We introduce new bounds that have the desirable property that the relative error of the bound tends to zero as the size of the perturbation tends to zero. These new bounds are derived by a series expansion approach.

- We provide sufficient conditions under which convergence of the series expansion already establishes the existence of a stationary distribution. By introducing the new concept of the bias term, we are able to treat the case of Markov multi-chains (i.e., chains with several ergodic classes) and uni-chains in a unified framework.

- We show that techniques derived in PAMC can be applied to stability analysis. A worked-out example from queueing theory will illustrate the fruitfulness of PAMC methods for this type of problem.

The paper is organized as follows. In Section \[sec:3\] the perturbation bounds are presented and the main theoretical results are established. For a simple example, Section \[sec:leadex\] presents explicit solutions for the various bounds. Section \[sec:XXX\] is devoted to perturbation bounds for the M/G/1 queue with breakdowns. In contrast to the small numerical examples reported in the literature, the queueing system will be analyzed for the case of a large but finite state space and for the infinite-dimensional case.

Perturbation Analysis {#sec:3}
=====================

Throughout this paper we will consider Markov chains defined on an at most denumerable state space $ S = \{ 0 , 1, \ldots \} \subset \mathbb{N}$. Unless stated otherwise, we assume that the Markov chains are aperiodic and have one closed communicating class of states, with possibly some transient states.
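As a quick numerical illustration of the scaled-perturbation setup from the introduction, the following sketch checks that $P(\theta)$ is again a Markov kernel and that the perturbation size is linear in $\theta$ (the two-state kernels $P$ and $R$ below are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

# Two Markov kernels on the same state space (rows sum to one);
# arbitrary illustrative choices.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def P_theta(theta):
    """Convex combination P(theta) = (1 - theta) * P + theta * R,
    itself a valid Markov kernel for theta in [0, 1]."""
    return (1.0 - theta) * P + theta * R

def inf_norm(M):
    """Induced infinity-norm of a matrix: maximum absolute row sum."""
    return np.abs(M).sum(axis=1).max()

theta = 0.3
# P(theta) has unit row sums ...
assert np.allclose(P_theta(theta).sum(axis=1), 1.0)
# ... and the perturbation size is linear in theta:
# ||P(theta) - P|| = theta * ||R - P||.
assert np.isclose(inf_norm(P_theta(theta) - P),
                  theta * inf_norm(R - P))
```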
Preliminaries and Basic Definitions
-----------------------------------

If $ P = (P_{ij})_{i,j\in S}$ is a Markov transition matrix of some Markov chain $ \{X_k \} $, then $ P_{ i j } = \mathbb{E} [ 1_{ X_{k+1} } ( j ) | X_k = i ] $ for $ i , j \in S $ and $ k\in\mathbb{N}$, where $ 1_j (i) $ is one if $ j=i $ and zero otherwise, $i, j \in S$. Sometimes $P(i,j):=P_{ij}$ is used instead for notational clarity. Further, let $f \in \mathbb{R}^S$ be a reward vector, where $f_i$ is the reward for being in state $i\in S$. With these definitions, one obtains, in vector-matrix notation[^3], $$\label{eq:tr} \mu^\top P f = \sum_{ i , j \in S } \mu_i \! P _{ i j } f_j = \sum_{i \in S } \mathbb{E} [ f_{ X_1} | X_0 = i ] \mu_i$$ as the expected reward after one transition, provided the Markov chain is started in state $ i $ with probability $ \mu_i$, for $i \in S$. For more details we refer to [@heder4-11; @sampledchain]. In the following we denote the ergodic projector of $ P $ by $\Pi_P$, i.e., the matrix with rows identical to $ \pi_P^\top $, and we let $ D_P $ denote the deviation matrix of $ P $, which is given by $$\label{eq:ddd} D_P= \sum_{k=0}^\infty ( P^k - \Pi_P) = ( I - P + \Pi_P ) ^{-1} - \Pi_P ,$$ provided that it exists. The matrix $ ( I - P + \Pi_P ) ^{-1} $ is called the [*fundamental matrix*]{} (potential) of $ P $. Letting $ A^{\#} $ denote the group inverse of the matrix $ A = I - P $, see [@a; @bbb], it holds that $ D_P = A^{\#}$ if the deviation matrix exists. Conditions for the existence of the deviation matrix and its related properties have been extensively studied in the literature, see [@heder4-11; @Syski]. For finite Markov chains, the deviation matrix is an instance of the generalized inverse of $ I - P $; see [@bbb] for an early reference. As Hunter demonstrates in [@2] for finite Markov chains, the generalized inverse plays a major role in perturbation analysis. For $ x \in \mathbb{R}^S $, we denote by $ || x ||_\infty $ the maximum absolute value (a.k.a.
infinity norm or $\infty$-norm) and by $ || x ||_1 $ the sum of absolute values (a.k.a. $L_1$ norm or 1-norm). Furthermore, for $ v $ such that $ v (i) \geq 1 $ for all $ i \in S $ and $ v ( 0 ) =1 $, we denote by $$\label{eq:normf} \| x \|_{\upsilon} = \sup_{i \in S} \frac{|x_i |}{\upsilon(i)}$$ the weighted supremum norm of $ x\in \mathbb{R}^S$, also called the [*$v$-norm*]{}. In the following we let $$v_\alpha ( i ) = \alpha^i , \quad i \in S ,$$ with $ \alpha \in [ 1 , \infty ) $ some unspecified constant. We will omit the subscript $ \alpha $ whenever the results stated hold for general $ \alpha \geq 1$. Norms are extended to matrices via the standard induced matrix norms[^4]. Note that, as probability measures are row vectors, the $ v$-norm of a measure $ \mu^\top $ on $ S $ is given by $$\label{eq:normmu} \| \mu^\top \|_{\upsilon} = \sum_{k \in S} v ( k ) | \mu_{k}| .$$ Specifically, applying the $ v $-norm to (\[eq:tr\]) one readily obtains $$| \mu^\top P f | \leq || \mu^\top ||_v \, || P ||_v \, || f ||_v ,$$ which shows that (\[eq:normmu\]) and (\[eq:normf\]) arise naturally in PAMC. Note that by (\[eq:normmu\]), for a (possibly signed) measure $\mu $ on $ \mathbb{R}^{S}$, the $v$-norm of $ \mu^\top $ with $ v \equiv 1$ coincides with the total variation norm. Note further that $ v \geq 1 $ implies for $ x \in \mathbb{R}^S $ that $ \| x\|_v \leq \| x \|_\infty $. In addition, for a measure $ \mu$ we have that $ \| \mu^\top \|_{1 } \leq \| \mu^\top \|_{\infty } \leq \| \mu^\top \|_v $. In the following we will omit the norm type indicator and use the generic $ || \cdot || $ sign whenever the result holds for any of the above norms. If a result is limited to a particular norm, this will be clearly indicated. To illustrate the efficiency of the bounds we will use throughout the paper three different types of finite Markov chains, introduced in the following.
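Before turning to these examples, the two $v$-norms defined above — the weighted supremum norm for column vectors and the weighted sum for measures (row vectors) — can be sketched as follows (a small numerical illustration with arbitrary vectors, not data from the paper):

```python
import numpy as np

alpha = 1.5
n = 4
v = alpha ** np.arange(n)      # v_alpha(i) = alpha**i, so v(0) = 1

def v_norm_vector(x, v):
    """Weighted supremum norm of a column vector:
    ||x||_v = sup_i |x_i| / v(i)."""
    return np.max(np.abs(x) / v)

def v_norm_measure(mu, v):
    """v-norm of a (possibly signed) measure, viewed as a row vector:
    ||mu^T||_v = sum_k v(k) * |mu_k|."""
    return np.sum(v * np.abs(mu))

x = np.array([1.0, -3.0, 2.0, 0.5])   # arbitrary test vector
mu = np.array([0.4, 0.3, 0.2, 0.1])   # a probability measure

# For v identically 1, the measure norm is the total variation
# (here: L1) norm, and the vector norm is the infinity norm.
ones = np.ones(n)
assert np.isclose(v_norm_measure(mu, ones), np.abs(mu).sum())
assert np.isclose(v_norm_vector(x, ones), np.max(np.abs(x)))
# v >= 1 implies ||x||_v <= ||x||_inf for column vectors.
assert v_norm_vector(x, v) <= np.max(np.abs(x))
```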
An example of a Markov chain on a denumerable state space will be discussed in detail in Section \[sec:A5\]. \[ex:mc\] [**Two-State Chain:**]{} Let $S = \{ 0, 1\}$ and $$P^s = \begin{pmatrix} 1 - p & p \\ q & 1 - q \\ \end{pmatrix},$$ with $p, q \in (0, 1)$. It is easily checked that $$\pi_{P^s} = \frac{1}{ p + q} (q , p)^\top$$ is the stationary distribution of $ P^s $. The deviation matrix is given by: $$D_{P^s} = \frac{ 1 }{ ( p + q )^{2} } \begin{pmatrix} p & - p \\ - q & q \\ \end{pmatrix}.$$ [**Ring Network:**]{} The next example that we will discuss is that of a ring, introduced in the following. Let $S = \{0, \ldots , n-1 \}$ and for any $n \geq 2$, $$P^\circ (n) = \left( \begin{array}{cccccc} 1- 2 b & b & 0 & 0 & \ldots & b \\ b & 1 - 2 \, b & b & 0 & \ldots & 0 \\ 0 & b & 1 - 2 \, b & b & \ldots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & b & 1 - 2 \, b & b \\ b & 0 & \ldots & 0 & b & 1 - 2 b \\ \end{array} \right),$$ with $b \in (0, 1/2]$. We get the stationary distribution: $$\pi^\circ_i (n) = \frac{1}{n}, \;\; \mbox{ for } i \in S.$$ For the deviation matrix, we obtain: $$D^\circ(n) : = D_{ P^\circ (n) } = \left( \begin{array}{ccccc} d_{0} & d_{1} & d_{2} & \ldots & d_{n-1} \\ d_{n-1} & d_{0} & d_{1} & \ldots & d_{n - 2} \\ d_{n - 2} & d_{n-1} & d_{0} & \ldots & d_{n - 3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ d_{1} & d_{2} & d_{3} & \ldots & d_{0} \\ \end{array} \right),$$ where $$\label{di:eq} d_{i} = \frac{ (n - 1) (n + 1) }{12 \, b \, n} - \frac{ (n - i) i }{2 \, b \, n} \, \, \, \hbox{for} \, \, \, i \in S.$$ Furthermore, $\sum_{i = 0}^{n-1} d_{i} = 0$. 
Equivalently, $D^\circ (n)$ can be expressed as $$D^\circ (n) = \left( \widetilde D_{ij}(n) \right)_{i,j \in S},$$ where $$\label{dij:eq} \widetilde D_{ij} (n) = d_{(j - i) \, (\mbox{\emph{mod}} \; n)} = \frac{ (n - 1) (n + 1) }{12 \, b \, n} -\frac{ \{ n - (j - i) (\mbox{\emph{mod}} \; n)\} \{ (j - i) (\mbox{\emph{mod}} \; n)\}}{2 \, b \, n}.$$ [**Star Network:**]{} The third example considered is the Star Network with state space $S=\{0,\dots,n-1\}$. For $n\geq 2$ let $$P^\star (n)= \left( \begin{array}{cccccc} 1- \beta & \frac{ \beta }{ n - 1 } & \frac{ \beta }{ n - 1 } & \frac{ \beta }{ n - 1 } & \ldots & \frac{ \beta }{ n - 1 } \\ 1 - \gamma & \gamma & 0 & 0 & \ldots & 0 \\ 1- \gamma & 0 & \gamma & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 1- \gamma & 0 & \ldots & 0 & \gamma & 0 \\ 1 -\gamma & 0 & \ldots & 0 & 0 & \gamma \\ \end{array} \right) ,$$ for $\beta \in (0, 1]$ and $\gamma \in [0, 1)$. Following [@golub2010], the stationary distribution is given by $$\pi_i^\star(n) = \left\{ \begin{array}{ll} \frac{ 1 - \gamma }{ 1 - \gamma + \beta } & \hbox{ for } \, i = 0, \\ \\ \frac{ \beta }{ ( n - 1 ) ( 1 - \gamma + \beta )} & \hbox{ for } i > 0. \end{array} \right.$$ For the deviation matrix, we obtain: $$D^\star(n) = \left( \begin{array}{ccc} \frac{ \beta }{ ( 1 - \gamma + \beta )^{ 2 } } & \vline & - \frac{ \beta }{ ( 1 - \gamma + \beta )^{ 2 } ( n - 1 ) } \bar{1}^\top \\ ------ & \vline & ------------- \\ - \frac{ ( 1 - \gamma ) }{ ( 1 - \gamma + \beta )^{ 2 } } \bar{1} & \vline & \frac{ 1 }{ ( 1 - \gamma ) } I - \frac{ \beta \{ ( 1 - \gamma ) + ( 1 - \gamma + \beta ) \} }{ ( 1 - \gamma ) ( 1 - \gamma + \beta )^{ 2 } ( n - 1 ) } \bar{1}\bar{1}^\top\\ \end{array} \right),$$ where $\bar{1} = [1, \ldots, 1]^\top$ of size $n-1$ and $I$ denotes the $(n-1)\times(n-1)$ identity matrix. In our analysis we will frequently work with the [*taboo kernel*]{} of a Markov transition matrix $ P $.
In [@kartashov86] a very elegant and flexible way of obtaining a taboo kernel is described. For this, let $ h $ be a non-negative vector and $ \sigma $ a probability measure on $ S $ such that $\pi^\top_P h > 0$ and $ P - h \sigma^\top $ is a matrix with non-negative entries, where $ h \sigma^\top $ denotes the outer product of the vectors $ h $ and $ \sigma $, i.e., $ h \sigma^\top $ is a square matrix. Then, the taboo kernel of $ P $ with respect to $ h $ and $ \sigma $ is defined as $$\label{eq:taboo} T := P - h \sigma^\top .$$ For example, let $$h = ( P (0 ,0 ), P (1 , 0), P ( 2 , 0) , \ldots)^\top$$ denote the first column of $ P$, and let $ \sigma = ( 1 , 0 , 0 , \ldots )^\top $; then $$T ( i , j ) = ( P - h \sigma^\top ) ( i , j ) = \left \{ \begin{array}{cc} P ( i , j ) & j > 0 \\ 0 & \text{ otherwise } \end{array} \right. .$$ In words, $ T $ is a degenerate transition kernel that avoids entering state zero, obtained by setting the first column of $ P $ to zero. Alternatively, letting $ h = ( 1 , 0 , 0 , \ldots )^\top $ and $$\sigma = ( P (0 ,0 ), P (0 , 1), P ( 0 , 2) , \ldots)^\top ,$$ then $ T = P - h \sigma^\top $ is a degenerate transition kernel that never leaves state zero, obtained by setting the first row of $ P$ to zero. The taboo kernel is also known as the [*residual matrix*]{} in the literature, see [@MA]. In the following we write $ _i \! P $ for the degenerate transition kernel that avoids entering state $i$, obtained by setting the $i$th column of $ P $ to zero, i.e., letting $ \sigma = ( 0 , \ldots , 0 , 1 , 0 , \ldots )^\top $, where the entry 1 is at the $ i$th position, and letting $ h $ be the $ i$th column of $ P$. The taboo kernel $ _i \! P $ provides a convenient sufficient condition for positive recurrence of $ P $ on a denumerable state space. The precise statement is provided in the following proposition.

\[cor:pos\] Let $ P $ be irreducible. If for at least one $ i \in S $ it holds that $ || _i \! P || < 1 $, then $ P $ is positive recurrent.
[**Proof:**]{} First note that the $(j,k)$-th element of $ \sum_{n=0}^\infty ( _i P )^n $ gives the expected number of visits to state $k$ before jumping to state $i$ when starting in state $j$. The mean recurrence time at state $ i $ is thus given by summing the $i$-th row of $ \sum_{n=0}^\infty ( _i P )^n$, which is finite due to the norm condition. Therefore, state $ i $ is positive recurrent. From irreducibility of $ P $ it follows that all states are positive recurrent. $ \Box $ We call $ T $ *proper* if $ || T || < 1$. Provided that $ T $ defined in (\[eq:taboo\]) is proper with respect to the $v$-norm, the $ v$-norm of $ \pi_P^\top $ can be bounded by $$\label{eq:taboo2} \| \pi_P^\top \|_{v} \leq \frac{ \pi_P^\top h \| \sigma^\top \|_v }{ 1 - || T ||_v },$$ see [@kartashov86]. Moreover, if $ T $ is proper, the deviation matrix can alternatively be written as $$\label{eq:D} D_P = ( I - \Pi_P ) \sum_{n=0}^\infty T^n ( I - \Pi_P ) = ( I - \Pi_P ) ( I - T)^{-1} ( I - \Pi_P ) ,$$ see [@H1; @karta96], where we used that $$\label{eq:hordijk} ( I - T)^{-1} = \sum_{n=0}^\infty T^n ,$$ provided that $ || T || < 1 $. The idea behind considering $ T $ rather than $ P $ is that $ T $ might be constructed in such a way that the norm of $ T $ is strictly less than one. The following example illustrates the effect on $ || T ||$ of removing either the first column or the first row. Note that removing the second column or second row may lead to other values of $|| T ||$. \[ex:T\] For the two-state chain, i.e., $P=P^s$, we find after removing the first column $$|| T ||_{v} = \max\{ \alpha p, 1 - q \}.$$ Removing the first row leads to $$|| T ||_{v} = \frac{(1-\alpha)q}{\alpha} + 1 .$$ For the Ring and the Star networks we present the resulting norms for $||T||_{v}$ (including the $\infty$-norm by letting $ \alpha $ tend to 1) and $||T||_{1}$, respectively, in Table \[tab:1\] and Table \[tab:2\].
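For a finite irreducible chain, the representation (\[eq:D\]) is easy to verify against the defining formula $D_P = (I - P + \Pi_P)^{-1} - \Pi_P$. A minimal check on an illustrative 3-state kernel, with $T = {}_0P$ the taboo kernel that removes the first column:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.0],
              [0.1, 0.2, 0.7]])
n = P.shape[0]

# stationary distribution pi and the matrix Pi with all rows equal to pi
A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
Pi = np.outer(np.ones(n), pi)

D_def = np.linalg.inv(np.eye(n) - P + Pi) - Pi     # defining formula for D_P

T = P.copy()
T[:, 0] = 0.0                                      # taboo kernel _0P (first column removed)
D_taboo = (np.eye(n) - Pi) @ np.linalg.inv(np.eye(n) - T) @ (np.eye(n) - Pi)
```

The spectral radius of the taboo kernel of a finite irreducible chain is strictly below one, so the Neumann series (\[eq:hordijk\]) converges and the two expressions coincide.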
  Removing:             Ring (i.e., $P=P^\circ (n)$)                                                     Star (i.e., $P=P^\star (n)$)
  --------------------- ------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------
  $1$st row of $P$      $ \frac{ b }{ \alpha } + 1 - 2 \, b + \alpha \, b $                              $ \frac{ b }{ \alpha } + 1 - 2 \, b + \alpha \, b $
  $1$st column of $P$   $ \max \{\alpha \, b + \alpha^{n - 1} \, b, \frac{ b }{ \alpha } + 1 - 2 \, b + b \, \alpha \}$   $\max\left\{ \gamma, \frac{ \alpha \, \beta }{ n - 1}\frac{1 - \alpha^{n - 1}}{ 1 - \alpha } \right\}$

  : The $v$-norm for different choices for $T$ (including the $\infty$-norm). \[tab:1\]

  Removing:             Ring (i.e., $P=P^\circ (n)$)   Star (i.e., $P=P^\star (n)$)
  --------------------- ------------------------------ ------------------------------------
  $1$st row of $P$      $ 1 $                          $ \max\{\gamma,(n-1)(1-\gamma)\} $
  $1$st column of $P$   $ 1 $                          $\gamma + \frac{\beta}{n-1}$

  : The 1-norm for different choices for $T$.\[tab:2\]

In the following we discuss a general way of choosing $ T $. Let $ P_{ \bullet j } $ denote the $ j$-th column of $ P $. For a column vector $ x$ we let $ \| x \|_{\inf} = \inf_i | x_i | $. We denote the $ j$th unit vector by $ e_j $, i.e., $ e_j $ has all elements zero except for the $j$-th element which is equal to 1. \[le:taboo\] Let $ P $ be a Markov transition matrix on $ S $. Let $ j^\ast $ be the column index with maximal value $ \| P_{ \bullet j } \|_{\inf}$. If $ || P_{ \bullet j^\ast } ||_{\inf } > 0 $, let $ h = P_{ \bullet j^\ast} $ and $ \sigma =e_{j^\ast}$; then for $ T $ defined as in (\[eq:taboo\]) it holds that $ || T ||_v < 1 $, where $ v \equiv 1 $. [**Proof:**]{} Without loss of generality assume that after appropriate relabeling of the states $ j^\ast = 0 $. Let $|| P_{ \bullet 0 } ||_{\inf } = q> 0 $.
Removing the first column from $ P $ thus decreases the row sum of each row of $ P $ by at least $ q$, which implies the desired result. $ \Box $

Condition Number Perturbation Bounds for Finite Chains {#sec:norms2}
------------------------------------------------------

Several condition numbers have been proposed in the literature for finite Markov chains with state space $ S = \{0,1,\dots,n-1\}$, see [@chomayer] for an overview. We keep the numbering as in [@chomayer], where seven different condition numbers were discussed. Moreover, it is shown in [@chomayer] that condition numbers $ \kappa_3 $ and $ \kappa_ 6$, to be defined presently, outperform the other condition numbers, while the choice between $ \kappa_3 $ and $ \kappa_ 6$ depends on the choice of norms. Condition number $\kappa_{3}$ is given by [@haviv; @Kirk]: $$\label{kappa:3} \kappa_{3} (P)=\frac{ \max_{j}( D_P(j,j) - \min_{i} D_P(i,j) ) }{ 2 }$$ and leads to the following bound: $$|| {\pi}_R^\top - \pi_P^\top ||_{1} \leq \kappa_{3}(P) || R - P ||_{\infty} .$$ Alternatively, condition number $\kappa_{6}$ in [@Senata] is given by: $$\label{kappa:7} \kappa_{6}(P) = \frac{1}{2} \max _{ i , j } \sum_{k=0}^{n-1} | D_P(i,k) - D_P(j,k) | ,$$ and the resulting bound is as follows: $$|| {\pi}_R^\top - \pi_P^\top ||_{\infty} \leq \kappa_{6}(P) || R - P ||_{\infty} .$$ The condition numbers for the Markov chains introduced in Example \[ex:mc\] are as follows: $$\kappa_{3} ( P^s ) = \frac{1}{2(p+q)} \quad \mbox{and} \quad \kappa_{6} ( P^s ) = \frac{1}{p+q} ,$$ $$\kappa_{3} ( P^\circ (n) ) = \frac{\lfloor \frac{ n }{ 2 }\rfloor (n - \lfloor \frac{ n }{ 2 }\rfloor ) }{ 4 \, b \, n} ,$$ $$\kappa_{6} ( P^\circ (n) ) = \frac{1}{2}\sum\limits_{k=0}^{n-1} \left |D_{P^\circ (n)} \left ( \left \lfloor \frac{ n }{ 2 } \right \rfloor + 1, k-1 \right )-D_{P^\circ (n)}(1, k-1) \right | ,$$ and $$\kappa_{3} ( P^\star (n)) = \frac{ 1 }{ 2 ( 1 - \gamma ) } \quad \mbox{and} \quad \kappa_{6} ( P^\star (n)) =
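As a sanity check, the closed-form values of $\kappa_3(P^s)$ and $\kappa_6(P^s)$ can be reproduced directly from the definitions (\[kappa:3\]) and (\[kappa:7\]). The sketch assumes the standard two-state form of $P^s$ with off-diagonal entries $p$ and $q$ (the kernel itself is defined in Example \[ex:mc\], earlier in the paper):

```python
import numpy as np

def deviation_matrix(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    pi = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
    Pi = np.outer(np.ones(n), pi)
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

def kappa3(D):
    # ( max_j ( D(j,j) - min_i D(i,j) ) ) / 2
    return max(D[j, j] - D[:, j].min() for j in range(D.shape[0])) / 2.0

def kappa6(D):
    # (1/2) max_{i,j} sum_k | D(i,k) - D(j,k) |
    n = D.shape[0]
    return max(np.abs(D[i] - D[j]).sum() for i in range(n) for j in range(n)) / 2.0

p, q = 0.3, 0.2
P_s = np.array([[1 - p, p],
                [q, 1 - q]])    # assumed form of the two-state chain P^s
D = deviation_matrix(P_s)
```

With these parameters $\kappa_3(P^s) = 1/(2(p+q)) = 1$ and $\kappa_6(P^s) = 1/(p+q) = 2$, and the inequality $2\kappa_3 \leq \kappa_6$ discussed below holds with equality.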
\frac{1}{1-\gamma}.$$ It is worth noting that $ \kappa_{3} ( P^\circ (n) ) $ grows linearly in $ n$. As the condition number applies to the 1-norm of $ \pi_R^\top - \pi_P^\top $, which is bounded by 1, the bound thus becomes trivial for large $ n $. For the Star Network, $ \kappa_3 $ and $ \kappa_6 $ do not depend on $ n$ but become trivial for $ \gamma $ close to $ 1 $. The fact that $ \kappa_3 $ and $ \kappa_6 $ behave so differently for the Ring Network and the Star Network stems from the fact that both condition numbers are defined via the deviation matrix. The elements of the deviation matrix are related to mean recurrence times of the corresponding Markov chain, see [@bbb; @2]. Specifically, in the Ring Network the length of a path from, say, node 0 to node $ \lfloor n/2 \rfloor $ grows with $ n $, whereas in the Star Network any node can be reached from any other node in 2 steps. It is known that $ \kappa_3 ( P ) < \kappa_6 ( P ) $ (in fact it holds that $2\kappa_3(P) \leq \kappa_6(P)$, see [@Kirk]). Note that this inequality implies for the Ring Network that $ \kappa_6 (P^\circ (n)) $ tends to infinity as well. In [@Kirk] it is shown that $ \kappa_3 (P) \geq (n-1)/(2n)$, with $ n$ being the size of the transition matrix, and a Markov chain is provided for which equality is reached. As we will discuss in the subsequent section, $ \kappa_6 ( P ) $ may be preferable to $ \kappa_3 ( P ) $ in case bounds on perturbations of expected rewards are considered.

The Choice of Norms in Perturbation Analysis {#sec:norms}
--------------------------------------------

In bounding perturbations it is important to understand how a perturbation of the Markov chain affects the steady-state reward. Put differently, using the notation already introduced in the introduction, relating a perturbation bound for $ || \pi_R^\top - \pi_P^\top || $ to that of $ | \pi_R^\top f - \pi_ P^\top f | $ is of importance in applications.
The following lemma formalizes how the steady-state reward can be bounded via perturbation bounds for $ || \pi_R^\top - \pi_P^\top || $ in case of different norms. \[le:norms\] For arbitrary measures $ \widetilde{\mu} $ and $ \mu $ on $ \mathbb{R}^S$ and cost function $ f \in \mathbb{R}^S $ such that $ | \widetilde{\mu}^\top f - \mu^\top f | < \infty $ it holds $$| \widetilde{\mu}^\top f - \mu^\top f | \leq \left \{ \begin{array}{c} || \widetilde{\mu}^\top - \mu^\top ||_\infty \, || f ||_\infty \\ \\ || \widetilde{\mu}^\top - \mu^\top ||_1 \, || f ||_1 \\ \\ || \widetilde{\mu}^\top - \mu^\top ||_v \, || f ||_v \\ \end{array} \right . .$$ [**Proof:**]{} By simple algebra, $$| \widetilde{\mu}^\top f - \mu^\top f | \leq \sum_i | \widetilde{\mu}_i - \mu_i| \, | f_i | \leq \sup_j | f_j | \sum_i | \widetilde{\mu}_i - \mu_i| = || \widetilde{\mu }^\top - \mu^\top ||_\infty \, || f ||_\infty .$$ For the last inequality, which coincides with the first inequality in case of $\alpha = 1$, note that $$\begin{aligned} | \widetilde{\mu}^\top f - \mu^\top f | & \leq \sum_i | \widetilde{\mu}_i - \mu_i| \, | f_i | \\ & = \sum_i | \widetilde{\mu}_i - \mu_i| v_i \, \frac{ | f_i|}{v_i} \\ & \leq \left ( \sup_j \frac{| f_j | }{v_j } \right ) \sum_i | \widetilde{\mu}_i - \mu_i| v_i \\ & = || \widetilde{\mu}^\top - \mu^\top ||_v \, || f ||_v ,\end{aligned}$$ which concludes the proof. $ \Box $ In this article we study the case that $ \mu $ in Lemma \[le:norms\] is a stationary distribution. Lemma \[le:norms\] illustrates that there is a trade-off in the choice of norms. Indeed, since $ \| \pi_R^\top - \pi_P^\top \|_1 \leq \| \pi_R^\top - \pi_P^\top \|_\infty $ it seems attractive to ask for perturbation bounds on $ \| \pi_R^\top - \pi_P^\top \|_1 $. The downside is that this choice affects the norm of the reward vector; in particular, it holds that $\|f\|_\infty \leq \|f\|_1$. As an illustration, consider the following example of a finite Markov chain.
Let $ P $ be the transition matrix of an M/M/1/N queue, where $ N $ is the size of the buffer of the queue including the service place, and suppose that we are interested in the effect that replacing $ P $ by $ R $ has on the stationary queue length. More specifically, let $ f_l( s ) = s $, for $ s \in S = \{ 0 , 1 , \ldots , N \} $, and note that $$\| f_l \|_1 = \frac{N ( N+1)}{2} > N = \| f_l \|_\infty .$$ In the light of Lemma \[le:norms\], in bounding $ | \pi_P^\top f_l - \pi_R^\top f_l |$ the smaller bound on the norm distance of $ \pi_R^\top - \pi_P^\top $ by applying the 1-norm might be outweighed by the increase in norm for the reward. If, on the other hand, one is only interested in an overflow probability, i.e., $ f_p ( s ) = 0 $ for $s < N$ and $ f_p (N) = 1$, then $ \| f_p \|_1 = \| f_p \|_\infty=1 $ and the 1-norm bound for $ \pi_R^\top - \pi_P^\top $ is appropriate. Another example where this norm trade-off is relevant is in the analysis of the ‘wisdom of crowds’ phenomenon in social networks [@golub2010]. Here, $ f $ represents a belief vector with bounded support, i.e., $ f ( s) \in [ a , b ] $ for $ a < b \in \mathbb{R} $, and $ \pi_P^\top f $ is the consensus reached in the social network modelled by $ P $. From the above discussion it is clear that the choice of the norm for evaluating $ \pi_R^\top - \pi_P^\top $ depends on the application. It is worth noting that the $v$-norm can be adjusted to the problem under consideration. To see this, recall that we have assumed that $v $ is of the form $ v ( i ) = \alpha^i $, $ i \in S $, with $ \alpha $ some unspecified constant. Let us express this dependency of $ v $ on $ \alpha $ here by writing $ v_\alpha $.
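The trade-off is easy to observe numerically. A minimal sketch with illustrative uniformised birth-death probabilities (not taken from the paper): for the queue-length reward, both Hölder pairings of Lemma \[le:norms\] bound the reward gap, but with very different constants, since $\|f_l\|_1 = N(N+1)/2$ while $\|f_l\|_\infty = N$:

```python
import numpy as np

def bd_kernel(N, lam, mu):
    # Uniformised birth-death kernel of an M/M/1/N queue (illustrative parameters).
    P = np.zeros((N + 1, N + 1))
    for s in range(N + 1):
        up = lam if s < N else 0.0
        down = mu if s > 0 else 0.0
        if s < N:
            P[s, s + 1] = up
        if s > 0:
            P[s, s - 1] = down
        P[s, s] = 1.0 - up - down
    return P

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

N = 5
pi_P = stationary(bd_kernel(N, 0.30, 0.40))
pi_R = stationary(bd_kernel(N, 0.35, 0.40))   # perturbed arrival probability
diff = pi_R - pi_P

f_l = np.arange(N + 1, dtype=float)           # queue-length reward
gap = abs(diff @ f_l)
# measure inf-norm = sum of |entries|, reward inf-norm = max |entry| (the paper's convention)
bound_inf = np.abs(diff).sum() * np.abs(f_l).max()
# measure 1-norm = max |entry|, reward 1-norm = sum of |entries|
bound_one = np.abs(diff).max() * np.abs(f_l).sum()
```

Which of the two bounds is tighter depends on how the mass of the perturbation is spread over the states, which is exactly the point of the discussion above.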
Hence, the best bound for $ | \widetilde{\mu}^\top f - \mu^\top f | $ by means of the $ v $-norm is given by the solution of the following minimization problem $$\label{eq:starrr} | \widetilde{\mu}^\top f - \mu^\top f | \leq \min_\alpha || \widetilde{\mu}^\top - \mu^\top ||_{v_\alpha} \, || f ||_{v_\alpha} .$$ The upside of this minimization is that it trades off the effect the norm has on the reward and the measure distance. The downside is of course that the minimization itself can be rather demanding, as $ || \widetilde{\mu}^\top - \mu^\top ||_{v_\alpha} $ or a bound thereof typically has a complex form. For denumerable Markov chains, $ v $ can be constructed via a Lyapunov-type drift condition; see [@liu12] for details.

Perturbation Bounds
-------------------

In perturbation analysis, $ D_P $ occurs in conjunction with a perturbation matrix $ \Delta = R -P $ which has row sums equal to zero. From $ \Delta ( I - \Pi_P ) = \Delta $ and (\[eq:D\]) it follows that $$\label{eq:DnewState} \Delta ( I - T)^{-1} ( I - \Pi_P ) = \Delta D_P$$ and instead of $ D_P $ for perturbation bounds it suffices to consider $$\label{eq:Dnew} ( I - T)^{-1} ( I - \Pi_P ) .$$ Note that due to the fact that $ \Delta ( I - T)^{-1} $ fails to have row sums equal to zero, the term $ I - \Pi_P $ on the LHS in (\[eq:DnewState\]) cannot be disregarded. In other words, $ \Delta ( I - T)^{-1} \not = \Delta ( I - T)^{-1} ( I - \Pi_P ) $, except for special cases. By simple algebra, it holds for Markov transition matrices $ R $ and $ P $ that $$\begin{aligned} \pi_{ R }^\top & = & \pi_{P }^\top + \pi_{R}^\top ( R - P) D_{ P } \label{updateD} \\ & = & \pi_{P }^\top + \pi_{R}^\top ( R - P ) ( I - T)^{-1} ( I - \Pi_P ) \label{update} .\end{aligned}$$ \[re:1\] The above formula is called the [*update formula*]{} and allows for deriving a first perturbation bound.
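The update formula (\[updateD\]) can be verified on the two-state chain, for which everything is available in closed form; the sketch assumes the standard two-state form with off-diagonal entries $p$ and $q$:

```python
import numpy as np

def two_state(p, q):
    return np.array([[1 - p, p], [q, 1 - q]])

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

def deviation_matrix(P):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), stationary(P))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = two_state(0.3, 0.2)
R = two_state(0.4, 0.1)
pi_P, pi_R = stationary(P), stationary(R)
D_P = deviation_matrix(P)

# update formula: pi_R appears on both sides, so this is an identity rather than a recipe
lhs = pi_R
rhs = pi_P + pi_R @ (R - P) @ D_P
```

Here $\pi_P^\top = (0.4, 0.6)$ and $\pi_R^\top = (0.2, 0.8)$, and the identity holds exactly.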
Using the fact that $ || \pi_{R}^\top ||_\infty = 1 $, (\[update\]) yields $$|| \pi_{ R}^\top - \pi_{P }^\top ||_\infty \leq || R- P ||_\infty \, || ( I - T)^{-1} ( I - \Pi_P ) ||_\infty ,$$ which provides a first perturbation bound. Put differently, $ || ( I - T)^{-1} ( I - \Pi_P ) ||_\infty $ yields a condition number for $|| \pi_{ R}^\top - \pi_{P }^\top ||_\infty$. Repeated insertion of the expression for $ \pi_{ R}^\top $ in (\[updateD\]) into the RHS of (\[updateD\]) yields $$\label{eq:21} \pi_{ R }^\top = \pi_{P}^\top \sum_{ k=0}^N ( ( R - P ) D_P )^k + \pi_{R} ^\top( ( R - P ) D_P )^{N+1} .$$ We call $$B ( R , P ) = \lim_{N \rightarrow \infty } \pi_{R}^\top ( ( R - P ) D_P )^{N}$$ the [*bias term*]{}, provided that the limit exists. Letting $ N $ tend to infinity in (\[eq:21\]) we arrive at $$\begin{aligned} \pi_{ R }^\top & = & \pi_{P}^\top \sum_{ k=0}^\infty ( ( R - P ) D_P )^k \: + B ( R , P ) \label{eq:basicseries} \\ & = & \pi_{P}^\top ( I - ( R - P ) D_P )^{-1} \nonumber + B ( R , P ) ,\end{aligned}$$ provided the series exists and the bias term is finite. As we will explain in the following, the bias term is typically zero in case that $ R $ and $ P $ are uni-chain. The series in (\[eq:basicseries\]) already appears without the bias term in [@PA7]. It has been rediscovered in [@Cao] and extended to Markov chains on a general state space in [@heidergott]; both references study problem classes where the bias term is zero. In deriving the series expansion in (\[eq:basicseries\]) we required that the stationary distribution $ \pi_{R } $ exists. As the next theorem shows, convergence of the series already implies existence of $ \pi_{R } $. Moreover, sufficient conditions are provided for the bias term to be equal to the zero vector. \[th:stability\] Let $ P $ be irreducible, aperiodic and positive recurrent.
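A quick numerical illustration of the series (\[eq:basicseries\]): for a pair $R, P$ with $\|(R-P)D_P\|_\infty < 1$, the partial sums converge to $\pi_R^\top$ and the bias term vanishes. The kernels below are illustrative, with $R$ a small perturbation of $P$ towards the uniform kernel:

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

def deviation_matrix(P):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), stationary(P))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.0],
              [0.1, 0.2, 0.7]])
R = 0.9 * P + 0.1 / 3.0                      # perturbation towards the uniform kernel

pi_P, pi_R = stationary(P), stationary(R)
M = (R - P) @ deviation_matrix(P)
rho = np.linalg.norm(M, np.inf)              # must be < 1 for the series to converge

partial = pi_P.copy()                        # partial sums pi_P sum_k M^k
term = pi_P.copy()
for _ in range(80):
    term = term @ M
    partial = partial + term
```

Since $R$ here is irreducible and aperiodic, Theorem \[th:stability\] (ii) below applies and the truncated series reproduces $\pi_R^\top$.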
Suppose that the series in (\[eq:basicseries\]) converges to some finite limit $ \mu^\top $, i.e., let $$\mu^\top = \pi_{P}^\top ( I - ( R - P ) D_P )^{-1} .$$ - If $ \mu_i \geq 0 $, for $ i \in S $, then $ \mu $ is a stationary distribution of $ R $. - If $ R $ is irreducible and aperiodic and there exists $ i \in S $ such that $ || _i R || < 1 $, then $ \mu $ is the unique stationary distribution of $ R $ and $ B ( R , P ) $ is the zero vector. [**Proof:**]{} To see that $ \mu $ is an invariant measure with respect to $ R$, note that $$\Pi_{P} + ( I - P ) D_{P} = I.$$ Multiplying the above equation from the left by $ \mu^\top $, and using that $ \mu^\top \Pi_P = ( \mu^\top \bar{1} ) \pi_P^\top = \pi_P^\top $ (note that $ \mu^\top \bar{1} = \pi_P^\top \bar{1} = 1 $, since $ D_P \bar{1} = 0 $ annihilates all terms with $ k \geq 1 $ in the series for $ \mu^\top $), yields $$\label{eq:erst1} \pi_{P}^\top + \mu^\top ( I - P) D_{P} = \mu^\top .$$ By simple algebra, $$\begin{aligned} \mu^\top & = & \pi_{P}^\top \sum_{ k=0}^\infty ( ( R - P ) D_P )^k \nonumber \\ & = & \pi_{P}^\top + \pi_{P}^\top \sum_{ k=1}^\infty ( ( R - P ) D_P )^k \nonumber \\ & = & \pi_{P}^\top + \pi_{P}^\top \sum_{ k=0}^\infty ( ( R - P ) D_P )^k ( R - P ) D_{P} \nonumber \\ & = & \pi_{P}^\top + \mu^\top ( R - P ) D_{P} . \label{eq:zweite2} \end{aligned}$$ Subtracting (\[eq:erst1\]) from (\[eq:zweite2\]) yields $$\mu^\top ( I - R ) D_{P} = 0 .$$ Existence of $ D_{P} $ implies that $ D_{P} = ( I - P + \Pi_{P} ) ^{-1} - \Pi_{P} $, see (\[eq:ddd\]). Since $ ( I - R ) \Pi_{P} = 0 $, it holds that $$\mu^\top ( I - R ) ( I - P + \Pi_{P} ) ^{-1} = 0 .$$ Multiplying the above equation from the right with $ ( I - P + \Pi_{P} ) $ yields $ \mu^\top = \mu^\top R $, which shows that $ \mu $ is invariant with respect to $ R$. Further, multiplying (\[eq:erst1\]) from the right with an appropriate column vector of ones, i.e., $\bar{1}$, shows $$\pi_{P}^\top \bar{1} + \mu^\top ( I - P) D_{P} \bar{1} = \mu^\top \bar{1} \Leftrightarrow \mu^\top \bar{1} = 1 ,$$ since $( I - P) D_{P} \bar{1} = ( I - \Pi_P) \bar{1} = 0$. This shows that $\mu$ sums up to $1$. Provided that $ \mu $ is component-wise non-negative, $ \mu $ is a stationary distribution, which proves part (i).
For part (ii), note that by Proposition \[cor:pos\] it follows that $ R $ is positive recurrent. This together with the assumption that $ R $ is irreducible and aperiodic implies that $ R$ is ergodic and $$\label{eq:nmb} \lim_{ n \rightarrow \infty } R^n = \Pi_R ,$$ where $ \Pi_R $ is a matrix with all rows equal to $ \pi_R^\top $ and $ \pi_R $ is the unique stationary distribution of $ R $. Since all rows of $ \Pi_R $ are identical to $ \pi_R^\top $ and $ \mu^\top \bar{1} = 1 $, it holds that $$\label{eq:nmb2} \mu^\top \Pi_R = \pi_R^\top .$$ We have already shown that $ \mu $ is an invariant distribution of $ R $. This together with (\[eq:nmb\]) and (\[eq:nmb2\]) yields $$\mu^\top = \lim_{ n \rightarrow \infty } \mu^\top R^n = \mu^\top \Pi_R = \pi_R^\top .$$ Uniqueness of the solution follows from ergodicity of $ R $, and the bias term is consequently the zero vector, which concludes the proof. $ \Box $ Part (i) of Theorem \[th:stability\] applies in case that $ R $ is a multi-chain with transient states. In this case the stationary distribution is not unique. This can be nicely explained via the bias term. As the bias term depends on $ P$, it carries information on the Markov chain that is used in approximating $ \pi_R $. Letting $ P $ tend to $ R $, $ B ( R, P ) $ typically will not tend to zero if $ R $ is a multi-chain. This phenomenon is studied in the literature on singular perturbations, see, for example, [@sp1; @sp2; @sp3]. Note that uniqueness of the stationary distribution can only be established under the conditions put forward in part (ii) of Theorem \[th:stability\].
The series in (\[eq:basicseries\]) can be used for deriving perturbation bounds via $$\begin{aligned} \pi_R^\top - \pi_ P^\top & = & \pi_P^\top \sum_{ k=1}^\infty ( (R - P ) D_P )^k + B ( R, P ) \label{proof} \\ & = & \pi_P^\top ( R - P ) D_P \sum_{ k=0}^\infty ( ( R- P ) D_P )^k + B ( R, P ) \nonumber \\ & = & \pi_P^\top ( R - P ) D_P ( I - ( R - P ) D_P )^{-1} + B ( R, P ) \label{eq:SSBB} .\end{aligned}$$ Following the above line of equations, bounding $ \pi_R^\top - \pi_ P^\top $ requires bounding $ ( I - ( R - P ) D_P )^{-1} $. We will show that the conditions put forward in the following lemma not only imply norm bounds for $ ( I - ( R - P ) D_P )^{-1} $ but also imply that $ B ( R , P ) $ is the zero vector. \[le:basicbound\] For any matrix norm it holds with the above notation that: - If $ || ( R - P ) D_P || < 1 $, then $$|| ( I - ( R - P ) D_P )^{-1} || \leq \frac{1}{1 - || ( R - P ) D_P || } ,$$ - if $ || R - P || \, || D_P || < 1 $, then $$|| ( I - ( R - P ) D_P )^{-1} || \leq \frac{1}{1 - || R - P || \, || D_P || } ,$$ - if $\|T\| + || R - P ||( 1 + || \pi_P ^\top || ) < 1 $, then $$|| ( I - ( R - P ) D_P )^{-1} || \leq \frac{1 - || T || }{1 - || T || - || R - P || ( 1 + ||\pi_P^\top|| )} .$$ In addition, any of the conditions (i), (ii) or (iii) implies that the bias term equals the zero vector. [**Proof:**]{} We only provide a proof of part (iii), as the proofs of (i) and (ii) can be obtained from a similar (and simpler) line of arguments. Using the taboo kernel representation in (\[eq:D\]) it holds that $$( R - P ) D_P = ( R - P ) \sum_{k=0}^\infty T^k ( I - \Pi_P ) .$$ By condition (iii) it follows that $ || T || <1 $, and thus applying norms yields $$\label{eq:proof1} || ( R - P ) D_P || \leq || R - P || \frac{ 1 + || \pi_P^\top || }{1 - ||T|| } .$$ Our condition $\|T\| + || R - P ||( 1 + || \pi_P ^\top || ) < 1 $ is equivalent to the expression on the above RHS being strictly less than 1.
This implies that the Neumann series $ \sum_{k=0}^\infty ( ( R - P ) D_P )^k $ converges. Consequently, $ I - ( R - P ) D_P $ is invertible with norm bounded by $$\begin{aligned} || ( I - ( R - P ) D_P )^{-1} || & \leq & \sum_{ k=0}^\infty || ( R - P ) D_P ||^k \\ & = & \frac{1}{1 - || ( R - P ) D_P || } .\end{aligned}$$ Inserting the bound in (\[eq:proof1\]) in the expression on the above RHS concludes the proof of the statement. For the proof of the last part of the lemma, note that $ || \pi_ R^\top (( R - P ) D_ P )^N || \leq || \pi_R^\top || \, || ( R - P ) D_P ||^N $, so that $ || ( R - P ) D_P || < 1 $ implies convergence of $ || \pi_R ^\top (( R - P ) D_P )^n || $ to zero as $ n $ tends to infinity. $ \Box $ It is worth noting that $ || ( R - P ) D_P || < 1 $ typically fails in case $ R $ is a multi-chain. Put differently, while in principle the results in the remainder of this article apply to $ R $ being a multi-chain, we have found no example of a pair $ R , P $ with $ R $ a multi-chain and $ P $ a uni-chain such that $ || ( R - P ) D_P || < 1 $. We conjecture that $ || ( R - P ) D_P || < 1 $ rules out the case that $ R $ is a multi-chain, but we have not been able to prove this so far. Note that $$|| ( R - P ) D_P || \leq || R - P || \, || D_P || \leq \frac{|| R - P || ( 1 + || \pi_P ^\top|| )}{ 1 - || T ||}$$ implies that the bounds put forward in Lemma \[le:basicbound\] are increasingly limited in their applicability, while the evaluation of the bounds becomes simpler. In fact, computing $ || ( R - P ) D_P || $ is often not feasible, as $ D_P $ is either not known in closed form or is prohibitively complex in general, see [@heder4; @heder3; @Koole]. For the Markov chains in Example \[ex:mc\], $D_P$ is known in explicit form. For this type of problem it makes sense to apply the norm bound put forward in Lemma \[le:basicbound\] [*(i)*]{} to (\[eq:SSBB\]).
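The three bounds of Lemma \[le:basicbound\] can be observed numerically. The sketch below uses illustrative kernels, $\infty$-norms throughout, the taboo kernel $T = {}_0P$, and a perturbation scaled so that condition (iii) holds; it checks that each bound indeed dominates the true value of $\|(I-(R-P)D_P)^{-1}\|_\infty$:

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

def deviation_matrix(P):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), stationary(P))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.0],
              [0.1, 0.2, 0.7]])
R = P + 0.05 * (np.full((3, 3), 1.0 / 3.0) - P)   # small perturbation so condition (iii) holds

D_P = deviation_matrix(P)
M = (R - P) @ D_P
T = P.copy(); T[:, 0] = 0.0                       # taboo kernel _0P
norm = lambda A: np.linalg.norm(A, np.inf)        # maximal absolute row sum
pi_norm = np.abs(stationary(P)).sum()             # row-vector infinity-norm of pi_P (= 1)

actual = norm(np.linalg.inv(np.eye(3) - M))
bound_i = 1.0 / (1.0 - norm(M))                                                 # Lemma (i)
bound_ii = 1.0 / (1.0 - norm(R - P) * norm(D_P))                                # Lemma (ii)
bound_iii = (1.0 - norm(T)) / (1.0 - norm(T) - norm(R - P) * (1.0 + pi_norm))   # Lemma (iii)
```

As noted in the text, the three bounds are increasingly conservative but increasingly cheap: (iii) needs neither $D_P$ nor $(R-P)D_P$, only $\|T\|$ and $\|R-P\|$.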
More specifically, assuming $||( R - P ) D_P || < 1$, let $$\Delta_{\rm DB} ( R , P ) := \frac{|| \pi_P^\top ( R - P) D_P || }{1 - ||( R - P ) D_P || } ;$$ then $$\label{eq:direct} || \pi_R^\top - \pi_ P^\top || \leq \Delta_{\rm DB} ( R , P ) ,$$ which we will call the [*direct bound*]{} (DB). \[rem:3\] The bound in (\[eq:direct\]) has the following nice feature. Let $ P $ and $ R $ be two Markov chains with $ P \not = R $ but with the same stationary distribution. Then, (\[eq:direct\]) detects this and yields the correct value 0, whereas condition number type bounds yield a non-zero bound. The next bound can serve as an alternative in case $D_P$ is difficult to find. It follows from replacing $(R-P)D_P$ in (\[eq:SSBB\]) with the taboo kernel representation and bounding the result via (\[eq:proof1\]) and Lemma \[le:basicbound\] (iii). Specifically, this leads to $$\begin{aligned} || \pi_R^\top - \pi_ P^\top || & \leq & || \pi_P^\top || \, || R - P || \frac{ 1+ || \pi_P^\top||}{ 1 - || T || } \frac{1 - || T||}{ 1 - || T|| - || R - P|| (1+ || \pi_P^\top|| ) } .\end{aligned}$$ Let $$\Delta_{\rm SSB} ( R , P ) := || \pi_P^\top || \, || R - P || \frac{1+ || \pi_P^\top||}{ 1- || T|| - || R - P|| ( 1+ || \pi_P^\top|| )} \label{eq:SSB} ,$$ provided that $ || T || + || R - P|| ( 1+ || \pi_P^\top|| ) < 1 $. Then, $$|| \pi_R^\top - \pi_ P^\top || \leq \Delta_{\rm SSB} ( R , P )$$ and the bound $ \Delta_{\rm SSB} ( R , P ) $ in (\[eq:SSB\]) is called [*Strong Stability Bound*]{} (SSB) in the literature [@karta96]. For applications of SSB, we refer to [@abbassm; @karta83; @rabta-4; @bouammor; @boukir; @lekadir]. An obvious improvement of the bound in (\[eq:SSB\]) is to replace $ || \pi_P^\top || \, || R - P || $ by $ || \pi_P^\top ( R - P ) ||$; see Remark \[rem:3\]. While $ P $ and $ \pi_P $ are fixed, and $ T $ offers in practice only limited flexibility, $ R $ is a free variable of the perturbation bound. Essentially, the direct bound and SSB only apply if $ R $ is not too far away from $ P$, i.e., if $ || R - P || $ is small.
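Continuing with illustrative kernels, the direct bound is straightforward to evaluate; in the sketch below the $\infty$-norm of a row vector is the sum of absolute values (the paper's convention):

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

def deviation_matrix(P):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), stationary(P))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.0],
              [0.1, 0.2, 0.7]])
R = 0.9 * P + 0.1 / 3.0                       # illustrative perturbed kernel

pi_P, pi_R = stationary(P), stationary(R)
M = (R - P) @ deviation_matrix(P)
rho = np.linalg.norm(M, np.inf)               # needs to be < 1 for DB to apply

row_norm = lambda mu: np.abs(mu).sum()        # infinity-norm of a row vector
delta_DB = row_norm(pi_P @ M) / (1.0 - rho)   # the direct bound Delta_DB(R, P)
actual = row_norm(pi_R - pi_P)
```

Note the feature of Remark \[rem:3\]: the numerator $\|\pi_P^\top(R-P)D_P\|$ vanishes whenever $P$ and $R$ share the same stationary distribution, so the bound then correctly returns 0.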
This is the major drawback of this type of perturbation bounds compared to condition number bounds. To overcome this drawback, we may scale the perturbation such that the perturbation bounds do apply. To see this, consider the scaled model in (\[eq:convex\]), where the static perturbation is replaced by a scaled one, i.e., we perturb $ P $ by $ \theta ( R - P ) $ and denote the resulting transition matrix by $ P ( \theta ) $. Now, $ \theta $ can be chosen such that the norm bounds apply to $ \theta || R - P || $. For example, the condition on the applicability of SSB in (\[eq:SSB\]) translates to $$|| T || + \theta || R - P|| ( 1+ || \pi_P^\top|| ) < 1 \quad \text{ iff } \quad 0 \leq \theta < \frac{ 1 - || T ||}{ || R - P|| ( 1+ || \pi_P^\top || ) } .$$ We call the upper bound for $ \theta $ on the RHS above the [*domain of SSB with respect to $ R $*]{}. In the following we take an alternative route for obtaining a perturbation bound. The starting point is (\[updateD\]), but in contrast to the derivation of (\[eq:basicseries\]) we now perform the insertion operation only $ K $ times, leading to $$\label{eq:rty} \pi_{ P ( \theta ) }^\top = \pi_{P }^\top \sum_{k=0}^K (\theta ( R - P ) D_P)^k \, + \pi_{P(\theta)}^\top ( \theta ( R - P) D_{ P } )^{ K+1 } .$$ For $ K \geq 1 $, equation (\[eq:rty\]) yields the following bound: $$\label{eq:seab} \| \pi_{ P ( \theta ) }^\top - \pi_P^\top \| \leq \left \| \pi_P^\top \sum_{k=1}^K (\theta ( R - P ) D_P)^k \right \| \, + \| \pi_{P(\theta)}^\top ( \theta ( R - P) D_{ P } )^{ K+1 } \| .$$ Obviously, $ \pi_{P(\theta)}^\top $ is not known, and for the actual bound we use the fact that $$\begin{aligned} \| \pi_{P(\theta)}^\top ( \theta ( R - P) D_{ P } )^{ K+1 } \| & \leq & \| \pi_{P(\theta)}^\top \| \| ( \theta ( R - P) D_{ P } )^{ K+1 } \| \\ & \leq & c_{||\cdot||} \| ( \theta ( R - P) D_{ P } )^{ K+1 } \| , \end{aligned}$$ where we define the norm-dependent upper bound $c_{||\cdot||}$ for $\| \pi_{P(\theta)}^\top \| $ as follows
$$\label{eq:cbound} c_{||\cdot||} = \sup_{Q \in \mathbb{P}(S)} \| \pi_Q^\top \|,$$ where $\mathbb{P}(S)$ represents all stochastic matrices defined on $S$. In case the $1$-norm (resp., infinity-norm) is applied to $ \pi_{P(\theta)}^\top $ we thus have $$\| \pi_{P(\theta)}^\top ( \theta ( R - P) D_{ P } )^{ K+1 } \| \leq \| ( \theta ( R - P) D_{ P } )^{ K+1 } \|.$$ For the general $ v $-norm, a bound $c_{||\cdot||}$ can be obtained from (\[eq:taboo2\]). The *series expansion perturbation bound of order $K$* (SEB($K$)) is now introduced by $$\label{eq:SEAPertBound} \Delta_{\rm SEB (K)} ( P ( \theta ) , P ) := \left \| \pi_P^\top \sum_{k=1}^K (\theta ( R - P ) D_P)^k \right \| \, + c_{||\cdot||} \| ( \theta ( R - P) D_{ P } )^{ K+1 } \|,$$ where $c_{||\cdot||}$ is as defined in (\[eq:cbound\]), and it holds that $$\| \pi_{ P ( \theta ) }^\top - \pi_P^\top \| \leq \Delta_{\rm{ SEB} (K)} ( P ( \theta ) , P ) ,$$ for $ \theta \in [ 0 , 1 ] $. \[re:ohoh\] Note that we may also bound as follows: $$\label{eq:seabNumEfficient} \| \pi_{ P ( \theta ) }^\top - \pi_P^\top \| \leq \sum_{k=1}^K \|\pi_P^\top (( R - P ) D_P)^k \| \theta^k \, + c_{||\cdot||} \| (( R - P) D_{ P } )^{ K+1 } \| \theta^{K+1} ,$$ so that the polynomial terms only have to be calculated once and can be used for evaluating the bound for different values of $\theta$. This allows for fast computation and memory efficiency but, due to the additional bounding, the numerical quality of the bound decreases. From $$\| ( ( R - P) D_{ P } )^{ K+1 } \| \leq \| ( R - P) D_{ P } \| ^{ K+1 }$$ it follows that the series in (\[eq:basicseries\]) converges for $P(\theta)= P + \theta ( R - P ) $ at least for $ \theta < ( \| ( R - P) D_{ P } \| )^{-1} $. Hence, for $ \theta $ sufficiently small, $$\label{eq:series} \pi_{P }^\top \sum_{k=0}^K ( \theta ( R - P ) D_P)^k$$ provides an approximation of $ \pi_{P(\theta)} $, where the error is bounded by some constant times $ \theta^{K+1} \| ( ( R - P) D_{ P } )^{ K+1 } \| $.
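The sketch below evaluates SEB($K$) for an illustrative pair of kernels, with $c_{\|\cdot\|} = 1$ for the $\infty$-norm convention (since $\|\pi_Q^\top\|_\infty = 1$ for every stochastic $Q$), and checks that it bounds the true distance for several $K$ and $\theta$:

```python
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    return np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]

def deviation_matrix(P):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), stationary(P))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.6, 0.0],
              [0.1, 0.2, 0.7]])
R = np.full((3, 3), 1.0 / 3.0)                # illustrative target kernel

pi_P = stationary(P)
M = (R - P) @ deviation_matrix(P)
row_norm = lambda mu: np.abs(mu).sum()        # infinity-norm of a row vector
mat_norm = lambda A: np.linalg.norm(A, np.inf)

def seb(K, theta, c=1.0):
    # Delta_SEB(K): || pi_P sum_{k=1}^K (theta M)^k || + c || (theta M)^{K+1} ||
    tM = theta * M
    S, Pk = np.zeros_like(M), np.eye(M.shape[0])
    for _ in range(K):
        Pk = Pk @ tM
        S = S + Pk
    return row_norm(pi_P @ S) + c * mat_norm(Pk @ tM)

ok = True
for theta in (0.05, 0.2, 0.5):
    actual = row_norm(stationary(P + theta * (R - P)) - pi_P)
    for K in (1, 2, 3):
        ok = ok and (actual <= seb(K, theta) + 1e-12)
```

Note that, in contrast to DB and SSB, the validity of SEB($K$) requires no norm condition at all: it is a direct consequence of the identity (\[eq:rty\]).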
The series put forward in (\[eq:series\]) is called the series expansion approximation of order $ K$. Letting $ K $ tend to infinity in (\[eq:series\]) we obtain that $$\label{eq:neu3} \pi_{ P ( \theta )}^\top = \pi_{P }^\top \sum_{k=0}^\infty \theta^k ( ( R - P ) D_P)^k ,$$ for $ 0 \leq \theta < \| ( R - P ) D_P \|^{-1} $. Note that the above series expansion implies that $ \pi_{ P ( \theta )} $ tends to $ \pi_P $ as $ \theta $ tends to zero; for more details we refer to [@heidergott2; @heder3]. To test the performance of the different bounds in the scaled perturbation setting (i.e., (\[eq:convex\])) we will investigate the relative error of the perturbation bounds. Clearly, a better bound results in a smaller relative error. Consider a condition number bound for $ || \pi_{ P ( \theta )}^\top - \pi_P^\top || $. The following reasoning only uses the basic definition of a CNB in (\[opl\]), so that the arguments apply to the condition number bounds for finite chains discussed in Section \[sec:norms2\] and the CNB in Remark \[re:1\] as well. Generally speaking, let $ \Delta_{\rm CNB} ( P(\theta) , P ) = \theta \kappa || R - P || $ denote a condition number bound for $ \| \pi^\top_{P(\theta)} - \pi_P^\top \|$. Following (\[relerr\]), the relative error incurred by using $ \theta \kappa || R - P || $ rather than $ || \pi_{ P ( \theta )}^\top - \pi_P^\top || $ is given by $$\begin{aligned} \label{eq:err} \eta_{\rm{CNB}}(\theta) & :=& \frac{ \Delta_{\rm CNB} ( P ( \theta) , P ) - || \pi_{ P ( \theta )}^\top - \pi_P^\top || }{ || \pi_{ P ( \theta )}^\top - \pi_P^\top || } \nonumber \\ & = & \frac{ \theta \kappa|| R - P || - || \pi_{ P ( \theta )}^\top - \pi_P^\top ||}{ || \pi_{ P ( \theta )}^\top - \pi_P^\top || } \nonumber \\ & = & \frac{ \theta \kappa|| R - P || }{ || \pi_{ P ( \theta )}^\top - \pi_P^\top || } - 1.\end{aligned}$$ Note that this relative error is by definition $\geq 0$.
In the same vein, when we replace $ {\bf \Delta } ( R , P ) $ in (\[relerr\]) by the bounds $ \Delta_{\rm{SSB}}(P ( \theta) , P)$, $ \Delta_{\rm{DB}}(P ( \theta) , P) $ and $ \Delta_{\rm{SEB}(K)}(P ( \theta) , P) $, respectively, we obtain the corresponding absolute relative error expressions denoted by $ \eta_{\rm{SSB}}( \theta)$, $ \eta_{\rm{DB}}(\theta) $ and $ \eta_{\rm{SEB}(K)}( \theta) $. The following theorem analyses the relative error of the discussed bounds. It shows that in general the relative error of a condition number bound and SSB converges for $\theta \downarrow 0$ to a finite non-zero value, while the SEB($K$)-based bounds have the desirable property that the relative error vanishes. Moreover, the rate of convergence of the relative error of SEB($K$) can be explicitly computed. \[th:relerror\] Let $\| \pi_{P(\theta)}^\top - \pi_P^\top \| > 0$, for all $\theta \in (0,1]$. - The relative error of the condition number bound (CNB) is given by $$\eta_{\rm{CNB}}(\theta) = \frac{ || R - P || \kappa}{ || \pi_{P(\theta)}^\top ( R - P ) D_P || } - 1,$$ and it holds that $$\lim_{\theta \downarrow 0} \eta_{\rm{CNB}}(\theta) = \frac{ || R - P || \kappa}{ || \pi_P^\top ( R - P ) D_P || } - 1 \geq 0 ,$$ where equality is only reached in the special case when $|| R - P || \kappa$ equals $|| \pi_P^\top ( R - P ) D_P ||$. - Provided that $ || T || + \theta || R - P|| ( 1+ || \pi_P^\top|| ) < 1 $, the relative error of the strong stability bound (SSB) is given by $$\eta_{\rm{SSB}}(\theta) = \frac{ || R - P || \, || \pi_P^\top || (1+ || \pi_P^\top ||) }{ || \pi_{P(\theta)}^\top (R - P) D_P || (1 - || T || - \theta || R - P|| ( 1+ || \pi_P^\top || ))} -1,$$ and it holds that $$\lim_{\theta \downarrow 0} \eta_{\rm{SSB}}(\theta) = \frac{ || R - P || \, || \pi_P^\top || (1+ || \pi_P^\top ||) }{ || \pi_P^\top (R - P) D_P || (1 - || T ||)} -1 \geq 0 ,$$ where equality is only reached in the special case when the numerator equals the denominator in the fraction.
- Provided that $\theta\|(R-P)D_P\| < 1$, the relative error of the direct bound (DB) is given by $$\eta_{\rm{DB}}(\theta) = \frac{\frac{\|\pi_P^\top (R-P)D_P\|}{1-\theta\|(R-P)D_P\|}}{ || \pi_{P(\theta)}^\top (R - P) D_P ||} - 1 ,$$ and it holds that $\lim_{\theta \downarrow 0} \eta_{\rm{DB}}(\theta) = 0$. - Provided that $\theta\|(R-P)D_P\| < 1$, the relative error of the series expansion bound of order $ K \geq 1$ (i.e., SEB($K$)) satisfies $$\eta_{\rm{SEB}(K)}(\theta) \leq \frac{ 2 c_{||\cdot||} \| ( ( R - P) D_{ P } )^{ K+1 } \| \theta^K } { || \pi_{P(\theta)}^\top ( R - P ) D_P|| } ,$$ and it holds that $ \eta_{\text{SEB($K$)}}(\theta) $ is of order $ O ( \theta^{K} )$. [**Proof:**]{} All relative error expressions follow by inserting the different bounds and using the identity $$\label{eq:partProofRelErrors} \pi_{ P ( \theta )}^\top - \pi_{P }^\top = \theta \pi_{P(\theta)}^\top ( R - P ) D_P ,$$ which shows that the denominator is of order $ O ( \theta )$. Indeed, using the fact that $ P ( \theta ) $ is irreducible and aperiodic for $ \theta < 1 $ it follows from (\[eq:basicseries\]) together with Theorem \[th:stability\] that $$\label{eq:split} \pi_{P(\theta)}^\top ( R - P ) D_P = \frac{1}{\theta} ( \pi_{P(\theta)}^\top - \pi_{P}^\top) = \pi_P^\top ( R -P ) D_P + \pi_P^\top \sum_{k=2}^\infty \theta^{k-1} ( ( R -P ) D_P )^k ,$$ which shows that $ \pi_{P(\theta)}^\top ( R - P ) D_P $ can be written as a power series with leading term $ \pi_P^\top ( R -P ) D_P $, and thus implies that $ \theta \| \pi_{P(\theta)}^\top ( R - P ) D_P \| $ is of order $ O ( \theta ) $. We now turn to the perturbation bounds. For CNB it holds that $$\eta_{\text{CNB}}(\theta) = \frac{ \theta || R - P || \kappa }{ || \pi_{ P ( \theta ) }^\top - \pi_P^\top || } - 1 = \frac{ || R - P || \kappa}{ || \pi_{P(\theta)}^\top ( R - P ) D_P || } - 1 ,$$ where the second equality is obtained by (\[eq:partProofRelErrors\]), and the limit result then follows from (\[eq:split\]).
The proofs of the statements for SSB, DB and SEB($K$) follow from the same line of argument; in the following we only present the proof for the most challenging of these cases, which is the relative error of the $K$-th order SEB. Following (\[eq:SEAPertBound\]) we can write $$\label{eq:relErrorSE} \eta_{\text{SEB($K$)}}(\theta) = \frac{ \overbrace{ \left \| \pi_P^\top \sum_{k=1}^K (\theta ( R - P ) D_P)^k \right \|}^{=:H} \, + c_{||\cdot||} \| ( \theta ( R - P) D_{ P } )^{ K+1 } \| } { || \pi_{ P ( \theta ) }^\top - \pi_P^\top ||} - 1 .$$ For $ H $ it holds that $$H = \left \| \pi_P^\top \sum_{k=0}^{K-1} (\theta ( R - P ) D_P)^k \theta ( R - P ) D_P \right \|.$$ After some algebra, $$H = \left \| \pi_P^\top \sum_{k=0}^\infty (\theta ( R - P ) D_P)^k \left[ I - (\theta ( R - P ) D_P)^K \right] \theta ( R - P ) D_P \right \|$$ and using condition $\|\theta(R-P)D_P\| < 1$ together with (\[eq:neu3\]) we arrive at $$H = \left \| \pi_{P(\theta)}^\top \left[ I - (\theta ( R - P ) D_P)^K \right] \theta ( R - P ) D_P \right \| ,$$ which can be straightforwardly bounded by $$H \leq \| \pi_{P(\theta)}^\top \theta ( R - P ) D_P \| + c_{||\cdot||} \| (\theta ( R - P ) D_P)^{K+1}\|.$$ Inserting the above bound for $H$ into (\[eq:relErrorSE\]) yields for the relative error $$\label{eq:boundRelErrorSEBK} \eta_{\text{SEB($K$)}}(\theta) \leq \: \theta^{K+1} \, \frac{ 2 c_{||\cdot||} \| (( R - P ) D_P)^{K+1}\| } { || \pi_{ P ( \theta ) }^\top - \pi_P^\top || }.$$ The limit result now follows from the fact that $ || \pi_{ P ( \theta ) }^\top - \pi_P^\top || $ is of order $ O ( \theta ) $. $ \Box $ For an illustration of Theorem \[th:relerror\] we generated two random transition matrices $P$ and $R$ with $40$ states. The random generation is done by drawing random numbers from $(0,1)$ and normalizing the rows so that they sum up to 1.
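The experiment can be reproduced in spirit as follows. The sketch below (assumptions of ours: NumPy, the sup-norm as vector norm, and the pure truncation error of the series in place of the full SEB($K$) bound, since the constant $ c_{||\cdot||} $ depends on the chosen norm) checks the $ O ( \theta^K ) $ decay asserted in Theorem \[th:relerror\]: halving $ \theta $ should reduce the relative error of the order-$K$ approximation roughly by the factor $ 2^{-K} $.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_stochastic(n, rng):
    """Uniform random entries, rows normalized, as in the text."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones((1, n))])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def deviation(P, pi):
    n = P.shape[0]
    Pi = np.outer(np.ones(n), pi)
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

n = 40
P, R = random_stochastic(n, rng), random_stochastic(n, rng)
pi_P = stationary(P)
M = (R - P) @ deviation(P, pi_P)

def series_rel_err(theta, K):
    """Relative error of the order-K partial sum w.r.t. the true distance."""
    pi_theta = stationary((1 - theta) * P + theta * R)
    approx, term = pi_P.copy(), pi_P.copy()
    for _ in range(K):
        term = theta * (term @ M)
        approx += term
    true_dist = np.abs(pi_theta - pi_P).max()
    return np.abs(pi_theta - approx).max() / true_dist

theta = 0.05
for K in (1, 2, 3):
    ratio = series_rel_err(theta / 2, K) / series_rel_err(theta, K)
    print(K, ratio)   # close to 2 ** -K, i.e. the relative error is O(theta^K)
```

A CNB, by contrast, is linear in $\theta$ by construction, so its relative error does not shrink under the same halving of $\theta$.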
Then we considered, for the $\infty$-norm, all perturbation bounds from Theorem \[th:relerror\] on the interval $\theta \in (0,1]$ together with the true perturbation effect $\| \pi_{ P ( \theta )}^\top - \pi_P^\top \|_\infty$ (the true effect was calculated numerically). The results can be found in Figure \[fig:pertBoundsSimpleExample\]. Figure \[fig:pertBoundsSimpleExample\] shows that in this experiment all bounds, except for CNB, are similar in performance on the interval $\theta \in [0,0.1]$. For $\theta > 0.1$ SEB of order $K=3$ performs best. DB performs similarly to SEB($1$) on the interval $\theta \in (0,0.3]$ but for $\theta > 0.3$ SEB($1$) outperforms DB. This simple example illustrates that in a scaled perturbation setting CNB is apparently too general to be competitive with the other bounds. The differences become more apparent if we look at the relative errors for the different bounds plotted in Figure \[fig:pertBoundsSimpleExampleRelErrors\]. The results for SSB are not plotted because the condition in part (ii) of Lemma \[le:basicbound\] is not met. ![Perturbation bounds for $\|\pi_{P(\theta)}^\top-\pi_P^\top\|_\infty$ with $\theta \in (0,1]$, where $P(\theta)=(1-\theta)P+\theta R$ for randomly generated $P$ and $R$ consisting of $40$ states.[]{data-label="fig:pertBoundsSimpleExample"}](figure1.pdf "fig:") ![Relative errors of the perturbation bounds for $\|\pi_{P(\theta)}^\top -\pi_P^\top \|_\infty$ with $\theta \in (0,1]$, where $P(\theta)=(1-\theta)P+\theta R$ for randomly generated $P$ and $R$ consisting of $40$ states.[]{data-label="fig:pertBoundsSimpleExampleRelErrors"}](figure2.pdf "fig:") \[re:6\] Provided that there exists $\theta_0$ such that $\theta_0 || ( R - P ) D_P || < 1$, it holds that $$\eta_{\text{SEB($K$)}}(\theta) = O(\theta_0^{K}) ,$$ for $ 0 \leq \theta \leq \theta_0 $.
The result put forward in Theorem \[th:relerror\] seems to contradict the fact that for finite Markov chains it holds that $$\label{eq:relrel} \left | \frac{ (\pi_R )_i - (\pi_P )_i }{ (\pi_R )_i } \right | \leq 2 \eta n + O ( \eta^2 ) , \: i \in S = \{ 0 , \ldots ,n-1 \} ,$$ where $ \eta $ is bounded by $ || R - P || $ and $n$ denotes the size of the state space, which indicates that the relative element-wise error in using $ \pi_P $ as a substitute for $ \pi_R $ tends to zero as $ P $ approaches $ R $; see [@n1; @n2; @n3] for details. Note that the above equation is equivalent to $$\left | (\pi_R )_i - (\pi_P )_i \right | \leq (\pi_R )_i \left ( 2 \eta n + O ( \eta^2 ) \right ) , \: i \in S = \{ 0 , \ldots ,n-1 \} ,$$ and reads in norm-version, using, for example, the $\infty$-norm (or 1-norm), $$|| \pi_R^\top - \pi_P^\top ||_{1} \leq 2 \eta n + O ( \eta^2 ) ,$$ see Remark \[re:ohoh\]. Hence, the element-wise relative error result in (\[eq:relrel\]) is a statement about continuity of finite Markov chains and does not imply that the relative error in predicting the true norm distance between $ \pi_R $ and $ \pi_P $ by a CNB becomes small; for details compare the definition of the relative error in (\[relerr\]) and (\[eq:err\]), respectively, with that in (\[eq:relrel\]). We conclude this section by presenting an interesting result for stability theory. \[cor:suff\] Consider the model $ P ( \theta ) = ( 1- \theta ) P + \theta R $, $ \theta \in [ 0 ,1 ) $, with $ P $ aperiodic, irreducible and positive recurrent. If $$\theta < \frac{1 - || _i \! P ||}{ || R - P ||} ,$$ then $ P ( \theta) $ has a unique stationary distribution. [**Proof:**]{} Note that $ P ( \theta ) $ is aperiodic and irreducible for $ \theta \in [ 0 , 1 )$. It remains to be shown that $ P ( \theta ) $ is positive recurrent. By computation, $$\begin{aligned} || _i ( P ( \theta ) ) || & = & || _i ( ( 1- \theta ) P + \theta R ) || \\ & \leq & || _i \! P + \theta ( R - P ) || \\ & \leq & || _i \!
P || + \theta || R - P || . \end{aligned}$$ Hence, provided that $ \theta $ satisfies $ || _i \! P || + \theta || R - P || < 1 $, it follows that $ || _i ( P ( \theta ) ) || < 1 $, and by Proposition \[cor:pos\] we conclude that $ P ( \theta ) $ is positive recurrent. Solving $ \theta $ out of $ || _i \! P || + \theta || R - P || < 1 $ concludes the proof. $ \Box $ \[re:ssb\] Note that from Corollary \[cor:suff\] it follows that if condition (ii) in Theorem \[th:relerror\] for the SSB with $T = {}_i P $, for some $i \in S$, is satisfied, then $ P ( \theta ) $ is stable, i.e., has a unique stationary distribution. Kartashov established in [@karta96] a result similar to Theorem \[th:stability\]. It is worth noting that Kartashov did not provide a lower bound for the region of stability as detailed in Corollary \[cor:suff\] together with Remark \[re:ssb\]. Explicit Perturbation Bounds for the Two-State Chain (Finite State Space) {#sec:leadex} ========================================================= In this section we explicitly compute the bounds put forward in Theorem \[th:relerror\] for the two-state chain from Example \[ex:mc\]. The following convex combination is considered $$P(\theta) = (1 - \theta) \underbrace{\begin{pmatrix} 1 - p & p \\ q & 1 - q \\ \end{pmatrix}}_{=:P^s} + \theta \underbrace{\begin{pmatrix} 1 - \widetilde{p} & \widetilde{p} \\ \widetilde{q} & 1 - \widetilde{q} \\ \end{pmatrix}}_{=:\widetilde{P}^s} .$$ We are interested in the effect of perturbing $P(0)$, i.e., in choosing $\theta > 0$. Note that for the difference in Markov transition matrices it holds that $$P(\theta) - P(0) = \theta( \widetilde{P}^s - P^s ) = \theta \begin{pmatrix} p - \widetilde{p} & \widetilde{p} - p\\ \widetilde{q} - q & q - \widetilde{q} \\ \end{pmatrix},$$ which gives $$|| P(\theta) - P(0) ||_{v} = \theta (1 + \alpha ) \max \left\{ | p - \widetilde{p}|, \frac{ 1 }{ \alpha }| q - \widetilde{q}| \right\} .$$ In the following the explicit perturbation bounds are presented for the $v$-norm.
For CNB we start from $$|| \pi_{P(\theta)}^\top - \pi_{P^s}^\top ||_{v} \leq || \pi_{P(\theta)}^\top ||_{v} ||P(\theta) - P^s||_{v} ||D_{P^s}||_{v}.$$ It holds that (see also Example \[ex:mc\]) $$|| \pi_{P(\theta)}^\top ||_{v} \leq \alpha \qquad \mbox{ and } \qquad ||D_{P^s}||_v = \frac{1+\alpha}{(p+q)^2}\max \left \{ p , \frac{q}{\alpha} \right \}$$ so that we obtain for CNB $$\Delta_{\rm {CNB}} (P(\theta) , P^s ) = \theta \left(\frac{1+\alpha}{p+q}\right)^2 \max \left\{ \alpha |p-\widetilde{p}| , |q-\widetilde{q}| \right\} \max\left\{ p , \frac{q}{\alpha} \right\}.$$ In the general framework of CNB it holds that $\kappa = \frac{1+\alpha}{(p+q)^2}\max\{\alpha p , q\}$ for this example. For the SSB we compute $$|| \pi_{P^s}^\top ||_{v}=\frac{q + p\alpha}{p + q}.$$ Next, the individual terms in (\[eq:SSB\]) have to be computed. Here, we make use of the taboo kernel bound as provided in Example \[ex:T\], where the taboo kernel may be obtained by removing one of the columns; the choice of column depends on the values of $ p $ and $ q $. We arrive at $$|| T||_v \leq \min \{ \max\{ \alpha p, 1 - q \} , \max \{ 1-p , q \} \} .$$ Note that a similar analysis can be carried out when removing rows of $ P^s $ instead. SSB can only be provided for small perturbations, i.e., for small values of $\theta$.
More specifically, provided that $$\theta < \frac{ 1 - \min \{ \max\{ \alpha p, 1 - q \} , \max \{ 1-p , q \} \} }{ \left( 1 + \frac{q+p \alpha}{p+q} \right) (1 + \alpha ) \max \left\{ | p - \widetilde{p}|, \frac{ 1 }{ \alpha }| q - \widetilde{q}| \right\} } ,$$ the SSB bound for $ || \pi_{P(\theta)}^\top - \pi_{P^s}^\top ||_{v}$ is given by $$\Delta_{\rm {SSB}} (P(\theta) , P^s ) = \frac{ \left( \frac{q+p \alpha}{p+q} \right) \left( 1 + \frac{q+p \alpha}{p+q} \right) \theta (1 + \alpha ) \max \left\{ | p - \widetilde{p}|, \frac{ 1 }{ \alpha }| q - \widetilde{q}| \right\}} {1 - \min \{ \max\{ \alpha p, 1 - q \} , \max \{ 1-p , q \} \} - \left( 1 + \frac{q+p \alpha}{p+q} \right) \theta (1 + \alpha ) \max \left\{ | p - \widetilde{p}|, \frac{ 1 }{ \alpha }| q - \widetilde{q}| \right\} }.$$ For example, letting $ \alpha = 1 $, which is admissible (see Lemma \[le:taboo\]), yields the simplified expression $$\Delta_{\rm {SSB}} (P(\theta) , P^s ) = \frac{4 \theta \max \left\{ | p - \widetilde{p}|, | q - \widetilde{q}| \right\}}{1 - \min \left\{ \max\{ p, 1 - q \} , \max \{ 1-p , q \} \right\} - 4 \theta \max \left\{ | p - \widetilde{p}|, | q - \widetilde{q}| \right\} }$$ for SSB. By inspection, SSB behaves poorly for $ p $ and $ q $ close to one or close to zero, as in these cases the norm of the taboo kernel approaches one.
Calculations show that DB leads to $$\Delta_{\rm {DB}} (P(\theta) , P^s ) = \frac{\theta|p\widetilde{q}-\widetilde{p}q|(1+\alpha)}{(p+q) \left( p + 1 - \theta(1+\alpha) \max \{ |p-\widetilde{p}| , \frac{|q-\widetilde{q}|}{\alpha} \} \right) }$$ under the assumption that $$\theta < \frac{p+1}{(1+\alpha)\max \{ |p-\widetilde{p}| , \frac{|q-\widetilde{q}|}{\alpha} \} }.$$ For SEB($K$) with $K=0$ it holds that $$\Delta_{\rm {SEB} (0) } (P(\theta) , P^s ) = \frac{\theta(1+\alpha)}{p+q} \max \{ \alpha |p-\widetilde{p}|, |q-\widetilde{q}| \} ,$$ whose construction is similar to that of CNB, with the difference that CNB additionally replaces $||(P(\theta)-P^s)D_{P^s}||_v$ by the product $||(P(\theta)-P^s)||_v \, ||D_{P^s}||_v$, using the submultiplicativity $||(P(\theta)-P^s)D_{P^s}||_v \leq ||(P(\theta)-P^s)||_v ||D_{P^s}||_v$. More specifically, $ \Delta_{\rm {CNB}} (P(\theta) , P^s ) $ is larger than $ \Delta_{\rm {SEB}(0)} (P(\theta) , P^s ) $ by the factor $$\frac{\Delta_{\rm {CNB}} (P(\theta) , P^s ) }{\Delta_{\rm {SEB}(0)} (P(\theta) , P^s ) } = \frac{1+\alpha}{p+q} \max\left\{p,\frac{q}{\alpha}\right\} \geq 1 .$$ In case $\alpha = 1$ this factor is $2\max\{p,q\}/(p+q)$, which is greater than 1 for $p \neq q$. When $\alpha$ is chosen to be $\gg 1$ this factor grows linearly in $\alpha$. This illustrates that, although more general, CNB loses accuracy compared with SEB$(0)$ since it does not utilize the contraction property of $(P(\theta)-P^s)D_{P^s}$.
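The dominance of the true perturbation by the closed-form SEB($0$) expression can be checked numerically. In the sketch below we take $ \alpha = 1 $, in which case (under the norm convention that the $v$-norm of a signed measure is the weighted sum of absolute entries, an assumption of ours) the $v$-norm reduces to the $\ell_1$-distance; the numerical values of $p, q, \widetilde{p}, \widetilde{q}$ are arbitrary choices:

```python
import numpy as np

def stationary_two_state(p, q):
    """pi = (q, p) / (p + q) for the kernel [[1-p, p], [q, 1-q]]."""
    return np.array([q, p]) / (p + q)

p, q = 0.3, 0.6          # nominal chain P^s
pt, qt = 0.5, 0.2        # perturbing chain (p-tilde, q-tilde)

def seb0_bound(theta, alpha=1.0):
    """Closed-form SEB(0) from the text: theta (1+alpha)/(p+q) max{alpha|p-pt|, |q-qt|}."""
    return theta * (1 + alpha) / (p + q) * max(alpha * abs(p - pt), abs(q - qt))

violations = []
for theta in np.linspace(0.01, 0.2, 20):
    p_th = (1 - theta) * p + theta * pt   # entries of P(theta)
    q_th = (1 - theta) * q + theta * qt
    true_dist = np.abs(stationary_two_state(p_th, q_th)
                       - stationary_two_state(p, q)).sum()  # l1 = v-norm for alpha = 1
    violations.append(seb0_bound(theta) - true_dist)

print(min(violations))   # positive: the bound dominates the true distance
```

The range $\theta \leq 0.2$ keeps $\theta \|(\widetilde{P}^s - P^s) D_{P^s}\|$ well below one for these parameter values, so SEB($0$) is applicable throughout.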
After similar calculations it can be shown that SEB($K$) with $K=1$ results in $$\Delta_{\rm {SEB} (1) } (P(\theta) , P^s ) = \frac{\theta(1+\alpha)}{(p+q)^2} \left( |p\widetilde{q} - \widetilde{p} q| + \theta | p - \widetilde{p} + q - \widetilde{q} | \max \{ \alpha |p-\widetilde{p}|, |q-\widetilde{q}| \} \right).$$ An Elaborate Perturbation Analysis of a Queueing System {#sec:XXX} ======================================================= To illustrate the application of perturbation bounds in a setting where the deviation matrix is not available in closed form, we discuss in this section the M/G/1 queue with breakdowns. In addition, we consider the finite version of the queue, i.e., the M/G/1/N queue with breakdowns, and we illustrate SEB($K$). The breakdown model has the special feature that we perturb the system with no breakdowns by an unstable chain modeling a pure birth process. The basic model of the M/G/1 queue with breakdowns is introduced in Section \[sec:A1\] and in Section \[sec:A3\] a discussion of the literature is provided. The perturbation bounds for both models are presented in Section \[sec:A5\] and Section \[sec:A6\], respectively. The Basic Model {#sec:A1} --------------- Consider a single server queue. Customers arrive at the queue according to a Poisson-$\lambda$-arrival process. Service times are identically distributed with mean $ 1 / \mu $ and we denote the service time distribution by $ \mathcal{S}(x) $. Throughout this section we assume that $ \lambda / \mu < 1$. At the beginning of each service, there is a probability $ \theta $ that the server breaks down (in which case the customer is sent back to the queue) and enters a repair state, whose length is exponentially distributed with rate $ r $, independently of everything else; with probability $ ( 1 - \theta ) $ the server is operational and serves the customer (if any, according to FCFS).
The only points in time at which a server breakdown can occur are the beginnings of services. This system is modeled by the jump chain embedded at service completions and repair completions, and it has state space $S=\{0,1,\dots\}$. The transition probabilities from $i \in S$ to $j \in S$, denoted as $P_\theta(i,j)$, are given as follows: For $i = 0$, the process jumps to $ j \geq 0 $ if a customer arrives and the server is operational, and during the service of this customer there are $j$ additional arrivals. This probability is given by $$( 1 - \theta ) \int_{0}^{\infty} e^{ -\lambda x } \frac{( \lambda x )^{j}}{j!} \, d \mathcal{S} ( x ).$$ Alternatively, a customer arrives at the empty queue, the server breaks down at service initiation, and during the repair time of the server there are $ j-1 $ additional arrivals, so that at the end of the repair time there are in total $ j$ customers in the system. This probability is given by $$\theta \int_{0}^{\infty} e^{ -\lambda x } \frac{( \lambda x )^{j-1}}{(j-1)!} \, r e^{ -r x } d x = \theta \frac{r}{\lambda + r} \left( \frac{\lambda}{\lambda + r} \right)^{j-1},$$ for $ j \geq 1 $ and zero for $ j=0$, where we make use of the convention that $ 0! = 1$. Combining these results, for $ i =0 $, we arrive at $$P_\theta ( 0 , j ) = ( 1 - \theta ) \int_{0}^{\infty} e^{ -\lambda x } \frac{( \lambda x )^{j}}{j!} \, d \mathcal{S} ( x ) + \theta \frac{r}{\lambda + r} \left( \frac{\lambda}{\lambda + r} \right)^{j-1} 1_{ j \geq 1} .$$ For $ i \geq 1 $, the process jumps to state $ j \geq i-1 $ if the server remains operational, so that service of the subsequent customer in the queue may begin, and during the service of this customer there are $j - i + 1 \geq 0$ additional arrivals.
This probability is given by $$( 1 - \theta ) \int_{0}^{\infty} e^{ -\lambda x } \frac{(\lambda x)^{j - i + 1}}{(j - i +1)!} \, d \mathcal{S} ( x ) .$$ Alternatively, there is a server breakdown and during the exponential repair time there are $ j - i \geq 0 $ arrivals from the outside. This probability is given by $$\theta \frac{r}{\lambda + r} \left( \frac{\lambda}{\lambda + r} \right)^{j - i }.$$ Combining these results, we arrive at $$\label{eq:Pcentral} P_\theta ( i , j ) = ( 1 - \theta ) \int_{0}^{\infty} e^{ -\lambda x } \frac{(\lambda x)^{j - i + 1}}{(j - i +1)!} \, d \mathcal{S} ( x ) + \theta \frac{r}{\lambda + r} \left( \frac{\lambda}{\lambda + r} \right)^{j-i} 1_{ j \geq i },$$ for $ 1 \leq i $ and $ i -1\leq j $. All other entries of $ P_\theta $ are set to zero. Observe that for $ \theta = 1 $, $ P_1 $ models a pure birth process and the queue is not stable, whereas $ P_0 $ models a stable M/G/1 queue with no breakdowns. The kernel $ P_\theta $ is given through the convex combination $ \theta P_1 + ( 1- \theta ) P_0 $ of the two kernels. Discussion of Literature {#sec:A3} ------------------------ Since the pioneering work of Thiruvengadam [@thir] and Avi-Itzhak and Naor [@avi], there has been considerable interest in the study of queues with server breakdowns, see for example [@cao1982; @lishi; @wangcao] and references therein. However, the majority of results are expressed in terms of systems of equations whose solution is rather challenging, or have solutions that are not easily interpretable in practice. For instance, Baccelli and Znati [@bacz] provide the generating function of the number of customers in the $M/G/1$ system with dependent breakdowns. Also, results are given in terms of the inverse of Laplace transforms, see, e.g., [@bacz], which requires numerical inversion for solving a given system.
To overcome these difficulties, approximation methods are used where the complex (real) system is replaced by one which is “close” to it in some sense but which has a simpler structure (resp., components) and for which analytical results are available. The Infinite Capacity M/G/1 Queue with Breakdowns (Denumerable State Space) {#sec:A5} --------------------------------------------------------------------------- In this section the M/G/1 queue with breakdowns is considered. Note that SSB is the only bound applicable as the size of the state space is infinite and the deviation matrix is not known in explicit form. Next, we provide auxiliary results for obtaining the overall SSB. Recall that $ P_0 $ is the transition kernel of the embedded jump chain of an M/G/1 queue and we consider the taboo kernel $ T = _0 \! (P_0 )$, i.e., we remove the first column of $ P_0$. For the taboo kernel $T$ it holds that $$\begin{aligned} \| T \|_{\upsilon} & = & \sup\limits_{i \geq 0} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq 1} \alpha^{j} \left| \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j - i +1 }}{ ( j - i + 1 )! } d\mathcal{S}(x)\right| 1_{ j - i + 1 \geq 0 } \nonumber \\ & = & \sup\limits_{i \geq 0} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq 1} \alpha^{j} \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j - i +1 }}{ ( j - i + 1 )! } d\mathcal{S}(x) 1_{ j \geq i-1 } \nonumber \\ & = & \sup\limits_{i \geq 0} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq \max(i-1,1)} \alpha^{j } \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{ j }}{ j! } d\mathcal{S}(x) . \nonumber\end{aligned}$$ For $ i = 0,1 $, $$\begin{aligned} \sup\limits_{0 \leq i \leq 1} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq \max(i-1,1)} \alpha^{j } \left| \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{ j }}{ j!
} d\mathcal{S}(x) \right| \nonumber \\ & = & \sum\limits_{j \geq 1} \alpha^{j } \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{ j }}{ j! } d\mathcal{S}(x) \nonumber \\ & = & \sum\limits_{j \geq 1} \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda \alpha x)^{ j }}{ j! } d\mathcal{S}(x) \nonumber \\ & = & \int_{0}^{\infty} e^{-\lambda x} \sum\limits_{j \geq 1} \frac{(\lambda \alpha x)^{ j }}{ j! } d\mathcal{S}(x) \nonumber \\ & = & \int_{0}^{\infty} e^{-\lambda x} ( e^{ \lambda \alpha x} -1 ) d\mathcal{S}(x) \nonumber \\ & = & \int_{0}^{\infty} e^{- \lambda (1- \alpha ) x} d\mathcal{S}(x) - \int_{0}^{\infty} e^{- \lambda x} d\mathcal{S}(x) , \nonumber\end{aligned}$$ and for $ i > 1 $ $$\begin{aligned} & & \sup\limits_{i \geq 2} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq \max(i-1,1)} \alpha^{j - 1} \left| \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{ j }}{ j! } d\mathcal{S}(x) \right| \nonumber \\ & & \qquad = \sup\limits_{i \geq 2} \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq i-1} \alpha^{j - 1} \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{ j }}{ j! } d\mathcal{S}(x) \nonumber \\ & & \qquad = \frac{ 1 }{ \alpha^3 } \int_{0}^{\infty} e^{-\lambda x} \sum\limits_{j \geq 0} \frac{(\lambda \alpha x)^{ j }}{ j!
} d\mathcal{S}(x) - \frac{ 1 }{ \alpha^3 } \int_{0}^{\infty} e^{-\lambda x} d\mathcal{S}(x) \nonumber \\ & & \qquad = \frac{1}{\alpha^3 } \left ( \int_{0}^{\infty} e^{ -\lambda (1 - \alpha ) x} d\mathcal{S}(x) - \int_{0}^{\infty} e^{-\lambda x} d\mathcal{S}(x) \right ) \nonumber .\end{aligned}$$ Denoting by $\mathcal{S}^{*}(z)$ the Laplace-Stieltjes transform of $\mathcal{S}(x)$ and using the fact that $ \alpha \geq 1 $ we arrive at $$\| T \|_v = \|_{0} ( P_0 ) \|_{\upsilon} \leq b_1 ( \alpha ) := \mathcal{S}^{\ast} ( \lambda (1 - \alpha ) ) - \mathcal{S}^{*} (\lambda) ,$$ provided that $ \alpha $ is such that $$\label{eq:condition2} \mathcal{S}^{\ast} ( \lambda (1 - \alpha ) ) < \infty .$$ Furthermore, one obtains $$|| \pi_0^\top ||_v \leq b_2 ( \alpha ) := \frac{\sum_i \pi_{0} (i) P_0 ( i , 0 ) }{ 1 - b_1 (\alpha) } = \frac{ \pi_{0} (0) }{ 1 - b_1 (\alpha) }.$$ We now turn to computing a bound for $ || P_1 - P_0||_v $. For $i = 0$: $$\begin{aligned} & & \sum\limits_{j \geq 0} \alpha^{j} | P_1 ( 0 , j ) - P_0 ( 0 , j) | \nonumber \\ & & \quad = \sum\limits_{j \geq 0} \alpha^{j} \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j-1}1_{ j \geq 1 } - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j}}{ j! } d\mathcal{S}(x)\right| \nonumber \\ & & \quad = \int_0^\infty e^{-\lambda x} d\mathcal{S}(x) + \sum\limits_{j \geq 0} \alpha^{j+1} \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j} - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j+1}}{ (j+1)! } d\mathcal{S}(x)\right| . \nonumber \end{aligned}$$ For $i \geq 1$: $$\begin{aligned} & & \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq 0} \alpha^{j} | P_1 ( i ,j ) - P_0 ( i ,j) | \nonumber \\ & & \qquad = \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq 0} \alpha^{j } \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j-i}1_{ j \geq i } - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j-i+1}}{ (j-i+1)!
} d\mathcal{S}(x)\right| 1_{ j -i+1 \geq 0} \nonumber\\ & & \qquad = \frac{1}{\alpha} \int_0^\infty e^{ - \lambda x } d \mathcal{S} ( x ) + \frac{ 1 }{ \alpha^{i} } \sum\limits_{j \geq i} \alpha^{j +1} \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j-i} - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j-i+1}}{ (j-i+1)! } d\mathcal{S}(x)\right| \nonumber\\ & & \qquad \leq \int_0^\infty e^{ - \lambda x } d \mathcal{S} ( x ) + \sum\limits_{j \geq 0} \alpha^{j+1 } \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j} - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j+1}}{ (j+1)! } d\mathcal{S}(x)\right| . \nonumber \end{aligned}$$ Combining the above results we let $$\begin{aligned} b_3 ( \alpha ) & := & \int_0^\infty e^{-\lambda x} d\mathcal{S}(x) + \sum\limits_{j \geq 0} \alpha^{j+1} \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j} - \int_{0}^{\infty} e^{-\lambda x} \frac{(\lambda x)^{j+1}}{ (j+1)! } d\mathcal{S}(x)\right| \end{aligned}$$ and obtain $$|| P_1 - P_0 ||_v \leq b_3 ( \alpha ) .$$ Inserting the above bounds into (\[eq:SSB\]) we obtain as SSB $$\begin{aligned} || \pi_\theta^\top - \pi_0^\top ||_v & \leq & b_2 ( \alpha ) \frac{ \theta (1 +b_2 ( \alpha ) ) b_3 (\alpha)}{ 1 - b_1 ( \alpha ) - \theta (1 +b_2 ( \alpha ) ) b_3 (\alpha)} ,\end{aligned}$$ provided that $$\theta < \frac{1 - b_1 ( \alpha ) }{ (1 +b_2 ( \alpha ) )b_3 (\alpha)}$$ and $ 1 \leq \alpha \leq \min ( 1/ \lambda , z_\lambda )$, where $z_\lambda $ denotes the right point of the domain of the values for $ \alpha $ such that $ \mathcal{S}^{\ast} ( \lambda (1 - \alpha ) ) $ is finite (the case $ z_\lambda = \infty$ is not excluded). \[ex:erer\] If the service times are exponentially distributed with rate $ \mu $ it holds that $$\mathcal{S}^{\ast} ( \lambda (1 - \alpha ) ) = \frac{\mu}{\mu + \lambda ( 1 - \alpha ) }$$ and $ z_\lambda = \frac{\mu + \lambda}{\lambda} - \epsilon $, for $ \epsilon > 0 $.
The above bounds can now be explicitly computed: $$b_1 (\alpha) = \frac{\mu}{\mu + \lambda (1 - \alpha ) }- \frac{\mu}{\mu + \lambda } = \frac{\lambda \mu \alpha}{(\mu + \lambda )(\mu + \lambda (1- \alpha ))} ,$$ $$b_2 ( \alpha ) = \frac{1-\lambda / \mu}{1 - b_1 ( \alpha )} ,$$ and $$b_3 ( \alpha ) = \frac{\mu}{\lambda + \mu} + \alpha \sum\limits_{j \geq 0} \alpha^{j} \left| \frac{r}{r + \lambda} \left ( \frac{\lambda}{\lambda +r } \right )^{j} - \left ( \frac{\lambda}{ \lambda + \mu } \right )^{j+1} \frac{\mu}{\mu + \lambda}\right| .$$ Note that in case $ \mu = r $, $ b_3 ( \alpha ) $ simplifies to $$b_3 ( \alpha ) = \frac{\mu}{\lambda + \mu} + \alpha \sum\limits_{j \geq 0} \alpha^{j} \left(\frac{\mu}{\mu + \lambda}\right)^2 \left ( \frac{\lambda}{ \lambda + \mu } \right )^{j} = \frac{\mu}{\lambda + \mu}\left( 1 + \frac{\alpha \mu}{\mu+ \lambda - \alpha\lambda} \right) ,$$ provided that $ \alpha < \frac{\lambda+\mu}{\lambda }$. In the following, we let $ \lambda= 0.5$, $ \mu = 1 $, $ r = 1 $ and $ f (s ) = 0 $ for $ s \leq 2 $ and $ f ( s ) = 1 $ for $ s > 2 $, i.e., we are interested in the probability of having more than 2 customers at the queue in the stationary regime; it holds that $$|| f ||_v = \frac{1}{ \alpha^{3}} .$$ For ease of computation we assume that the service times are exponentially distributed. We are now able to apply the bound provided in Lemma \[le:norms\] to $ | \pi_\theta f - \pi_0 f | $ in combination with the above SSB, where we let $ \theta $ vary from 0 to 0.01, see Figure \[fig:ss2\]. The minimization with respect to $ \alpha $ in (\[eq:starrr\]) has been solved numerically.
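These closed-form expressions are easily validated numerically. The following sketch (plain Python; all function names are ours) compares $ b_1 $ against the transform difference $ \mathcal{S}^{\ast} ( \lambda ( 1 - \alpha ) ) - \mathcal{S}^{\ast} ( \lambda ) $, compares the closed form of $ b_3 $ against its defining series, and carries out a grid minimization over $ \alpha $ of the resulting SSB for $ | \pi_\theta f - \pi_0 f | $:

```python
lam, mu, r = 0.5, 1.0, 1.0   # numerical setting of the example (mu = r)

def lst_exp(z):
    """Laplace-Stieltjes transform of the exponential(mu) service time at z."""
    return mu / (mu + z)

def b1(alpha):
    return lam * mu * alpha / ((mu + lam) * (mu + lam * (1 - alpha)))

def b2(alpha):
    return (1 - lam / mu) / (1 - b1(alpha))

def b3(alpha):
    return mu / (lam + mu) * (1 + alpha * mu / (mu + lam - alpha * lam))

def b3_series(alpha, terms=500):
    """b_3 computed directly from its defining series (mu = r case)."""
    s = mu / (lam + mu)
    for j in range(terms):
        srv = (lam / (lam + mu)) ** (j + 1) * mu / (mu + lam)
        rep = (r / (r + lam)) * (lam / (lam + r)) ** j
        s += alpha ** (j + 1) * abs(rep - srv)
    return s

def ssb_f(theta, alpha):
    """SSB for |pi_theta f - pi_0 f|, using ||f||_v = alpha**-3."""
    load = theta * (1 + b2(alpha)) * b3(alpha)
    denom = 1 - b1(alpha) - load
    if denom <= 0:
        return float("inf")   # applicability condition of SSB violated
    return alpha ** -3 * b2(alpha) * load / denom

# sanity checks of the closed forms
print(abs(b1(1.2) - (lst_exp(lam * (1 - 1.2)) - lst_exp(lam))))   # ~0
print(abs(b3(1.2) - b3_series(1.2)))                              # ~0

best = min(ssb_f(0.005, 1 + 0.8 * k / 200) for k in range(201))
print(best)   # minimized SSB for theta = 0.005
```

The grid $\alpha \in [1, 1.8]$ respects the admissible range $1 \leq \alpha \leq 9/5$ derived below for this parameter setting.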
![The true change in probability of more than 2 customers in the system vs. the strong stability bound.[]{data-label="fig:ss2"}](figure3.pdf "fig:") As can be seen from Figure \[fig:ss2\], SSB provides qualitative insight rather than numerically satisfying approximations. Recall that $ T = _0 \! ( P_0 ) $ and, by Remark \[re:ssb\], applicability of SSB implies stability of the system with breakdowns. SSB can thus be used as a means of establishing a lower bound for the domain of stability of the queue with breakdowns. More precisely, by Example \[ex:erer\], for $ \mu = r=1 $ condition $$|| T ||_v \leq b_1 ( \alpha ) < 1$$ implies $$\alpha \leq \frac{ ( \mu + \lambda )^2 }{ ( 2 \mu + \lambda ) \lambda } ,$$ which yields for the numerical setting of our example $$\alpha \leq \frac{9}{5} .$$ In accordance with Corollary \[cor:suff\], a lower bound for the region of stability of $ P ( \theta ) $ is $$\frac{1 - || T ||_v}{|| P_1 - P_0 ||_v } \geq \max_{ 1 \leq \alpha \leq 9/5 } \frac{(\mu+\lambda)^2 - \lambda(2\mu+\lambda)\alpha}{\mu(\mu+\lambda+\alpha(\mu-\lambda))},$$ where we used the bounds provided in Example \[ex:erer\]. For the numerical values of the example we obtain $$\max_{ 1 \leq \alpha \leq 9/5 } \frac{9 - 5 \alpha}{6 + 2 \alpha} = \frac{1}{2} ,$$ where the maximum is attained at $ \alpha = 1 $. Hence, the system remains stable for a breakdown probability up to $ \approx 1/2 $. In the following section, we will show that the series expansion bound yields numerically better bounds. This comes, however, at the price of restricting the analysis to a finite version of the model. The M/G/1/N Queue with Breakdowns (Finite State Space) {#sec:A6} ------------------------------------------------------ In this section an M/G/1/N queue with finite capacity $N$ is considered (where $N$ is not too large).
In this case the state space is $S=\{0,1,\dots,N\}$, and $ D_\theta $ (short for $D_{P_\theta}$) as well as $ \pi_\theta $ (short for $\pi_{P_\theta}$) can be easily computed numerically. In this case, SEB can be used for numerical computations. We illustrate the series expansion bound with some numerical examples. We choose $ N =50 $ as the maximum number of jobs in the system. As in the previous section, we let $ \lambda =0.5$, $ \mu = 1 $, $ r = 1 $, and assume that service times are exponentially distributed. Note that for large $ N $ the mean queue length of the finite system is (almost) identical to that of the infinite one. In this case one could use the strong stability bounds for approximate performance evaluation rather than computing SEB explicitly. We compute SEB for the $v$-norm with $\alpha = 1$. We have to check the condition put forward in (iv) of Theorem \[th:relerror\] numerically. For our numerical setting we obtain $ || ( P_{1} - P_0 ) D_0 ||_{ v } = 8 $, which implies $ \theta || ( P_{1} - P_0 ) D_0 ||_{ v } < 1 $ for $ 0 \leq \theta \leq \theta_0 < 1 /8 $. In the following we choose $ \theta_0 = 0.1 $. In Figure \[fig:1\] we plot the relative absolute error of SEB($K$) for $ K=1,2 $ and $3$, for the probability of having more than 2 customers in the system. More specifically, we bound $ | \pi_\theta^\top f - \pi_0^\top f | $ for $ \theta \in [ 0 , \theta_0 ] $, with $ \theta_0 = 0.1 $, using SEB($K$), where $ f ( s ) = 1 $ if $ s > 2 $ and zero otherwise. It thus holds that $ || f ||_v = 1 $. In line with Lemma \[le:norms\], we obtain the bound $$| \pi_\theta^\top f - \pi_0^\top f | \leq \Delta_{ \rm{SEB} ( K )} ( P ( \theta ) , P_0 ) .$$ We plot in Figure \[fig:1\] the absolute relative error, given by $$\frac{ \left| \Delta_{ \rm{SEB} ( K )} ( P ( \theta ) , P_0 ) - | \pi_\theta^\top f - \pi_0^\top f | \, \right| }{ | \pi_\theta^\top f - \pi_0^\top f | } ,$$ for $ K=1,2,3$ and $ \theta \in [ 0 , 0.1 ] $.
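The finite-state computation can be sketched as follows (NumPy; lumping the truncated tail mass into state $N$ is a modeling choice of ours, so the reported norm only approximates the value $8$ quoted above). We build $P_0$ and $P_1$ on $\{0,\dots,50\}$ for exponential services, form the deviation matrix $D_0$, and check that the order-$K$ series approximation of $\pi_\theta^\top f$ is accurate to order $\theta^{K+1}$:

```python
import numpy as np

lam, mu, r, N = 0.5, 1.0, 1.0, 50

def kernel(theta):
    """Truncated embedded-chain kernel on {0, ..., N}; exponential(mu) services,
    breakdown probability theta, exponential(r) repairs; tail mass lumped into N."""
    a, b = lam / (lam + mu), lam / (lam + r)
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            lo = 0 if i == 0 else i - 1
            if j < lo:
                continue
            srv = (mu / (lam + mu)) * a ** (j - lo)   # j - lo arrivals in a service
            rep = 0.0
            if (i == 0 and j >= 1) or (i >= 1 and j >= i):
                rep = (r / (lam + r)) * b ** (j - max(i, 1))
            P[i, j] = (1 - theta) * srv + theta * rep
        P[i, N] += 1.0 - P[i].sum()                   # lump truncated tail into N
    return P

def stationary(P):
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones((1, n))])
    rhs = np.zeros(n + 1); rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

P0, P1 = kernel(0.0), kernel(1.0)
pi0 = stationary(P0)
Pi = np.outer(np.ones(N + 1), pi0)
D0 = np.linalg.inv(np.eye(N + 1) - P0 + Pi) - Pi
M = (P1 - P0) @ D0
print(np.abs(M).sum(axis=1).max())        # || (P1 - P0) D0 ||_v with alpha = 1

f = (np.arange(N + 1) > 2).astype(float)  # indicator of more than 2 customers
theta, K = 0.005, 3
pi_theta = stationary((1 - theta) * P0 + theta * P1)
approx, term = pi0.copy(), pi0.copy()
for _ in range(K):
    term = theta * (term @ M)
    approx += term
print(abs(pi_theta @ f - approx @ f))     # order-K series error, O(theta^{K+1})
```

Note that with the tail lumped into state $N$ the truncated kernel is still affine in $\theta$, so $P_\theta = (1-\theta)P_0 + \theta P_1$ continues to hold for the finite model.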
![The relative absolute error for approximating $ | \pi_\theta^\top f - \pi_0^\top f | $ by SEB($K$) with $ K=1,2$ and $3$.[]{data-label="fig:1"}](MG1-eps-converted-to.pdf "fig:") Discussion of Results --------------------- In this section we discussed numerical approximations for the single server queue with breakdowns. SSB has the advantage of providing bounds for infinite queues; unfortunately, the numerical quality of the bounds is rather poor. In light of Theorem \[th:relerror\], this comes as no surprise. SEB proved to be numerically very efficient for the model but required that a finite queue be studied. There is, however, an interesting link between the two approaches, as the techniques developed for SSB lend themselves to establishing lower bounds of convergence for series expansions. Conclusion ========== Perturbation bounds for Markov chains have been intensively studied in the literature. Condition number bounds are attractive as they provide uniform perturbation bounds. Unfortunately, due to their simple structure they fail to capture the true non-linear dependence of the stationary distribution on the Markov kernel. SSB, which provides a non-linear expression in the size of the perturbation, overcomes this drawback and is the only bound applicable in the case of an infinite state space. We introduced a new family of bounds based on a series expansion approach. As illustrated by a series of examples, both analytical and numerical, our new bounds yield good results and have the desirable property that the relative error vanishes when the size of the perturbation tends to zero. A realistic example from queueing theory illustrated the potential use of perturbation bounds in robustness analysis. Acknowledgement {#acknowledgement .unnumbered} =============== The authors are grateful to an anonymous reviewer for valuable remarks on an earlier version of the paper. 
[^1]: Edited version of the paper that appeared in [*Markov Processes and Related Fields*]{}, [**22**]{}, pages 227-265, 2016. [^2]: We use the transpose here since in this paper all vectors are by convention column vectors. [^3]: As exemplified in (\[eq:tr\]), distributions on $ S $ are represented as row vectors in vector-matrix notation. Since by convention a tuple $ \mu= ( \mu_i : i \in S ) $ representing a distribution on $ S $ when written as vector $ \mu \in [0,1]^S$ becomes a column vector, we explicitly denote $ \mu $ in transposed form to make it a row vector, i.e., we write $ \mu^\top$, see (\[eq:tr\]). When it causes no confusion we will refer to either $ \mu $ or $ \mu^\top $ as distributions. For example, in (\[eq:tr\]) we may refer to $ \mu $ as well as $ \mu^\top $ as the initial distribution. [^4]: Note that this implies for $ x \in \mathbb{R}^S$: $ || x^\top ||_\infty = || x ||_1 $ and $|| x^\top ||_ 1 = \| x \|_\infty $.
--- abstract: 'In this work we consider a stochastic version of the Primitive Equations (PEs) of the ocean and the atmosphere and establish the existence and uniqueness of pathwise, strong solutions. The analysis employs novel techniques in contrast to previous works [@EwaldPetcuTemam], [@GlattHoltzZiane1] in order to handle a general class of nonlinear noise structures and to allow for physically relevant boundary conditions. The proof relies on Cauchy estimates, stopping time arguments and anisotropic estimates.' author: - | Nathan Glatt-Holtz and Roger Temam\ \ \ \ bibliography: - 'ref\_teressa.bib' title: 'Pathwise Solutions of the 2-D Stochastic Primitive Equations' --- Dedicated to Alain Bensoussan on the occasion of his 70th birthday. Introduction {#sec:introduction} ============ The Primitive Equations (PEs) are widely regarded as a fundamental description of geophysical scale fluid flows. They provide the analytical core of large General Circulation Models (GCMs) that are at the forefront of numerical simulations of the earth’s ocean and atmosphere (see e.g. [@Trenberth1]). In view of the wide progress made in computation, the need has arisen to better understand and model some of the uncertainties which are contained in these GCMs. This is the so-called problem of “parameterization”. Besides all of the physical forms of parameterization [@Trenberth1; @ColmanPotter2; @ColmanPotter1], stochastic modeling has appeared as one of the major modes in the contemporary evolution of the field (see [@EP09; @PenlandSardeshmukh; @PenlandEwald1; @Rose1; @LeslieQuarini1; @MasonThomson; @BernerShuttsLeutbecherPalmer; @ZidikheriFrederiksen] and also [@GlattHoltzTemamTribbia1]). In this context there is a clear need to better understand the numerical and analytical underpinnings of stochastic partial differential equations. In the present article we will establish the global well-posedness of the stochastically forced Primitive Equations of the ocean in dimension two. 
While this system has been treated in a simplified form in previous works, for the case of additive noise [@EwaldPetcuTemam] and nonphysical boundary conditions [@GlattHoltzZiane1], our aim here is to go further and treat a more physically realistic version of these equations in the context of a multiplicative noise. In the formulation herein we face two new fundamental difficulties in contrast to previous work. Firstly, due to the imposed boundary conditions we lose higher-order cancellations in the nonlinear terms. This complicates the a priori estimates, which in turn prevents the usage of the more direct compactness arguments adopted in [@Breckner], [@GlattHoltzZiane1]. On the other hand, due to the nonlinear multiplicative noise structure, the system may not be transformed into a random PDE as in [@EwaldPetcuTemam]. For this reason we are not able to treat the probabilistic dependence as a parameter in the problem. The analysis therefore requires the use of advanced tools both from stochastic analysis, namely continuous time martingale theory and stopping time arguments, and PDE theory, which we treat in detail in a separate work [@GlattHoltzTemam1]. A significant literature exists concerning the Navier-Stokes equations driven by a multiplicative volumic white noise forcing. See [@BensoussanTemam; @Viot1; @Cruzeiro1; @CapinskiGatarek; @Flandoli1; @MikuleviciusRozovskii4; @ZabczykDaPrato2; @Breckner; @BensoussanFrehse; @BrzezniakPeszat; @MikuleviciusRozovskii2; @GlattHoltzZiane2]. While our point of view is similar to some of these works, we would like to point out that the Primitive Equations, notwithstanding very recent results on global well-posedness in 3D, are technically more involved than the Navier-Stokes equations. 
This article is dedicated to Alain Bensoussan on the occasion of his 70th birthday with friendship and admiration, and, for the second author (RT), sweet reminiscences of many interactions, from Junior High School, to the early papers on stochastic partial differential equations [@BensoussanTemam2], [@BensoussanTemam], on the subject of this article, and to many more interactions over the years. Presentation of the 2D Stochastic PEs {#ss1.1a} ------------------------------------- The 2D stochastic Primitive Equations take the form \[eq:PE2DBasic\] $$\begin{gathered} {\partial_{t}} u + u {\partial_{x}} u + w {\partial_{z}} u - \nu \Delta u -f v + {\partial_{x}} p = F_u + \sigma_u(\mathbf{v},T) \dot{W}_1, \label{eq:PEMoment1}\\ {\partial_{t}} v + u {\partial_{x}} v + w {\partial_{z}} v - \nu \Delta v + f u = F_v + \sigma_v(\mathbf{v},T) \dot{W}_2, \label{eq:PEMoment2}\\ {\partial_{z}}p = - \rho g, \label{eq:PEHydroStatic}\\ {\partial_{x}} u + {\partial_{z}} w = 0, \label{eq:PEDivFree}\\ {\partial_{t}} T + u {\partial_{x}} T + w {\partial_{z}} T - \mu \Delta T = F_T + \sigma_T(\mathbf{v},T) \dot{W}_3, \label{eq:PETempCouple}\\ \rho = \rho_0( 1 - \beta_T ( T - T_0)). \label{eq:PEDensityRelationEmp} \end{gathered}$$ This two dimensional model may be derived from the classical three dimensional formulation by positing invariance in one of the horizontal directions, namely the $y-$ (south-north) direction. Here $(\mathbf{v}, w) = (u,v,w)$, $T$, $\rho$ denote respectively the flow field, the temperature and the density of the fluid being modeled. The coefficients $\nu$, $\mu$ account for the molecular viscosity and the rate of heat diffusion. A further parameter $f$, which is a function of the earth’s rotation, appears in an antisymmetric term and is taken constant (see below). The terms $F_u$, $F_v$ and $F_T$ correspond to external sources of horizontal momentum and heat. 
While the first two terms do not usually appear in practice, we retain them here for mathematical generality and to allow for the possible treatment, not carried out here, of non-homogeneous boundary conditions. The white noise processes $\dot{W}_i$, the raison d’être of the present work, may be written in the expansions $$\label{eq:noiseFormalExp} \left( \begin{split} \sigma_{u}(\mathbf{v},T) \dot{W}_1\\ \sigma_{v}(\mathbf{v},T) \dot{W}_2\\ \sigma_{T}(\mathbf{v},T) \dot{W}_3\\ \end{split} \right) = \sigma_{\mathbf{v},T} (U) \dot{W} = \sum_k \sigma^k_{\mathbf{v},T}(U) \dot{W}^k.$$ The $\dot{W}^k$s may be interpreted as the time derivatives of a sequence of independent standard 3-D Brownian motions. However, since the sample paths of Brownian motion are nowhere differentiable, we make rigorous sense of these terms in a time-integrated sense, appealing to the theory of stochastic integration, which we consider in the Itō sense. From the physical point of view these terms may be introduced in the model as a means to “parameterize” physical and numerical uncertainties. We consider the evolution of (\[eq:PE2DBasic\]) over a rectangular domain $\mathcal{M} = (0,L) \times (-h,0)$ and label the boundary $\Gamma_i = (0,L) \times \{0\}$, $\Gamma_b = (0,L) \times \{-h\}$ and $\Gamma_l = \{0, L\} \times (-h,0)$. 
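Numerically, noise of the form (\[eq:noiseFormalExp\]) is handled through its time-integrated increments. The scalar toy model below is entirely hypothetical (the drift, the coefficients $\sigma^k(u) = u/2^k$ and the truncation level $K$ are our own choices for illustration); it applies an explicit Euler–Maruyama step driven by a truncated sum of independent Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar analogue du = -u dt + sum_k sigma_k(u) dW^k with
# sigma_k(u) = u / 2**k, chosen only so the truncated noise expansion
# is summable; none of these choices come from the paper.
T_end, N, K = 1.0, 1000, 8
dt = T_end / N
u = np.empty(N + 1)
u[0] = 1.0
for step in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), size=K)     # independent Ito increments
    noise = sum(u[step] / 2**k * dW[k] for k in range(K))
    u[step + 1] = u[step] - u[step] * dt + noise  # explicit Euler-Maruyama
print(u[-1])
```

The increments $dW^k$ are sampled per time step, reflecting the fact that only the integrals $\int_0^t \sigma^k \, dW^k$, never the derivatives $\dot{W}^k$, are mathematically meaningful.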
We posit the physically realistic boundary conditions \[eq:PE2DBasicBC\] $$\begin{gathered} {\partial_{z}} \mathbf{v} + \alpha_{\mathbf{v}} \mathbf{v} = 0, \quad w = 0, \quad {\partial_{z}} T + \alpha_T T = 0, \quad \textrm{ on } \Gamma_i, \label{eq:BCTop}\\ \mathbf{v} = 0, \quad {\partial_{x}}T = 0, \quad \textrm{ on } \Gamma_l, \label{eq:BCSides}\\ \mathbf{v} = 0, \quad w= 0, \quad {\partial_{z}}T = 0, \quad \textrm{ on } \Gamma_b.\footnotemark \label{eq:BCBot} \end{gathered}$$ The equations and boundary conditions are supplemented by initial conditions for $u$, $v$ and $T$, that is $$\label{eq:basicInitialCond} u = u_0, \quad v = v_0, \quad T = T_0, \quad \textrm{ at } t = 0.$$ The Primitive Equations may be derived from the compressible Navier-Stokes equations via a combination of empirical observation and scale analysis. In particular, since deviations of the density of the fluid from a mean value are small at geophysical scales, the so-called Boussinesq approximation justifies treating the flow as incompressible.[^1] Another crucial feature, that the ocean and atmosphere form a thin layer on the earth’s surface, leads to the hydrostatic approximation, which reduces the third momentum equation to the hydrostatic balance (\[eq:PEHydroStatic\]). Beyond its obvious numerical significance, this anisotropy in the governing equations has many interesting theoretical consequences. We refer the interested reader to the classical texts [@RoisinBeckers] and [@Pedlosky] for an introduction from the physical point of view. Particularly in view of the numerous complications involved in extending the existing deterministic model to the stochastic setting, we have made some simplifications for the purposes of clarity of presentation. The system (\[eq:PE2DBasic\]) is a description of the earth’s ocean, but all of what follows can be easily extended to the PEs of the atmosphere or of the coupled atmosphere-ocean system (see [@LionsTemamWang3]). We assume moreover that the $\beta$-plane approximation is valid. 
This assumption, that the earth is locally flat, is appropriate for regional climatological studies. Of course, for larger scales one must include additional terms that account for the curvature of the earth. Since it is convenient to work in the rotating reference frame of the earth’s surface, an additional antisymmetric term appears in the momentum equations. The Coriolis parameter in this term, which we denote by $f$, depends on the earth’s angular velocity and the local latitude of the region under investigation. In the context of the $\beta$-plane approximation, $f$ is usually a linear function of $y$, $f = f_{0}(1 + \beta y)$. Here we take $f$ to be constant, but once again the proof is easily modified to treat the more general case. Several other terms have been simplified or deleted which may be reintroduced in their full form with no new complications to the mathematical framework or to the proof of the main theorem. We neglect the dependence of the density on the salinity of the ocean. We therefore drop the diffusion equation that accounts for variations in salt concentration in the fluid. We also ignore further, possibly anisotropic, diffusion terms that may appear in both the momentum and temperature equations to account for subgrid scale processes, the so-called eddy diffusion terms. Finally, as noted above, we consider only the case of homogeneous boundary conditions. Dating back to a series of seminal works in the early 1990s [@LionsTemamWang1], [@LionsTemamWang2], and [@LionsTemamWang3], a significant mathematical literature has developed around the Primitive Equations. In a major breakthrough, the global well-posedness in 3-D was established [@CaoTiti], [@Kob06], [@Kob07]. Subsequent work of [@ZianeKukavica] developed alternative proofs, which allow for the treatment of physically relevant boundary conditions. 
For the two-dimensional deterministic setting we mention [@PetcuTemamWirosoetisno], [@BreschKazhikhovLemoine], where both the cases of weak and strong solutions are considered. Despite these breakthroughs in the 3-D system, the 2-D Primitive Equations seem to be significantly more difficult mathematically than the 2-D Navier-Stokes equations. For instance, it is still an open problem as to whether weak solutions of the Primitive Equations in the deterministic setting are unique. This is a classical exercise for the 2-D Navier-Stokes equations. In any case we refer the interested reader to the recent survey papers [@RousseauTemamTribbia] and [@PetcuTemamZiane] (appearing in [@TemamTribbia]), which provide a systematic overview of the deterministic theory. Note that, as regards notational conventions and earlier deterministic results, the present article relies heavily on this latter work. While the deterministic mathematical theory is now on firm ground, the stochastic theory remains underdeveloped. In [@GlattHoltzZiane1] the existence of pathwise, $z$-weak solutions was established for a simplified model with nonlinear multiplicative noise and non-physical boundary conditions. A more extended system was considered in [@EwaldPetcuTemam], again for the so-called $z$-weak solutions, but with additive noise and periodic boundary conditions. Adapting the methods of [@CaoTiti], the 3-D case with additive noise and nonphysical boundary conditions was recently treated in [@GuoHuang]. In contrast, beginning with the seminal work [@BensoussanTemam], extensive investigations for the stochastic Navier-Stokes equations have been undertaken. For weak or martingale solutions we mention [@Viot1], [@Cruzeiro1], [@CapinskiGatarek], [@Flandoli1], [@MikuleviciusRozovskii4] and further references therein. Regarding pathwise solutions we mention [@ZabczykDaPrato2], [@Breckner], [@BensoussanFrehse], [@BrzezniakPeszat], [@MikuleviciusRozovskii2]. 
In recent joint work of the first coauthor [@GlattHoltzZiane2] the local and global theory of pathwise solutions in $H^1 = W^{1,2}$ was established. Some of the tools and techniques developed in this final reference play a central role herein. In the present work we will establish the global existence and uniqueness of a pathwise solution to (\[eq:PE2DBasic\]), supplemented by the boundary and initial conditions above, for all $U_0 = (u,v, T) \in (H^1)^3$. We conclude this introduction with an outline of the basic difficulties we encounter along with the main steps in the proof. Basic Estimates and some Difficulties Particular to the Stochastic Case {#sec:BasicEst} ----------------------------------------------------------------------- The first step in the proof is to establish the local existence, up to a strictly positive stopping time $\tau$, of a solution $U$ for (\[eq:PE2DBasic\]) in $L^\infty_t H^1_x \cap L^2_tH^2_x$. Here and throughout the rest of the work $U$ stands for the (prognostic) unknowns in the problem, $U = (u,v,T) = (\mathbf{v},T)$; $U^{(n)}$ will denote some Galerkin approximation of $U$. Having implemented a Galerkin scheme, the passage to the limit is delicate, as it is not evident a priori how to uniformly choose $\tau > 0$ such that $$\sup_n {\mathbb{E}}\left( \sup_{0 \leq t \leq \tau} |U^{(n)}|^2_{H^1} + \int_0^\tau |U^{(n)}|^2_{H^2} \, dt \right) < \infty.$$ Even if such a $\tau$ were to be found, it would remain unclear how to infer the necessary sub-sequential (strong) compactness without changing the underlying stochastic basis. To overcome these difficulties we follow [@GlattHoltzZiane2] and perform Cauchy type estimates for the Galerkin solutions $\{U^{(n)}\}_{n \geq 1}$ associated with (\[eq:PE2DBasic\]) up to a carefully chosen sequence of stopping times. Since we have sufficient uniform control of the growth of $U^{(n)}$ at time zero, we are able to pass to the limit almost surely up to a strictly positive time. 
Note that this stage of the investigation required us to establish some novel bounds on the nonlinear portion of the equation in $H^1$ (see (\[eq:H1EstBU\]) below) and to make careful use of the equivalence of some fractional order spaces. Since a significant portion of this analysis is non-probabilistic in character, we have relegated these delicate and technical points to a separate work, [@GlattHoltzTemam1]. With a local solution $(U,\tau) = ((u,v,T), \tau)$ in hand, further a posteriori estimates are needed to preclude the possibility of a finite-time blowup. In previous work in the deterministic setting (which corresponds to the admissible case, $\sigma \equiv 0$) successive estimates on $U$, ${\partial_{z}} u$ and ${\partial_{x}} u$ in $L^\infty_t L^2 \cap L^2_tH^1$ were conducted to finally obtain an estimate for $U$ in $L^\infty_t H^1_{x} \cap L^2_t H^2_{x}$. See [@PetcuTemamZiane]. For the present stochastic setting several difficulties emerge which prevent a trivial repetition of these estimates. The first difficulty appears when one tries to make estimates for ${\partial_{z}} u$. If, on the one hand, we take ${\partial_{z}}$ of (\[eq:PEMoment1\]) and then apply Itō’s formula to determine an evolution equation for $|{\partial_{z}} u|^2_{L^2(\mathcal{M})}$, we encounter terms of the form $$\int_{\mathcal{M}} {\partial_{zzz}} u \, {\partial_{z}} u \, d\mathcal{M}.$$ Due to (\[eq:BCTop\]) and (\[eq:BCBot\]), second-order terms occur on the boundary that seem to be intractable a priori. If, on the other hand, following [@PetcuTemamZiane Section 3.3.4], we attempt to multiply (\[eq:PEMoment1\]) by $Q(-{\partial_{zz}} u)$, it is not clear what the appropriate stochastic interpretation of $du \cdot Q (-{\partial_{zz}} u)$ should be. Here $Q$ is the orthogonal complement of the vertical averaging operator and is needed to get rid of the pressure in the governing equations (cf. the definition of $Q$ below and the remarks immediately following). 
To address these difficulties we introduce an auxiliary linear stochastic evolution system with a diffusion governed by the now established local solution of the original system. We use this system to “subtract off” the noise terms from (\[eq:PEMoment1\]) at the cost of a number of new random terms which we must estimate. While we are indeed able to treat these terms, at each order our estimates require almost sure bounds (in $\omega$) on the norms of the solution at the previous order. For this reason, an involved stopping time argument must be employed at the final step. Here we make repeated use of a novel abstract result concerning a generic class of stochastic processes (see Proposition \[thm:stArg1\]) which streamlines the analysis. Abstract Setting ================ We begin with a review of the mathematical setting for the stochastic Primitive Equations and define the pathwise solutions we will consider in this work. The deterministic and stochastic preliminaries are treated successively. For the deterministic elements we largely follow [@PetcuTemamZiane], to which we refer the reader for a more detailed treatment. For more theoretical background on the general theory of stochastic evolution systems we mention the classical book [@ZabczykDaPrato1] or the more recent treatment in [@PrevotRockner]. The Hydrostatic Approximation {#sec:HydrostaticAprox} ----------------------------- The hydrostatic approximation, in concert with the incompressibility and the boundary conditions, leads to several simple observations that allow a useful reformulation of the system. This will motivate the mathematical set-up below. First we consider the third component of the flow $w$. Notice that by integrating the divergence-free condition (\[eq:PEDivFree\]) and making use of the boundary condition for $w$, we infer that $$\label{eq:diagonosticVar} w(x,z) = - \int^0_z {\partial_{z}} w(x, \bar{z}) d \bar{z} = \int^0_z {\partial_{x}} u(x, \bar{z}) d\bar{z}.$$ Accordingly $w = w(u)$ is seen to be an explicit functional of $u$[^2]. 
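The diagnostic relation (\[eq:diagonosticVar\]) can be checked symbolically for any smooth velocity with vanishing vertical mean. In the SymPy sketch below, the profile $u$ is a hypothetical choice, selected only because it satisfies the compatibility condition $\smallint_{-h}^0 u \, dz = 0$:

```python
import sympy as sp

x, z, zbar = sp.symbols('x z zbar', real=True)
h = sp.Symbol('h', positive=True)

# Hypothetical profile with zero vertical mean over (-h, 0).
u = sp.sin(x) * sp.cos(sp.pi * (2 * z / h + 1))

# w(u) as given by the diagnostic relation: w = int_z^0 d_x u dzbar.
w = sp.integrate(sp.diff(u, x).subs(z, zbar), (zbar, z, 0))

# Incompressibility d_x u + d_z w = 0 is recovered ...
assert sp.simplify(sp.diff(u, x) + sp.diff(w, z)).equals(0)
# ... together with w = 0 at the surface z = 0 and at the bottom z = -h.
assert sp.simplify(w.subs(z, 0)).equals(0)
assert sp.simplify(w.subs(z, -h)).equals(0)
print('diagnostic relation verified')
```

The vanishing of $w$ at $z=-h$ is exactly the constraint $\smallint_{-h}^0 {\partial_{x}} u \, d\bar{z} = 0$ discussed next in the text.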
Also notice that, according to the boundary conditions we impose on $w$, $\smallint_{-h}^0 {\partial_{x}} u \, d \bar{z} = 0$. This implies that $\smallint_{-h}^0 u \, d \bar{z}$ is constant in $x$ and so, due to the lateral boundary condition, we conclude that $$\label{eq:divFreeStandIn2D} \int_{-h}^0 u dz = 0.$$ Next we consider the pressure. By integrating the hydrostatic balance equation and making use of the linear dependence of the density on the temperature we deduce $$\label{eq:pressureDecomp} p_s(x) - p(x,z) = \int_z^0 {\partial_{z}} p(x,\bar{z}) d \bar{z} = - g \rho_0 \int_z^0 ( 1 - \beta_T ( T(x,\bar{z}) - T_0)) d \bar{z}.$$ Here $p_s$ is the surface pressure, which is unknown and a function of the horizontal variable only. We have therefore decomposed the pressure into two components, the second of which couples the first momentum equation to the heat diffusion equation. Rearranging the above and taking a partial derivative in $x$, we arrive at $$\label{eq:pressureDecomp} {\partial_{x}}p = {\partial_{x}}p_s - \beta_T g \rho_0 \int_z^0 {\partial_{x}} T d \bar{z}.$$ With the above considerations we now rewrite the system as: \[eq:PE2Dreform\] $$\begin{gathered} \begin{split} {\partial_{t}} u + u {\partial_{x}} u + w(u) {\partial_{z}} u - \nu \Delta u - f v + {\partial_{x}} p_s &- \beta_T g \rho_0 \int_z^0 {\partial_{x}} T d \bar{z} \\ =& F_u +\sigma_u(\mathbf{v},T) \dot{W}_1, \end{split} \label{eq:PEMoment1R}\\ {\partial_{t}} v + u {\partial_{x}} v + w(u) {\partial_{z}} v - \nu \Delta v + f u = F_v + \sigma_v(\mathbf{v},T) \dot{W}_2, \label{eq:PEMoment2R}\\ w(u) = \int^0_z {\partial_{x}} u d \bar{z}, \quad \int_{-h}^0 u dz = 0, \label{eq:PEDivFreeR}\\ {\partial_{t}} T + u {\partial_{x}} T + w(u) {\partial_{z}} T - \mu \Delta T = F_T + \sigma_T(\mathbf{v},T) \dot{W}_3. \label{eq:PETempCoupleR} \end{gathered}$$ Basic Function Spaces {#sec:BasicFnSpaces} --------------------- The main function spaces used are defined as follows. 
Take: $$\begin{split} H := \left\{U = (u, v,T) \in L^2(\mathcal{M})^3: \int_{-h}^0 u dz = 0 \right\}. \end{split}$$ We equip $H$ with the inner product[^3] $$ (U,U^\sharp) := \int_{\mathcal{M}} \mathbf{v}\cdot \mathbf{v}^\sharp d \mathcal{M} + \int_{\mathcal{M}} T T^\sharp d \mathcal{M}, \quad U = (\mathbf{v}, T), U^\sharp = (\mathbf{v}^\sharp, T^\sharp).$$ Here and below we shall make use of the vertical averaging operator $\mathit{P}\phi = \frac{1}{h} \smallint_{-h}^0 \phi(\bar{z}) d\bar{z}$ and its orthogonal complement $\mathit{Q}\phi = \phi - \mathit{P} \phi$. Note that the projection operator $\Pi: L^2(\mathcal{M})^3 \rightarrow H$ may be explicitly defined according to $U \mapsto (\mathit{Q} u, v, T)$. We also define $$ \begin{split} V := \left\{ U = (u, v, T) \in H^1(\mathcal{M})^3: \int_{-h}^0 u dz = 0, \mathbf{v} = 0 \textrm{ on } \Gamma_l \cup \Gamma_b \right\}. \end{split}$$ Here we take the inner product $((\cdot, \cdot)) = \nu ((\cdot, \cdot))_1 + \mu (( \cdot, \cdot ))_2$ where, for given $U = (\mathbf{v},T), U^\sharp =(\mathbf{v}^\sharp, T^\sharp)$ $$ \begin{split} ((U,U^\sharp))_1 &:= \int_{\mathcal{M}} {\partial_{x}} \mathbf{v} \cdot {\partial_{x}} \mathbf{v}^\sharp + {\partial_{z}} \mathbf{v} \cdot {\partial_{z}} \mathbf{v}^\sharp \,d \mathcal{M} + \alpha_{\mathbf{v}}\int_{\Gamma_i} \mathbf{v} \cdot \mathbf{v}^\sharp \,dx,\\ ((U,U^\sharp))_2 &:= \int_{\mathcal{M}} {\partial_{x}}T{\partial_{x}} T^\sharp + {\partial_{z}}T {\partial_{z}} T^\sharp\,d \mathcal{M} + \alpha_T \int_{\Gamma_i} T T^\sharp \,dx. \\ \end{split}$$ Note that under these definitions a Poincaré type inequality $|U| \leq C \|U\|$ holds for all $U \in H^1(\mathcal{M})^3 \supset V$. Moreover the norms $\| \cdot \|_{H^{1}}$, $\| \cdot \|$ may be seen to be equivalent over all of $H^1(\mathcal{M})^3$. 
Even if $U$ is very regular, many of the main terms in the abstract formulation of (\[eq:PE2Dreform\]) do not belong to $V$ (see (\[eq:linLowerOrderDef\]), (\[eq:NLTerm1\]), (\[eq:NLTerm\])). As such, we shall also make use of some additional auxiliary spaces: $$\begin{split} \tilde{V} &:= \left\{ U = (u, v, T) \in H^1(\mathcal{M})^3: \int_{-h}^0 u dz = 0, \mathbf{v} = 0 \textrm{ on } \Gamma_l \right\},\\ \mathcal{Z} &:= \left\{ U = (u, v, T) \in H^1(\mathcal{M})^3: \mathbf{v} = 0 \textrm{ on } \Gamma_l \right\}. \end{split}$$ As for $V$ we endow both spaces with the norm $\| \cdot \|$. One may verify that $\Pi$ maps $\mathcal{Z}$ into $\tilde{V}$ and is continuous on $H^1(\mathcal{M})^3$. Finally we take $V_{(2)} = H^2(\mathcal{M})^3 \cap V$ and equip this space with the classical $H^2(\mathcal{M})$ norm which we denote by $| \cdot |_{(2)}$. Since a considerable portion of the work below will consist in making estimates for the first momentum equation (\[eq:PE2DBasic\]) (or equivalently (\[eq:PEMoment1R\])) we set for simplicity $$| u |_{L^2(\mathcal{M})} := |u|, \quad |\nabla u |_{L^{2}(\mathcal{M})} := \|u\|, \quad | u |_{H^2(\mathcal{M})} := | u |_{(2)},$$ for $u \in L^{2}(\mathcal{M})$ or $H^{1}(\mathcal{M})$ or $H^{2}(\mathcal{M})$. Note that since we will always use a lower case $u$ (or as needed $u^\sharp$, $u^\flat$) for the first component of elements in the spaces $H, V, V_{(2)}$ the context will be clear. The deterministic framework {#sec:determ-fram} --------------------------- The linear second order terms in the equation are captured in the Stokes-type operator $A$, which is understood as a bounded operator from $V$ to $V'$ via $\langle A U, U^{\sharp} \rangle = ((U, U^{\sharp}))$. The additional terms in the variational formulation of this portion of the equation capture the Robin boundary condition (\[eq:BCTop\]).
They may be formally derived by multiplying the terms $-\nu\Delta u, -\nu \Delta v, -\mu \Delta T$ by test functions $u^{\sharp}, v^{\sharp}, T^{\sharp}$, integrating over $\mathcal{M}$ and integrating by parts. We shall make use of the subspace $D(A) \subset V_{(2)}$ given by $$\begin{split} D(A) = \{ U = (\mathbf{v}, T) \in V_{(2)}:\ & {\partial_{z}} \mathbf{v} +\alpha_{\mathbf{v}} \mathbf{v} = 0, {\partial_{z}} T + \alpha_T T = 0 \textrm{ on } \Gamma_i,\\ &{\partial_{x}} T = 0 \textrm{ on } \Gamma_l, {\partial_{z}} T = 0 \textrm{ on } \Gamma_b \}. \end{split}$$ On this space we may extend $A$ to an unbounded operator by defining $$AU = \left( \begin{split} -\nu \mathit{Q} \Delta u\\ -\nu \Delta v\\ -\mu \Delta T\\ \end{split} \right), \quad U \in D(A).$$ Since $A$ is self-adjoint, with a compact inverse $A^{-1} : H \rightarrow D(A)$, we may apply the standard theory of compact, symmetric operators to guarantee the existence of an orthonormal basis $\{\Phi_k\}_{k \geq 0}$ for $H$ of eigenfunctions of $A$ with the associated eigenvalues $\{\lambda_k\}_{k \geq 0}$ forming an unbounded, increasing sequence. Note that by the regularity results in [@Ziane1] or [@TemamZiane1] we have $\Phi_k \in D(A) \subset V_{(2)}$. Define $$ H_n = \operatorname{span} \{\Phi_1, \ldots, \Phi_n\}.$$ Take $P_n$ and $Q_n = I - P_n$ to be the projections from $H$ onto $H_n$ and its orthogonal complement respectively. For $m > n$ let $P^{n}_{m} = P_{m} -P_{n}$. Note that in some previous works the second component of the pressure (cf. [@PetcuTemamZiane Section 2]) is included in the definition of the principal linear operator $A$.
Since this breaks the symmetry of $A$ we relegate such terms to a separate, lower order operator $A_p$, which we define from $V'$ via $\langle A_p U, U^\sharp \rangle:= \beta_T g \rho_0 \int_{\mathcal{M}} \int_z^0 T d \bar{z} {\partial_{x}} u^\sharp d\mathcal{M}, \forall U^\sharp\in V.$ Taking into account the boundary conditions for $u^\sharp$ on $\Gamma_\ell\enspace (x=0,L),$ this may be extended to a map $A_{p}: V \rightarrow H$ via $$\label{eq:linLowerOrderDef} A_p U = \left( \begin{array}{c} -\beta_T g \rho_0 Q \left( \int_z^0 {\partial_{x}} T d \bar{z} \right) \\ 0\\ 0 \end{array} \right).$$ If $U \in D(A)$, then $A_{p} U \in \tilde{V}$ and we have that $$\label{eq:estimateAloworder} \begin{split} | A_p U| \leq c \|U\|, \quad \| A_p U \| \leq c |U|_{(2)}.\\ \end{split}$$ We next capture the nonlinear portion of (\[eq:PE2DBasic\]). Accordingly we *define* the diagnostic function $w$ by setting $$\label{2.7a} w(U) = w(u) = \int^0_z {\partial_{x}} u d \bar{z}, \quad U = (u, v, T) \in V.$$ For $U =(\mathbf{v}, T), U^\sharp=(\mathbf{v}^\sharp,T^\sharp) \in V$ we take $B(U,U^\sharp) = B_1(U,U^\sharp) + B_2(U,U^\sharp)$ where $$\label{eq:NLTerm1} B_1(U,U^\sharp) := \left( \begin{split} \mathit{Q} (u {\partial_{x}}u^\sharp)\\ u {\partial_{x}}v^\sharp\\ u {\partial_{x}}T^\sharp\\ \end{split} \right) = \left( \begin{split} B_1^1(u,u^\sharp)\\ B_1^2(u,v^\sharp)\\ B_1^3(u,T^\sharp)\\ \end{split} \right) $$ and $$\label{eq:NLTerm} B_2(U,U^\sharp) := \left( \begin{split} \mathit{Q} (w(u) {\partial_{z}} u^\sharp)\\ w(u) {\partial_{z}} v^\sharp \\ w(u) {\partial_{z}} T^\sharp \\ \end{split} \right) = \left( \begin{split} B_2^1(u,u^\sharp)\\ B_2^2(u,v^\sharp)\\ B_2^3(u,T^\sharp)\\ \end{split} \right). $$ We also set $B^j = B^j_1 + B^j_2$, $j = 1,2,3$. We summarize some properties of $B$ needed in the sequel. \[thm:Best\] $B$ is well defined as a bilinear and continuous map from $V\times V$ to $V'$, and from $V \times V_{(2)}$ and $V_{(2)}\times V$ to $H$.
Moreover $B$ satisfies the following properties and estimates: - For any $U, U^\sharp \in V$, $\langle B(U,U^\sharp),U^\sharp \rangle = 0$. - For $U, U^\sharp, U^\flat \in V$ $$\label{eq:3dtypeEstimateFor2DPE} | \langle B(U, U^\sharp), U^\flat \rangle | \leq c \|U \| \|U^\sharp \| | U^\flat |^{1/2} \| U^\flat \|^{1/2}.$$ - On the other hand if we assume that $U \in V$, $U^\sharp \in V_{(2)}$ and $U^\flat \in H$ then $$\label{eq:strongTypeEstimate} |\langle B(U, U^\sharp), U^\flat \rangle| \leq c \|U\| \|U^\sharp\|^{1/2} |U^\sharp|^{1/2}_{(2)} |U^\flat|.$$ In particular, for $U \in V_{(2)}$, $$\label{eq:L2NormBUv2} |B(U,U)|^2 \leq c\|U\|^{3} |U|_{(2)}.$$ Also if $U = (\mathbf{v}, T) = (u,v, T) \in V_{(2)}$, $U^\sharp\in V$ and $U^\flat\in H$, then $$\label{eq:strongTypeEstimateFirstCompControl} | \langle B(U, U^\sharp), U^\flat \rangle | \leq c \|u\|^{1/2} |u|_{(2)}^{1/2}\|U^\sharp\| |U^\flat|.$$ - For $U \in V_{(2)}$, $B(U,U) \in \tilde{V}$ and satisfies the estimate $$\label{eq:H1EstBU} \begin{split} \|B(U,U)\|^2 \leq& c \|U\| |U|^3_{(2)}. \end{split}$$ - Given $U, U^\sharp \in V_{(2)}$, $U^\flat \in H$ $$\label{eq:genericFirstCompEstL2ClassComp1} \begin{split} | \langle B^1_1(u, u^\sharp), u^\flat \rangle| \leq c |u |^{1/2} | u |_{(2)}^{1/2} | {\partial_{x}} u^\sharp | |u^\flat |, \end{split}$$ $$\label{eq:genericFirstCompEstL2ClassComp2} \begin{split} | \langle B^1_1(u, u^\sharp), u^\flat \rangle| \leq c |u |^{1/2} \|u \|^{1/2} |{\partial_{x}}u^\sharp |^{1/2} \|{\partial_{x}}u^\sharp \|^{1/2} |u^\flat |.\\ \end{split}$$ On the other hand $$\label{eq:genericFirstCompEstL2MixedComp1} \begin{split} | \langle B^1_2(u, u^\sharp), u^\flat \rangle| \leq c |{\partial_{x}} u | |{\partial_{z}} u^\sharp |^{1/2} \|{\partial_{z}} u^\sharp \|^{1/2} |u^\flat |, \end{split}$$ $$\label{eq:genericFirstCompEstL2MixedComp2} \begin{split} | \langle B^1_2(u, u^\sharp), u^\flat \rangle| &\leq c \| u \|^{1/2} |u |_{(2)}^{1/2} | {\partial_{z}} u^\sharp | |u^\flat |.
\end{split}$$ - For $U = (\mathbf{v}, T) \in D(A)$ $$\label{eq:semiCancelpdz} \langle B^1(u, u), -{\partial_{zz}} u\rangle = - \frac{2}{h} \int_\mathcal{M} u {\partial_{x}} u \left( \alpha_{\mathbf{v}} u(x,0) + {\partial_{z}} u(x, -h) \right) d\mathcal{M}$$ which admits the estimate $$\label{eq:semiCancelpdzEst} \begin{split} |\langle B^1(u, u), -{\partial_{zz}} u\rangle| &\leq c (|u|\| u \|^2 +|{\partial_{z}}u|^{1/2} \|{\partial_{z}}u\|^{1/2} |u|^{1/2} \| u \|^{3/2}). \end{split}$$ The continuity properties of $B$ as well as the basic cancellation property (i) are well established in the literature. The estimates (\[eq:3dtypeEstimateFor2DPE\]) and (\[eq:strongTypeEstimate\]) may be established as for the classical Navier-Stokes systems (see, for example, [@Temam1]). On the other hand the remaining estimates may be proved with anisotropic techniques. See [@TemamZiane1] or [@GlattHoltzZiane1]. The property (\[eq:semiCancelpdz\]), which is new and requires extensive computations, may be found in [@GlattHoltzTemam1]. We next capture the Coriolis forcing with the bounded operator $E: H \rightarrow H$ given by $$\label{eq:CorTerm} EU := \left( \begin{array}{c} -Q f v\\ f u\\ 0 \end{array} \right).$$ We observe that $E$ is also continuous from $V$ to $\tilde{V}$ and that $$\label{eq:CorbndOperator} | EU | \leq c|U|, \quad \|EU\| \leq c\|U\|.$$ Finally, for brevity of notation we shall sometimes write $$\label{eq:nonlinearExt} N(U) = A_{p}U + B(U, U) + EU, \quad U \in V.$$ The stochastic framework: nonlinear, multiplicative white noise forcing {#sec:stoch-fram-non} ----------------------------------------------------------------------- It finally remains to define the white noise driven terms in (\[eq:PE2Dreform\]). To begin we fix a stochastic basis $\mathcal{S}:= (\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, \mathbb{P}, \{W^k\}_{k \geq 1})$, that is a filtered probability space with $\{W^k\}_{k \geq 1}$ a sequence of independent standard 1-D Brownian motions relative to the filtration $\mathcal{F}_t$.
In order to avoid unnecessary complications below we may assume that $\mathcal{F}_t$ is complete and right continuous (see [@ZabczykDaPrato1]). Fix a separable Hilbert space $\mathfrak{U}$ with an associated orthonormal basis $\{e_k\}$. We may formally define $W$ by taking $W = \sum_{k} W^k e_k$. As such $W$ is a cylindrical Brownian motion evolving over $\mathfrak{U}$. We next recall some basic definitions and properties of spaces of Hilbert-Schmidt operators. For this purpose we suppose that $X$ and $\tilde{X}$ are any separable Hilbert spaces with the associated norms and inner products given by $| \cdot |_X$, $| \cdot |_{\tilde{X}}$ and $\langle \cdot, \cdot \rangle_{X}$, $\langle \cdot, \cdot \rangle_{\tilde{X}}$, respectively. We denote by $$L_2(\mathfrak{U}, X) = \{ R \in \mathcal{L}(\mathfrak{U},X): \sum_k |Re_k|^2_{X} < \infty \},$$ the collection of Hilbert-Schmidt operators from $\mathfrak{U}$ to $X$. By endowing this collection with the inner product $$\langle R, S \rangle_{L_2(\mathfrak{U}, X)} = \sum_k \langle R e_k, S e_k \rangle_X,$$ we may consider $L_2(\mathfrak{U},X)$ as itself being a Hilbert space. One may readily show that if $R^{(1)} \in L_2(\mathfrak{U}, X)$ and $R^{(2)} \in \mathcal{L}(X,\tilde{X})$ then indeed $R^{(2)}R^{(1)} \in L_2(\mathfrak{U}, \tilde{X})$. Given an $L_{2}(\mathfrak{U}, X)$-valued predictable[^4] process $G \in L^{2}(\Omega; L^{2}_{loc}([0, \infty),L_{2}(\mathfrak{U}, X)))$ one may define the (Itō) stochastic integral $$M_{t} := \int_{0}^{t} G dW = \sum_k \int_0^t G_k dW^k,$$ as a square integrable function from $\Omega$ into $X$. Furthermore $M_t$ is an element of $\mathcal{M}^2_X$, that is the space of all $X$-valued square integrable martingales (see [@PrevotRockner Section 2.2, 2.3]), and, as such, $\{M_t \}_{t \geq 0}$ has many desirable properties.
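As an illustrative aside (not part of the analysis), the defining identity behind these objects, the Itō isometry $\mathbb{E} |\int_0^t G \, dW|_X^2 = t \, |G|^2_{L_2(\mathfrak{U},X)}$ for a constant operator $G$, can be checked by Monte Carlo once the cylindrical expansion is truncated. The truncation level, dimensions, and sample size below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant finite-rank operator G, represented on the first K basis
# directions e_1, ..., e_K of the cylindrical space U by a dX-by-K matrix.
K, dX, t = 6, 4, 2.0
G = rng.standard_normal((dX, K))

# Hilbert-Schmidt norm: sum_k |G e_k|_X^2 is the squared Frobenius norm.
hs_norm_sq = float(np.sum(G**2))

# int_0^t G dW = sum_k (G e_k) W^k_t, with W^k_t ~ N(0, t) independent.
n_samples = 20000
Wt = np.sqrt(t) * rng.standard_normal((n_samples, K))
samples = Wt @ G.T            # rows: samples of the X-valued Ito integral

# Ito isometry: E |int_0^t G dW|_X^2 = t * |G|^2_{L_2(U,X)}.
emp = float(np.mean(np.sum(samples**2, axis=1)))
rel_err = abs(emp - t * hs_norm_sq) / (t * hs_norm_sq)
assert rel_err < 0.05
```

For time-dependent, adapted integrands the same identity holds with $t\,|G|^2$ replaced by $\mathbb{E}\int_0^t |G|^2_{L_2(\mathfrak{U},X)}\,ds$, which is the quantity appearing on the right of the BDG inequality below.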
Most notably the Burkholder-Davis-Gundy (BDG) inequality holds, which in our context takes the form $$\label{eq:BDG} {\mathbb{E}}\left(\sup_{ t' \in [0,t]} \left| \int_0^{t'} G dW \right|_X \right) \leq c\enspace {\mathbb{E}}\left( \int_0^{t} |G|_{L_2(\mathfrak{U}, X)}^2 ds \right)^{1/2},$$ for any $t > 0$, where $c$ is here an absolute constant. Given any Banach spaces $\mathcal{X}$ and $\mathcal{Y}$ we denote by $Bnd_u(\mathcal{X}, \mathcal{Y})$ the collection of all mappings $$\Psi: \Omega \times [0, \infty) \times \mathcal{X} \rightarrow \mathcal{Y},$$ such that $\Psi$ is almost surely continuous in $[0,\infty) \times \mathcal{X}$ and $$\| \Psi(x) \|_{\mathcal{Y}} \leq c(1 + \|x\|_{\mathcal{X}}), \quad x \in \mathcal{X},$$ where the numerical constant $c$ may be chosen independently of $t$ and $\omega$. If in addition $$\| \Psi(x) - \Psi(y)\|_{\mathcal{Y}} \leq c \|x - y \|_{\mathcal{X}}, \quad x, y \in \mathcal{X},$$ we say that $\Psi$ is in $Lip_u(\mathcal{X}, \mathcal{Y})$. With these notations now in place we define $$\label{eq:Hdef} \sigma(U) = \left( \begin{array}{c} Q \sigma_u(\mathbf{v},T)\\ \sigma_v(\mathbf{v},T)\\ \sigma_T(\mathbf{v},T) \end{array} \right).$$ We shall assume throughout this work that $$\sigma: \Omega \times [0, \infty) \times H \rightarrow L_2(\mathfrak{U},H)$$ is such that $$\label{eq:measurablityConditions} \begin{split} \textrm{If }U \textrm{ is an } &H\textrm{-valued, predictable process, then}\\ & \sigma(U) \textrm{ is an $L_2(\mathfrak{U},H)$-valued, predictable process,} \end{split}$$ and $$\label{eq:lipCond} \sigma \in Lip_u(H, L_2(\mathfrak{U}, H)) \cap Lip_u(V, L_2(\mathfrak{U}, V)) \cap Bnd_u(V, L_2(\mathfrak{U}, D(A))).$$ Note that under the conditions imposed above the stochastic integral $\int_0^\tau \sigma(U) dW$ may be shown to be well defined, taking values in $H$, for any $H$-valued predictable $U \in L^2(\Omega, L^2_{loc}([0,\infty); H))$.
Denoting $\sigma^k(\cdot) = \sigma(\cdot) e_k$ we may interpret this integral in the expansion[^5] $$ \begin{split} \int_0^t \sigma(U) dW = \sum_{k \geq 1} \int_0^t \sigma^k(U) dW^k = \sum_{k \geq 1} \left( \begin{split} \int_0^t Q\sigma_{u}^k(U) dW^k,\\ \int_0^t \sigma_{v}^k(U) dW^k,\\ \int_0^t \sigma_{T}^k(U) dW^k\\ \end{split} \right).\\ \end{split}$$ \[rmk:NoiseCond\] The condition (\[eq:lipCond\]) may be weakened to $$\label{eq:lipCondClass} \sigma \in Lip_u(H, L_2(\mathfrak{U}, H)) \cap Lip_u(V, L_2(\mathfrak{U}, V)) \cap Bnd_u(D(A), L_2(\mathfrak{U}, D(A)))$$ in the proof of local and maximal existence of solutions below (see Proposition \[thm:MaxExist\]). However, for the proof of global existence of solutions we need the stronger condition (\[eq:lipCond\]). See Remark \[rmk:BndMods\] below for further details. Even with this more restrictive condition the theory covers a physically interesting class of additive and nonlinear multiplicative stochastic forcing regimes relevant to the 'parametrization' problem discussed in the Introduction. We refer the interested reader to [@GlattHoltzTemamTribbia1] for further details and examples. For the external forcing terms $F_u, F_v, F_T$ we let: $$F = \left( \begin{array}{c} Q F_u\\ F_v\\ F_T \end{array} \right).$$ We assume throughout the analysis below that $F$ is an $H$-valued, predictable process with $$\label{eq:SizeConditiononF} F \in L^2( \Omega; L^2_{loc} ([0,\infty), H)).$$ We shall allow for the case of probabilistic dependence in the initial data $U_{0} = (u_{0}, v_{0}, T_{0})$ as well.
Specifically we assume that $$\label{eq:dataCond} U_{0} \in L^{2}(\Omega; V) \textrm{ and is } \mathcal{F}_{0} \textrm{-measurable}.$$ Definition of solutions {#sec:definition-solutions} ----------------------- With the abstract mathematical definitions for each term in the original system now in hand we may reformulate (\[eq:PE2Dreform\]) as an abstract evolution equation $$\label{eq:mainSystemAbsDiffForm} \begin{split} d U + (AU + N(U)) dt &= F dt + \sigma(U) dW,\\ U(0) &= U_0. \end{split}$$ More precisely we have the following basic notion of local and global pathwise solutions to the above system. \[def:solutionNot\] Let $\mathcal{S} = (\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, \mathbb{P}, W)$ be a fixed stochastic basis. Assume that $F$ is as in (\[eq:SizeConditiononF\]), that $U_{0}$ satisfies (\[eq:dataCond\]) and that $\sigma$ satisfies (\[eq:measurablityConditions\]), (\[eq:lipCond\]). - A pair $(U, \tau)$ is *a local strong (pathwise) solution* of (\[eq:mainSystemAbsDiffForm\]) if $\tau$ is a strictly positive stopping time and $U(\cdot \wedge \tau)$ is an $\mathcal{F}_{t}$-adapted process in $H$ so that $$\label{eq:Uregularity} \begin{split} U(\cdot \wedge \tau) \in L^2(\Omega; C ([0,\infty); V)),\\ U {1 \! \! 1_{t \leq \tau}} \in L^2(\Omega; L^2_{loc}([0,\infty); D(A))), \end{split}$$ and satisfies, for every $t \geq 0$ and every $\tilde{U} \in H$, $$\label{eq:spdeAbstrac} \begin{split} \langle U(t \wedge &\tau), \tilde{U} \rangle + \int_0^{t \wedge \tau} \langle A U + N(U), \tilde{U} \rangle ds\\ &= \langle U_{0}, \tilde{U} \rangle + \int_0^{t \wedge \tau} \langle F, \tilde{U} \rangle ds + \int_0^{t \wedge \tau} \langle \sigma(U), \tilde{U} \rangle dW. \end{split}$$ - Strong solutions of (\[eq:mainSystemAbsDiffForm\]) are said to be *(pathwise) unique* up to a stopping time $\tau > 0$ if given any pair of strong solutions $(U^1,\tau)$, $(U^2,\tau)$ which coincide at $t = 0$ on $\tilde{\Omega} = \{U^1(0) = U^2(0)\}$, then $${\mathbb{P}}\left( {1 \! \!
1_{\tilde{\Omega}}} ( U^1(t \wedge \tau) - U^2(t \wedge \tau)) = 0; \forall t \geq 0 \right) = 1.$$ - Suppose that $\{\tau_n\}_{n\geq 1}$ is a strictly increasing sequence of stopping times converging to a (possibly infinite) stopping time $\xi$ and assume that $U$ is a continuous $\mathcal{F}_{t}$-adapted process in $H$. We say that the triple $(U,\xi, \{\tau_n\}_{n\geq 1} )$ is *a maximal strong solution* if $(U, \tau_n)$ is a local strong solution for each $n$ and $$\label{eq:FiniteTimeBlowUp} \sup_{t \in [0, \xi]} \|U\|^2 + \int_0^{\xi} |A U|^2 ds = \infty$$ almost surely on the set $\{\xi < \infty\}$. - If $(U, \xi, \{\tau_n\}_{n\geq 1} )$ is a maximal strong solution and $\xi = \infty$ a.s. then we say that the solution is global. We now have a complete mathematical framework and may state, in precise terms, the main theorem in this work: \[thm:MainReslt\] Suppose that the conditions imposed in Definition \[def:solutionNot\] hold. Then there exists a unique global solution $U$ of (\[eq:mainSystemAbsDiffForm\]). Local and Maximal Existence and Uniqueness ========================================== The proof of local and maximal existence of solutions for (\[eq:mainSystemAbsDiffForm\]) makes use of techniques developed for the 3D Navier-Stokes Equations [@GlattHoltzZiane2]. Since the analysis here is very similar on many points to [@GlattHoltzZiane2] our treatment will be brief in some details. However, one crucial step, to show that the Galerkin approximations associated to (\[eq:mainSystemAbsDiffForm\]) are Cauchy (in appropriate spaces), is quite delicate. This is due to stray terms that arise from the discretization which must be controlled. See Proposition \[thm:CompEst\] below. \[thm:MaxExist\] Suppose that $U_0$, $F$ satisfy the conditions imposed in Definition \[def:solutionNot\]. For $\sigma$ we assume (\[eq:measurablityConditions\]) and may weaken (\[eq:lipCond\]) to (\[eq:lipCondClass\]).
Then there exists a unique maximal strong solution $(U,\xi)$ for (\[eq:mainSystemAbsDiffForm\]). Moreover, for any (deterministic) $t > 0$, $$\label{eq:weakBnds} \mathbb{E} \left(\sup_{0 \leq t' \leq \xi \wedge t} |U|^2 + \int_0^{\xi \wedge t} \|U\|^2 dt' \right) < \infty.$$ The first step in the proof, to establish certain Cauchy estimates for the Galerkin approximations of (\[eq:mainSystemAbsDiffForm\]), is carried out in Proposition \[thm:CompEst\]. For the details of the passage to the limit we refer the reader to [@GlattHoltzZiane2 Proposition 4.2] and the remarks thereafter. To establish local, pathwise, uniqueness in the sense of Definition \[def:solutionNot\] we note that the estimate (\[eq:3dtypeEstimateFor2DPE\]) for $B$ (in dimension 2) is the same as may be achieved for the Navier-Stokes non-linearity in $d =3$. The proof is therefore identical to [@GlattHoltzZiane2 Proposition 4.1]. With a local strong solution in hand it remains to extend this solution to a maximal existence time $\xi$ as in Definition \[def:solutionNot\], (iii). For this point we may employ an argument going back to [@Jacod1]. For a more recent treatment see [@GlattHoltzZiane2 Lemma 4.1, 4.2, Theorem 4.1]. Since we have the cancellation property in $B$ (Lemma \[thm:Best\], (i)), the bound on the weak norms up to a possible finite time blow up (\[eq:weakBnds\]) may be established exactly as in [@GlattHoltzZiane2 Lemma 4.2]. Local Cauchy Estimates for the Galerkin System {#sec:CompEst} ---------------------------------------------- We turn now to the task of estimating the difference of solutions of the Galerkin system associated to (\[eq:mainSystemAbsDiffForm\]) at different orders. We begin by recalling some definitions. An $\mathcal{F}_{t}$-adapted process $U^{(n)} \in L^2(\Omega,C([0,\infty); H_n))$ is a solution of the Galerkin system of order $n$ for (\[eq:mainSystemAbsDiffForm\]) if it satisfies: $$\label{eq:galerkinSystem} \begin{split} d U^{(n)} + (A U^{(n)} + P_n N(U^{(n)}))dt &= P_{n}F dt + P_{n} \sigma (U^{(n)}) dW,\\ U^{(n)}(0) &= P_nU_0.
\end{split}$$ Note that by the standard theory of stochastic ordinary differential equations one may establish the global existence of a unique solution $U^{(n)}$ at each order. See e.g. [@Flandoli1] for details. \[thm:CompEst\] Let $\{U^{(n)}\}_{n \geq 1}$ be the (global) solutions of the Galerkin systems (\[eq:galerkinSystem\]) and suppose that there exists a deterministic constant $M$ such that $$\label{eq:uniformDataBnd} \|U_{0}\|^2 \leq M \quad a.s.$$ Then - there exists a stopping time $\tau$, with $\tau > 0$, a subsequence $n_j$ and a process $U$ almost surely in $C([0,\infty); V) \cap L^{2}_{loc}([0,\infty); D(A))$ such that: $$\label{eq:StrongSubConv} \lim_{j \rightarrow \infty} \sup_{t \in [0,\tau]} \|U^{(n_j)} - U\|^2 + \int_0^\tau | A (U^{(n_j)} - U)|^2ds = 0,$$ almost surely. - for any $p \geq 1$, there exists a sequence of $\Omega_{n_j} \in \mathcal{F}_0$, with $\Omega_{n_j} \uparrow \Omega$, such that: $$\label{eq:UniformBnd} \sup_j \mathbb{E} \left[{1 \! \! 1_{\Omega_{n_j}}} \left( \sup_{t \in [0,\tau]} \|U^{(n_j)}\|^2 + \int_0^\tau | A U^{(n_j)}|^2 ds \right)^{p/2} \right]< \infty$$ and $$\label{eq:MeanBndOnCandidateSoln} \mathbb{E}\left(\sup_{t \in [0,\tau]} \|U\|^2 + \int_0^\tau |AU|^2 ds \right)^{p/2} < \infty.$$ \[r3.1\] The technical condition (\[eq:uniformDataBnd\]) is needed so that we may obtain the uniform pathwise bound: $$\label{eq:UniformPathwiseBndStrayTm} \begin{split} \sup_{m,n} \underset{\omega \in \Omega}{\operatorname{ess\,sup}} \left( \sup_{0 \leq t' \leq \tau^{M}_{m,n}} \|U^{(m)}\|^{2} + \int_{0}^{\tau^{M}_{m,n}}(1 + |AU^{(m)}|^{2}) ds\right) &< \infty.\\ \end{split}$$ See (\[eq:UniformLocalExistTms\]) and (\[e3.8a\]) below. Note however that this condition may be removed in the final step of the proof of the local existence. See [@GlattHoltzZiane2 Proposition 4.2]. As in previous work [@GlattHoltzZiane2], the proof consists in establishing the sufficient conditions (\[eq:LocalCauchyCriteria\]) and (\[eq:GrowthCond2\]) for [@GlattHoltzZiane2 Lemma 5.1] (see also related results in [@MikuleviciusRozovskii2]), from which (i) and (ii) follow directly.
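As an illustrative aside before the proof, the energy mechanism exploited for the Galerkin systems, namely a dissipative linear part together with a nonlinearity that cancels in the energy pairing, can be reproduced in a toy finite-dimensional SDE. The system below is hypothetical (its coefficients and dimensions are not derived from the equations of this work); it only shares the structural feature $\langle N(U), U \rangle = 0$ that yields mean energy bounds of the type (\[eq:weakBnds\]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-mode SDE: dU = (-Lam U + N(U)) dt + sig dW, with a
# rotation-type nonlinearity satisfying <N(U), U> = 0 pointwise.
Lam = np.array([1.0, 2.0])   # eigenvalues of the dissipative linear part
sig = 0.3

def N(U):
    """Energy-neutral nonlinearity, vectorized over paths (shape (paths, 2))."""
    s = U[:, 0] * U[:, 1]
    return np.stack([U[:, 1] * s, -U[:, 0] * s], axis=1)

dt, T_end, n_paths = 1e-3, 1.0, 2000
n_steps = int(T_end / dt)
U = np.tile(np.array([1.0, 0.5]), (n_paths, 1))
E0 = float(np.sum(U[0] ** 2))

for _ in range(n_steps):  # Euler-Maruyama
    U = U + dt * (-U * Lam + N(U)) + sig * np.sqrt(dt) * rng.standard_normal(U.shape)

# Ito gives d E|U|^2 = (-2 E<Lam U, U> + 2 sig^2) dt <= 2 sig^2 dt, so
# E|U_t|^2 <= |U_0|^2 + 2 sig^2 t: the nonlinearity drops out entirely.
mean_energy = float(np.mean(np.sum(U**2, axis=1)))
assert mean_energy <= E0 + 2 * sig**2 * T_end + 0.1
```

The same cancellation is what allows the weak-norm bound for the infinite-dimensional system to be closed without any structural control on the nonlinearity beyond property (i).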
The proof makes use of some delicate estimates, present even in the deterministic case ($\sigma \equiv 0$), that have been carried out in a separate work [@GlattHoltzTemam1]. We assume with no loss of generality that $M > 1$ and consider the stopping times $$\label{eq:UniformLocalExistTms} \tau^{M}_{n} = \inf \left\{ t \geq 0: \sup_{t' \in [0,t]} \|U^{(n)}\|^2 + \int_0^t |A U^{(n)}|^2 dt' > 4 M \right\}.$$ Note that (\[eq:UniformLocalExistTms\]) implies that $$\label{e3.8a} \sup_{t' \in [0,t]} \|U^{(n)}\|^2 + \int^t_0 |AU^{(n)}|^2dt'\leq 4M, \text{ for } 0\leq t < \tau^M_n.$$ We set $\tau^{M}_{m,n} := \tau^{M}_{n} \wedge \tau^{M}_{m}$. The first step in the proof is to perform estimates on $U^{(m)} - U^{(n)}$, which we denote by $R^{(m,n)}$ to simplify the notation below. We will show that $$\label{eq:LocalCauchyCriteria} \lim_{n \rightarrow \infty} \sup_{m > n} \mathbb{E} \left( \sup_{0 \leq t' \leq \tau_{m,n}^M} \|R^{(m,n)} \|^2 + \int_0^{\tau_{m,n}^M }| A R^{(m,n)}|^2 dt \right) =0 ,$$ which is the first condition required for [@GlattHoltzZiane2 Lemma 5.1]. We fix $m > n$, subtract the equations for $m, n$, then apply $A^{1/2}$ to the resulting system. Note that $D(A^{1/2}) = V$ with $\|U\| = |A^{1/2}U|$. By the Itō lemma we may also infer that $$\label{eq:diffH1Differential} \begin{split} d \| R^{(m,n)}\|^2 + & 2| A R^{(m,n)} |^2 dt \\ = &- 2\langle P_{m} N(U^{(m)}) - P_{n} N(U^{(n)}), A R^{(m,n)} \rangle dt\\ &+ 2 \langle P_{m}^{n} F, A R^{(m,n)}\rangle dt\\ &+ \| P_m\sigma(U^{(m)}) - P_n\sigma(U^{(n)}) \|^2_{L_2(\mathfrak{U},V)} dt\\ &+ 2 \langle P_m \sigma(U^{(m)}) - P_n \sigma(U^{(n)}), A R^{(m,n)}\rangle dW.\\ \end{split}$$ We now estimate each of the terms above with a view to finally applying a stochastic analogue of the Gronwall inequality, [@GlattHoltzZiane2 Lemma 5.3]. With this in mind fix any pair of stopping times $\tau_a, \tau_b$ such that $0 \leq \tau_a \leq \tau_b \leq \tau^M_{m,n}$.
By integrating the above system, taking a supremum over the random interval $[\tau_a, \tau_b]$ and finally taking expected values we may infer that $$\label{eq:mainInequality} \begin{split} {\mathbb{E}}\Biggl( \sup_{t \in [\tau_a, \tau_b]}&\| R^{(m,n)} \|^2 + \int_{\tau_a}^{\tau_b} | AR^{(m,n)} |^2 dt \Biggr)\\ \leq& c{\mathbb{E}}\|R^{(m,n)} (\tau_a)\|^2 + c {\mathbb{E}}\int_{\tau_a}^{\tau_b} |\langle (P_{m} - P_{n}) F, A R^{(m,n)} \rangle| dt\\ &+ c {\mathbb{E}}\int_{\tau_a}^{\tau_b} |\langle P_{m}N(U^{(m)}) -P_{n}N(U^{(n)}), A R^{(m,n)} \rangle| dt\\ &+ c{\mathbb{E}}\int_{\tau_a}^{\tau_b}\| P_m\sigma(U^{(m)}) - P_n\sigma(U^{(n)}) \|^2_{L_2(\mathfrak{U},V)} dt\\ &+ c {\mathbb{E}}\sup_{t \in [\tau_a,\tau_b]} \left|\int_{\tau_a}^t\langle P_m \sigma(U^{(m)}) - P_n \sigma(U^{(n)}), A R^{(m,n)} \rangle dW\right|.\\ \end{split}$$ We begin by addressing the 'deterministic portions' of (\[eq:mainInequality\]). Using the equivalence of fractional order spaces and the generalized Poincaré inequality, it is shown in [@GlattHoltzTemam1] (see Theorem 3.1 of [@GlattHoltzTemam1]) that: $$\label{eq:DetCauchyEstSummary} \begin{split} |\langle P_{m}N(U^{(m)}) -& P_{n}N(U^{(n)}),A R^{(m,n)}\rangle|\\ \leq& \frac{1}{2} |A R^{(m,n)}|^{2} + c(1+ |AU^{(m)}|^2+ \|U^{(n)}\|^4) \|R^{(m,n)}\|^2\\ &+\frac{c}{\lambda_n^{1/4}} (1+ \|U^{(n)}\|^2) (1 + |AU^{(n)}|^2). \end{split}$$ We next consider the terms which arise only in the stochastic context.
The Itō correction term may be estimated according to $$\label{eq:itoCorectionComp} \begin{split} \| P_m \sigma(U^{(m)}) &- P_n \sigma(U^{(n)})\|^2_{L_2(\mathfrak{U}, V)}\\ \leq& c \left( \| \sigma(U^{(m)}) - \sigma(U^{(n)})\|^2_{L_2(\mathfrak{U}, V)} + \| Q_n \sigma(U^{(n)}) \|^2_{L_2(\mathfrak{U}, V)} \right)\\ \leq& c(\|R^{m,n}\|^2 + \frac{1}{\lambda_n} | A \sigma(U^{(n)}) |^2_{L_2(\mathfrak{U}, H)})\\ \leq& c \left(\|R^{m,n}\|^2 + \frac{1}{\lambda_n}(1 + |AU^{(n)}|^2)\right). \end{split}$$ For the second inequality we have made use of the generalized Poincaré inequality[^6]. The final inequality follows from (\[eq:lipCondClass\]). For the stochastic integral terms we apply (\[eq:BDG\]) and deduce $$\label{eq:stochasticIntCompEstBDG} \begin{split} \mathbb{E} & \sup_{\tau_a \leq t' \leq \tau_b} \left|\int_{\tau_a}^{t'} \langle P_m\sigma(U^{(m)}) - P_n \sigma(U^{(n)}) , A R^{(m,n)} \rangle dW \right|\\ \leq& c \mathbb{E} \left( \int_{\tau_a}^{\tau_b} \langle P_m\sigma(U^{(m)}) - P_n \sigma(U^{(n)}) , A R^{(m,n)} \rangle_{L_2(\mathfrak{U}, H)}^2 dt'\right)^{1/2}\\ \leq& c \mathbb{E} \left( \int_{\tau_a}^{\tau_b} \| P_m\sigma(U^{(m)}) - P_n \sigma(U^{(n)})\|^2_{L_2(\mathfrak{U}, V)} \| R^{(m,n)} \|^2 dt'\right)^{1/2}\\ \leq& c \mathbb{E} \left(\sup_{t \in [\tau_a, \tau_b]} \| R^{(m,n)} \| \right.\\ &\quad \quad \quad\left. \cdot \left( \int_{\tau_a}^{\tau_b} \| P_m\sigma(U^{(m)}) - P_n \sigma(U^{(n)})\|^2_{L_2(\mathfrak{U}, V)} dt'\right)^{1/2} \right)\\ \leq& \frac{1}{2} \mathbb{E} \left( \sup_{t \in [\tau_a, \tau_b]} \| R^{(m,n)} \|^2 \right) \\ &+ c {\mathbb{E}}\left( \int_{\tau_a}^{\tau_b} ( \|R^{(m,n)}\|^2 + \frac{1}{\lambda_n}(1 + |AU^{(n)}|^2) ) dt'\right).\\ \end{split}$$ The last inequality is achieved by applying the Schwarz inequality and then (\[eq:itoCorectionComp\]). We now gather the estimates (\[eq:DetCauchyEstSummary\]), (\[eq:itoCorectionComp\]) and (\[eq:stochasticIntCompEstBDG\]), and compare with (\[eq:mainInequality\]).
Since $0 \leq \tau_a \leq \tau_b \leq \tau^M_{m,n}$ we conclude, using (\[e3.8a\]), that $$\label{eq:FinalInequalityConclusion} \begin{split} {\mathbb{E}}\Biggl( &\sup_{t \in [\tau_a, \tau_b]}\| R^{(m,n)} \|^2 + \int_{\tau_a}^{\tau_b} | A R^{(m,n)} |^2 dt \Biggr)\\ \leq& c\ {\mathbb{E}}\|R^{(m,n)}(\tau_a) \|^2 \\ &+c\ {\mathbb{E}}\int_{\tau_a}^{\tau_b} \left( (1 + |AU^{(m)}|^2) \|R^{(m,n)} \|^2+ \frac{1}{\lambda_n^{1/4}} (1+ |AU^{(n)}|^2) + | Q_n F|^2 \right)dt. \end{split}$$ Observe that the generic constant $c$ is independent of $m,n$ and that (\[eq:LocalCauchyCriteria\]) now follows from the stochastic Gronwall lemma. It remains to establish the other requirement of [@GlattHoltzZiane2 Lemma 5.1]. In the present context this translates to $$\label{eq:GrowthCond2} \lim_{\delta \rightarrow 0} \sup_{n} {\mathbb{P}}\left( \sup_{0 \leq t' \leq \tau^M_n \wedge \delta} \|U^{(n)}\|^2 + \int_0^{\tau^M_n \wedge \delta} | AU^{(n)}|^2 dt' > \tilde{M} \right) = 0,$$ for every $\tilde{M} > M$. By applying Itō we infer an equation for $t \mapsto \|U^{(n)}(t)\|^2$ very similar to (\[eq:AhalfUdifferential\]), below. Since, as for the Navier-Stokes system in $d = 3$ (see (\[eq:strongTypeEstimate\])) $$|\langle B(U), AU \rangle| \leq c \|U\|^{3/2} |AU|^{3/2}, \quad U \in D(A),$$ and since the $A_p$ and $E$ terms are lower order (see (\[eq:estimateAloworder\]), (\[eq:CorbndOperator\])), we may establish (\[eq:GrowthCond2\]) with a direct application of Doob’s inequality exactly as in [@GlattHoltzZiane2 Proposition 3.1]. With (\[eq:LocalCauchyCriteria\]) and (\[eq:GrowthCond2\]), the proof is complete. Global Existence ================ We now implement a series of anisotropic estimates that are used to infer global existence. Due to the non-commutativity introduced by the physical boundary conditions we must first define a new variable $\hat{U}$ that satisfies a system obeying the rules of ordinary calculus. We are then able to derive suitable estimates for $\hat{u}_z$ and then $\hat{u}_x$ and finally for the entire original system in $V$.
Since the resulting estimates yield only pathwise (rather than moment) bounds we must finally have recourse to some involved stopping time arguments which make essential use of Lemma \[thm:stArg1\]. A Change of Variable to a Random PDE and some Auxiliary Estimates {#sec:change-vari-rand} ----------------------------------------------------------------- We consider the linear stochastic partial differential equation \[eq:PE2DLinearStoch\] $$\begin{gathered} {\partial_{t}} \check{u} - \nu \Delta \check{u} + {\partial_{x}} \check{p}_s ={1 \! \! 1_{t \leq \xi}} \sigma_u(\mathbf{v}, T) \dot{W}_1, \label{eq:PEMoment1LS}\\ {\partial_{t}} \check{v} - \nu \Delta \check{v} ={1 \! \! 1_{t \leq \xi}} \sigma_v(\mathbf{v},T) \dot{W}_2, \label{eq:PEMoment2LS}\\ {\partial_{t}}\check{T} - \mu \Delta \check{T} = {1 \! \! 1_{t \leq \xi}} \sigma_T(\mathbf{v},T) \dot{W}_3, \label{eq:PETempCoupleLS} \end{gathered}$$ with $\xi$ as in Proposition \[thm:MaxExist\]. This system is supplemented with the same boundary conditions as the original system. We posit the zero initial condition $\check{u}(0) = \check{v}(0) = \check{T}(0) = 0$. Note that the stochastic forcing terms depend on $(U,\xi) = ((\mathbf{v},T), \xi)$, the maximal strong solution we found for (\[eq:PE2DBasic\]) in Proposition \[thm:MaxExist\]; $\sigma$ is exactly the same as appearing in (\[eq:PE2Dreform\]) and in particular satisfies (\[eq:lipCond\]). As in Section \[sec:definition-solutions\], (\[eq:PE2DLinearStoch\]) may be formulated in an abstract form: $$\label{eq:auxLinSystemAbs} d \check{U} + A \check{U} dt = {1 \! \! 1_{t \leq \xi}} \sigma(U)dW, \quad \check{U}(0) = 0.$$ We shall need the following preliminary estimates for $\check{U}$. \[thm:AuxSystemEst\] There exists a unique global pathwise strong solution of (\[eq:auxLinSystemAbs\]) taking its values in $D(A)$.
Additionally for any deterministic finite time $t > 0$, we have $$\label{eq:checkStrongEstimate} \mathbb{E} \left( \sup_{t' \in [0, t]} | A\check{U}|^2 \right) < \infty.$$ We briefly outline the formal estimates that lead to (\[eq:checkStrongEstimate\]). Since (\[eq:auxLinSystemAbs\]) is linear in the unknown, everything, including the global existence, may be easily justified with a suitable Galerkin scheme (see e.g. [@Flandoli1]). Formally then we multiply (\[eq:auxLinSystemAbs\]) by $A$ and apply the Itō lemma in $H$ to deduce $$\label{eq:H2EqnforcheckU} \begin{split} d | A \check{U}|^2 + 2 &|A^{3/2} \check{U}|^2 dt\\ =& 2 {1 \! \! 1_{t \leq \xi}}\langle A \sigma(U), A \check{U} \rangle dW + {1 \! \! 1_{t \leq \xi}} | A\sigma(U) |_{L_2(\mathfrak{U}, H)}^2 dt.\\ \end{split}$$ Fixing arbitrary $t > 0$ and taking a supremum over $t' \leq t$ and then expected values we infer from (\[eq:H2EqnforcheckU\]) and the fact that $\check{U}(0) = 0$, $$\label{eq:tildeUverystongEstimates} \begin{split} \mathbb{E}& \left( \sup_{t' \in [0, t]}| A\check{U} |^2 \right)\\ &\leq \mathbb{E} \sup_{t' \in [0, t]} \left| \int_0^{t' \wedge\xi}\langle A\sigma(U), A \check{U} \rangle dW \right|+ \mathbb{E}\int_0^{t \wedge\xi}| \sigma(U) |_{L_2(\mathfrak{U},D(A))}^2 dt'\\ &\leq \frac{1}{2} \mathbb{E} \left( \sup_{t' \in [0, t]}| A\check{U}|^2 \right) + c \mathbb{E}\int_0^{t \wedge\xi}| \sigma(U) |_{L_2(\mathfrak{U}, D(A))}^2 dt'\\ &\leq \frac{1}{2} \mathbb{E} \left( \sup_{t' \in [0, t]}| A\check{U}|^2 \right) +c\mathbb{E} \int_0^{t \wedge \xi} (1+ \|U \|^2) dt'. \end{split}$$ For the stochastic integral terms after the first inequality we apply (\[eq:BDG\]) and then estimate in a similar manner to (\[eq:stochasticIntCompEstBDG\]). The final inequality is a consequence of the assumption (\[eq:lipCond\]) imposed on $\sigma$. To complete the proof we rearrange and refer to (\[eq:weakBnds\]) in Proposition \[thm:MaxExist\] to conclude (\[eq:checkStrongEstimate\]). We next subtract (\[eq:auxLinSystemAbs\]) from (\[eq:mainSystemAbsDiffForm\]) and define $\hat{U} = U - \check{U}$.
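As an illustrative aside, the effect of this subtraction is most easily seen in a scalar toy model (the coefficients $a$, $b$, $s$ below are illustrative choices, not taken from the system under study): subtracting the linear stochastic solution removes the noise, and the difference solves a random ODE on the same noise path.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar analogue of U -> U_hat = U - U_check:
#   dU     = (-a U + b(U)) dt + s dW          (full equation)
#   dU_chk = -a U_chk dt + s dW, U_chk(0) = 0  (linear stochastic part)
# The difference U_hat = U - U_chk then solves the *random* ODE
#   U_hat' = -a U_hat + b(U_hat + U_chk),
# which obeys ordinary calculus pathwise.
a, s = 1.0, 0.4
b = np.sin

dt, n = 1e-3, 1000
dW = np.sqrt(dt) * rng.standard_normal(n)

U_path, C_path = [1.0], [0.0]
for k in range(n):  # Euler-Maruyama with the SAME noise increments
    U_path.append(U_path[-1] + dt * (-a * U_path[-1] + b(U_path[-1])) + s * dW[k])
    C_path.append(C_path[-1] + dt * (-a * C_path[-1]) + s * dW[k])

# Integrate the noise-free random ODE for U_hat on the same grid ...
U_hat = U_path[0] - C_path[0]
for k in range(n):
    U_hat = U_hat + dt * (-a * U_hat + b(U_hat + C_path[k]))

# ... and it reproduces U - U_chk: the noise cancels in the difference.
assert abs(U_hat - (U_path[-1] - C_path[-1])) < 1e-8
```

The discrete schemes for $U - \check{U}$ and for the random ODE coincide step by step, which is the finite-dimensional shadow of the pathwise calculus available for $\hat{U}$.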
On the random interval $[0, \xi)$ we see that $\hat{U}$ must satisfy the following partial differential equation (without white noise driven forcing but with random coefficients) $$\label{eq:UPDEforzEstAbsPre} \frac{d}{dt} \hat{U} + A \hat{U} + A_p (\hat{U} + \check{U}) + B(\hat{U} + \check{U}) + E(\hat{U} + \check{U}) = F.$$ Note that, in contrast to the stochastic system, this new system satisfies the usual rules of ordinary calculus. We may rewrite it in a form more convenient for our purposes below: $$\label{eq:UPDEforzEstAbs} \begin{split} \frac{d}{dt} \hat{U} + A \hat{U} &+ A_p \hat{U} + B(\hat{U}) + E \hat{U} \\ =& F - B(\check{U},\check{U}) - B(\check{U},\hat{U}) - B(\hat{U},\check{U}) - E\check{U} - A_p \check{U}. \end{split}$$ By combining Lemma \[thm:AuxSystemEst\] with Proposition \[thm:MaxExist\] we may directly infer that \[thm:prelimestRandomSystem\] For any deterministic, finite $t > 0$ we have: $$\label{eq:weakEstUcheck} \mathbb{E} \left( \sup_{0 \leq t' \leq \xi \wedge t} | \hat{U}|^2 + \int_0^{\xi \wedge t} \|\hat{U} \|^2 ds\right) < \infty$$ Finally we note that the first momentum equation included in (\[eq:UPDEforzEstAbs\]), which will be the focus of our attention in the subsequent sections, is given by $$\label{eq:firstMomentumMinusNoise} \begin{split} {\partial_{t}} \hat{u} &+ \hat{u} {\partial_{x}} \hat{u} + w(\hat{u}) {\partial_{z}} \hat{u} - \nu \Delta \hat{u} - f \hat{v} + {\partial_{x}} \hat{p}_s - \beta_T g \rho_0 \int_z^0 {\partial_{x}} \hat{T} d \bar{z} \\ =& F_u + f \check{v} +\beta_T g \rho_0 \int_z^0 {\partial_{x}} \check{T} d \bar{z} \\ &-( \check{u} {\partial_{x}} \check{u} + w(\check{u}) {\partial_{z}} \check{u} ) -( \check{u} {\partial_{x}} \hat{u} + w(\check{u}) {\partial_{z}} \hat{u} ) -( \hat{u} {\partial_{x}} \check{u} + w(\hat{u}) {\partial_{z}} \check{u} )\\ =& F_u + f \check{v} +\beta_T g \rho_0 \int_z^0 {\partial_{x}} \check{T}d \bar{z} - (\tilde{\mathcal{B}}^1(\check{u}, \check{u}) + \tilde{\mathcal{B}}^1(\check{u},\hat{u}) +
\tilde{\mathcal{B}}^1(\hat{u}, \check{u})). \end{split}$$ \[r4.1\] We infer from (\[eq:weakEstUcheck\]) that, $$\label{e4.9a} \sup_{0\leq t'\leq\xi\wedge t} |\hat{U}|^2 + \int^{\xi\wedge t}_0 ||\hat{U}||^2 ds \leq K^1(t,\omega) < \infty$$ where here and below, $K, K^i,$ denote a.s. finite constants which depend on $t$, on the data such as norms of $U_0$ and $F$, and on $\omega$ through these norms and through stochastic integral terms driven by $W$. Anisotropic Estimates {#sec:time-unif-estim-uz} --------------------- We now turn to the estimates for ${\partial_{z}} \hat{u}$. \[thm:uzestimatesLemma\] Let $(U,\xi) = ((u,v,T), \xi)$ be the unique maximal strong solution of guaranteed by Proposition \[thm:MaxExist\]. Then, for every $t >0$ there exists a finite constant $K=K^2(t,\omega) < \infty$ depending on $t,\omega$ and the data such that $$\label{e4.10b} \sup_{0\leq t'\leq\xi\wedge t} |{\partial_{z}} \hat{u} |^2 + \int^{\xi\wedge t}_0\| {\partial_{z}} \hat{u} \|^2ds \leq K^2 \quad a.s.$$ We multiply (\[eq:firstMomentumMinusNoise\]) by $-Q {\partial_{zz}}\hat{u}$ and integrate over the domain $\mathcal{M}$.
Following closely the computations in [@PetcuTemamZiane] we deduce: $$\label{eq:QuzzevolutionEqnRandomPDE} \begin{split} \frac{1}{2} \frac{d}{dt} \bigl( |{\partial_{z}}\hat{u}|^2 &+ \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)}\bigr) + \nu \| {\partial_{z}}\hat{u} \|^2 + \nu \alpha_{\mathbf{v}} |{\partial_{x}}\hat{u}|^2_{L^2(\Gamma_i)}\\ =& |P {\partial_{zz}} \hat{u}|^2 - \int_{\mathcal{M}}F_uQ {\partial_{zz}}\hat{u} \, d\mathcal{M}\\ &- \beta_T g \rho_0 \int_{\mathcal{M}} \left( \int^0_z {\partial_{x}}(\hat{T} + \check{T})d\bar{z} \right)Q{\partial_{zz}}\hat{u} \, d\mathcal{M}\\ &- \int_{\mathcal{M}}f(\hat{v} +\check{v})Q {\partial_{zz}}\hat{u} \, d\mathcal{M}\\ &+ \frac{2}{h} \int_{\mathcal{M}} \hat{u} {\partial_{x}}\hat{u} \left[\alpha_{\mathbf{v}} \hat{u}(0,x) + {\partial_{z}}\hat{u}(x,-h) \right] \, d \mathcal{M}\\ &+ \int_{\mathcal{M}} ( B^1(\check{u}, \check{u}) + B^1(\check{u},\hat{u}) + B^1(\hat{u}, \check{u})){\partial_{zz}}\hat{u} d\mathcal{M}\\ &= J_1 + J_2 + J_3 + J_4 + J_5 + J_6 + J_7 + J_8. \end{split}$$ Here the bottom boundary is flat, which causes several terms present in [@PetcuTemamZiane] to disappear. The term $$-\langle B^1(\hat{u}, \hat{u}), - {\partial_{zz}} \hat{u} \rangle = \int (\hat{u} {\partial_{x}} \hat{u} + w( \hat{u}) {\partial_{z}} \hat{u}) \mathit{Q} {\partial_{zz}}\hat{u} \, d \mathcal{M}$$ largely cancels and appears as $J_5$ due to Lemma \[thm:Best\], (vi) above. Also we observe that $Q {\partial_{x}} \hat{p}_s =0$ which is why we multiply (\[eq:firstMomentumMinusNoise\]) by $-Q{\partial_{zz}}\hat{u}$ rather than $-{\partial_{zz}} \hat{u}$.
The first term $J_{1}$ on the right hand side of (\[eq:QuzzevolutionEqnRandomPDE\]) reduces to two terms at $z=-h$ and $0$ that are estimated using the trace theorem: $$\label{eq:J1Estdz} |J_1| \leq c \|\hat{U} \|^2+ \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2.$$ The estimates for the next three terms are direct: $$\label{eq:J1234Estdz} \begin{split} |J_2| \leq& c |F|^2 + \frac{\nu}{16} \|{\partial_{z}}\hat{u}\|^2,\\ |J_3| \leq & c(\|\hat{U}\|^2 + \|\check{U}\|^2) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2\\ \leq & c(\|\hat{U}\|^2 + |\check{U}|^2_{(2)}) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2,\\ |J_4| \leq & c(\|\hat{U}\|^2 + \|\check{U}\|^2) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2\\ \leq & c(\|\hat{U}\|^2 + |\check{U}|^2_{(2)}) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2.\\ \end{split}$$ For $J_5$ we may estimate, using Young’s inequality, $$\label{eq:J5Estdz} \begin{split} |J_5| \leq& c (|\hat{u}| \| \hat{u} \|^2 +|{\partial_{z}}\hat{u}|^{1/2} \|{\partial_{z}}\hat{u}\|^{1/2} |\hat{u}|^{1/2} \|\hat{u} \|^{3/2})\\ \leq& c (|\hat{U}| \|\hat{U}\|^2 + |{\partial_{z}}\hat{u}|^{2/3} |\hat{U}|^{2/3} \|\hat{U}\|^2) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2\\ \leq& c (|\hat{U}| \|\hat{U}\|^2 + |{\partial_{z}}\hat{u}|^{2} \|\hat{U}\|^2) + \frac{\nu}{16} \|{\partial_{z}}\hat{u} \|^2.\\ \end{split}$$ For $J_6$ we may estimate $$\label{eq:checkchecknonlinearTerm} \begin{split} |J_6| \leq& c (|\check{u}|^{1/2} \| \check{u} \| + \| \check{u} \|^{3/2}) | \check{u} |^{1/2}_{(2)} | {\partial_{zz}} \hat{u} |\\ \leq& c \|\check{u}\| | \check{u} |_{(2)} | {\partial_{zz}} \hat{u} |\\ \leq& c \| \check{U} \|^2 |\check{U}|^2_{(2)} + \frac{\nu}{16} \| {\partial_{z}} \hat{u} \|^2.
\end{split}$$ For $J_7$ we estimate with and : $$\label{eq:J7uzzEasyPart} \begin{split} |J_{7}| \leq& c (|\check{u} |^{1/2} |\check{u} |_{(2)}^{1/2} \| \hat{u}\| +\|\check{u} \|^{1/2} |\check{u} |_{(2)}^{1/2} |{\partial_{z}} \hat{u}|) | {\partial_{zz}} \hat{u} |\\ \leq&c |\check{U} |_{(2)}^2 (\| \hat{U} \|^2 + |{\partial_{z}} \hat{u}|^2) +\frac{\nu}{16} \| {\partial_{z}} \hat{u} \|^2.\\ \end{split}$$ Finally concerning $J_8 = \langle B^1_1(\hat{u}, \check{u}) +B^1_2(\hat{u}, \check{u}),{\partial_{zz}}\hat{u} \rangle := J_{8,1} + J_{8,2} $ we estimate $$\label{eq:termJ8uzzEasyPart} \begin{split} |J_{8,1}| \leq& c | \hat{u} |^{1/2} \|\hat{u} \|^{1/2} \|\check{u} \|^{1/2} |\check{u} |_{(2)}^{1/2} \|{\partial_{z}}\hat{u} \|\\ \leq& c (| \hat{U} |^{2} \|\hat{U} \|^{2} + \|\check{U} \|^{2} |\check{U} |_{(2)}^{2} ) + \frac{\nu}{32}\|{\partial_{z}}\hat{u} \|^2,\\ \end{split}$$ using , and, $$\label{eq:J82BIGPROBLEMHOUSTON} \begin{split} |J_{8,2}| &\leq c \| \hat{u} \| \| \check{u} \|^{1/2} | \check{u} |_{(2)}^{1/2} \|{\partial_{z}}\hat{u}\|\\ &\leq c \|\hat{U}\|^{2} | \check{U} |_{(2)}^2 +\frac{\nu}{32} \|{\partial_{z}}\hat{u}\|^2.\\ \end{split}$$ thanks to (\[eq:genericFirstCompEstL2FuckedComp1\]). 
Collecting the estimates above we may finally observe that $$\label{eq:diffinequalityduz} \begin{split} \frac{d}{dt} \bigl( |{\partial_{z}}&\hat{u}|^2 + \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)}\bigr) + \nu \| {\partial_{z}}\hat{u} \|^2\\ \leq& c( \| \hat{U} \|^2 + |\check{U}|^2_{(2)} ) |{\partial_{z}} \hat{u}|^2\\ &+c (1+ |\hat{U}|^2)\| \hat{U}\|^2 + c(1+ \|\hat{U} \|^2 + \| \check{U} \|^2 ) |\check{U} |^2_{(2)} +c|F|^2.\\ \end{split}$$ We therefore conclude that $$\label{eq:gronwallsetUpdUZ} \begin{split} \frac{d}{dt} \bigl( |{\partial_{z}}\hat{u}|^2 + \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)}\bigr) \leq (|{\partial_{z}} \hat{u}|^2 + \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)}) R_1 + R_2 + C|F|^2, \end{split}$$ where $$\label{eq:GronwallTermsDef} \begin{split} R_1 &:= c\bigl( \| \hat{U} \|^2 + |\check{U}|^2_{(2)} \bigr)\\ R_2 &:= c(1+ |\hat{U}|^2)\| \hat{U}\|^2 + c(1+ \|\hat{U} \|^2 + \| \check{U}\|^2 ) |\check{U} |^2_{(2)} \end{split}$$ and the constants $c$ are as in (\[eq:diffinequalityduz\]). Note that, due to (\[e4.9a\]) and Lemma \[thm:AuxSystemEst\], for all $t > 0$, there exists a constant $K=K(t,\omega)$ such that, $$\label{eq:PathwiseBndsR1R2pdz} \int_{0}^{t\wedge\xi} R_{j} ds \leq K(t,\omega) < \infty \;\; a.s. \quad j = 1,2.$$ The (deterministic) Gronwall inequality now yields $$\label{eq:GronwallInequalUzzEst} \begin{split} \sup_{t' \in [0, \xi \wedge t]} |{\partial_{z}} \hat{u}|^2 &\leq \sup_{t' \in [0, \xi \wedge t]} (|{\partial_{z}} \hat{u}|^2 + \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)})\\ &\leq \exp \left(\int_0^{\xi\wedge t} R_1 dt' \right) \left( | {\partial_{z}} u_0|^2 + \int_0^{\xi\wedge t} (R_2 + C|F|^2 )dt' \right)\\ & \leq K(t, \omega) \left( 1 + \|U_0\|^2 + \int_0^{\xi\wedge t} |F|^2 dt' \right).
\end{split}$$ Finally, returning to (\[eq:diffinequalityduz\]), integrating over $[0, \xi \wedge t]$, and then neglecting the terms $|{\partial_{z}}\hat{u}|^2 + \alpha_{\mathbf{v}} | \hat{u} |^2_{L^2(\Gamma_i)}$ appearing on the left hand side of the resulting expression, we observe that: $$\label{eq:tmIntViscosityTrmUz} \begin{split} \int_0^{\xi\wedge t} \| {\partial_{z}} \hat{u} \|^2 dt' \leq& \|U_0\|^2 + \int_0^{\xi\wedge t} (|{\partial_{z}} \hat{u}|^2 R_1 + R_2 + c|F|^2 )dt'\\ \leq& K(t,\omega). \end{split}$$ Combining (\[eq:GronwallInequalUzzEst\]) and (\[eq:tmIntViscosityTrmUz\]) completes the proof. We next come to the estimates for ${\partial_{x}}\hat{u}$. Here we show \[thm:uxestimatesLemma\] The hypotheses are the same as in Lemma \[thm:uzestimatesLemma\]. Then, for every $t > 0$, there exists a finite constant $K=K^3(t,\omega) < \infty$ depending on $t,\omega$ and the data such that $$\label{e4.24b} \sup_{0 \leq t' \leq \xi\wedge t} |{\partial_{x}} \hat{u} |^2 + \int_0^{\xi\wedge t} \|{\partial_{x}} \hat{u}\|^2 dt' \leq K^3 \quad a.s.$$ The hypotheses being the same as for Lemma \[thm:uzestimatesLemma\], the conclusions of that lemma thus hold, and in particular (\[e4.10b\]).
To determine an evolution equation for $|{\partial_{x}} \hat{u}|$ we multiply (\[eq:firstMomentumMinusNoise\]) by $-{\partial_{xx}} \hat{u}$ and integrate over $\mathcal{M}.$ After some direct manipulations, this yields $$\label{eq:evolutionequationuxhat} \begin{split} \frac{1}{2} \frac{d}{dt} | &{\partial_{x}} \hat{u}|^2 + \nu \| {\partial_{x}} \hat{u}\|^2 + \nu \alpha_{\mathbf{v}} | {\partial_{x}} \hat{u} |^2_{L^2(\Gamma_i)}\\ =& \beta_T g \rho_0 \int_{\mathcal{M}} \left( \int^0_z {\partial_{x}}(\hat{T} + \check{T}) d\bar{z} \right) {\partial_{xx}}\hat{u} \, d\mathcal{M}\\ &- \int_{\mathcal{M}}F_u{\partial_{xx}}\hat{u} \, d\mathcal{M}\\ &- \int_{\mathcal{M}}2f(\hat{v} +\check{v}) {\partial_{xx}}\hat{u} \, d\mathcal{M}\\ &+ \int_{\mathcal{M}} (B^1(\hat{u}, \hat{u}) + B^1(\check{u},\check{u})+ B^1(\check{u},\hat{u}) + B^1(\hat{u}, \check{u})) {\partial_{xx}}\hat{u} \, d\mathcal{M}\\ =& J_1 + J_2 + J_3 + J_4 + J_5 + J_6 + J_7. \end{split}$$ Notice that in this case the pressure term disappears by integration in $z$, since $P {\partial_{xx}} \hat{u} = 0$. As above, the first three terms are direct: $$\label{eq:J1234Estdx} \begin{split} |J_1| \leq& c(\|\hat{U}\|^2 + \|\check{U}\|^2) + \frac{\nu}{14} \|{\partial_{x}}\hat{u} \|^2,\\ |J_2| \leq& c |F|^2 + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2,\\ |J_3| \leq & c (\|\hat{U}\|^2 + \|\check{U}\|^2) + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2.\\ \end{split}$$ We may handle the term $J_4$ as in [@TemamZiane1]; however, we may also directly apply Lemma \[thm:Best\], (\[eq:genericFirstCompEstL2ClassComp2\]), (\[eq:genericFirstCompEstL2FuckedComp1\]) to infer $$\label{eq:nonlinear1Bux} \begin{split} |J_4| \leq& c ( |\hat{u}|^{1/2} \| \hat{u} \|^{1/2} |{\partial_{x}}\hat{u}|^{1/2} \| {\partial_{x}}\hat{u} \|^{3/2} + |{\partial_{x}} \hat{u} | |{\partial_{z}} \hat{u} |^{1/2} \|{\partial_{z}} \hat{u} \|^{1/2} \|{\partial_{x}}\hat{u}\| )\\ \leq&c(|\hat{u}|^{2} \|\hat{u} \|^{2} |{\partial_{x}}\hat{u}|^{2} + |{\partial_{x}} \hat{u} |^2
|{\partial_{z}} \hat{u} | \|{\partial_{z}} \hat{u} \|) + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2\\ \leq&c(|\hat{U}|^2 \|\hat{U} \|^2 + \|{\partial_{z}} \hat{u} \|^2)|{\partial_{x}} \hat{u} |^2 + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2.\\ \end{split}$$ The estimates (\[eq:genericFirstCompEstL2ClassComp1\]) - (\[eq:genericFirstCompEstL2FuckedComp2\]) allow us to treat the remaining terms $J_5, J_6, J_7$ as well. Indeed $$\label{eq:nolinearuxJ5} \begin{split} |J_5| \leq& c ( | \check{u} |^{1/2} | \check{u} |_{(2)}^{1/2} \| \check{u} \| + \| \check{u} \|^{3/2} |\check{u} |_{(2)}^{1/2}) \| {\partial_{x}}\hat{u} \|\\ \leq&c \| \check{U} \|^2 |\check{U}|^2_{(2)} + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2.\\ \end{split}$$ Also $$\label{eq:estimateuxJ6} \begin{split} | J_6| \leq& c ( |\check{u} |^{1/2} | \check{u} |_{(2)}^{1/2} |{\partial_{x}} \hat{u} | + \|\check{u} \|^{1/2} | \check{u} |_{(2)}^{1/2} |{\partial_{z}} \hat{u} | ) \| {\partial_{x}}\hat{u} \|\\ \leq&c |\check{U}|_{(2)}^2 (|{\partial_{x}} \hat{u} |^2 + |{\partial_{z}} \hat{u} |^2) + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2\\ \leq&c \| \hat{U} \|^2 |\check{U}|^2_{(2)} + \frac{\nu}{14} \| {\partial_{x}}\hat{u} \|^2.\\ \end{split}$$ Finally $$\label{eq:estimateuxJ7} \begin{split} |J_7| &\leq c ( | \hat{u} |^{1/2} \| \hat{u} \|^{1/2} \| \check{u} \|^{1/2} | \check{u} |_{(2)}^{1/2} + | {\partial_{x}} \hat{u} | \| \check{u} \|^{1/2} | \check{u} |_{(2)}^{1/2}) \| {\partial_{x}} \hat{u} \|\\ &\leq c \| \check{u} \| | \check{u} |_{(2)} (| \hat{u} | \| \hat{u} \| + | {\partial_{x}} \hat{u} |^2) + \frac{\nu}{14} \| {\partial_{x}} \hat{u} \|^2\\ &\leq c \| \hat{U} \|^2 | \check{U} |_{(2)}^2 + \frac{\nu}{14} \| {\partial_{x}} \hat{u} \|^2.\\ \end{split}$$ Gathering the estimates above, we conclude that: $$\label{eq:estuxsummary} \begin{split} \frac{d}{dt} | {\partial_{x}} \hat{u}|^2 &+ \nu \| {\partial_{x}} \hat{u}\|^2\\ \leq& c(|\hat{U}|^2 \|\hat{U} \|^2 + \|{\partial_{z}} \hat{u} \|^2) | {\partial_{x}} 
\hat{u} |^2 \\ & +c(\|\hat{U}\|^2 + \|\check{U}\|^2 + \| \check{U} \|^2 |\check{U}|^2_{(2)} +\| \hat{U} \|^2 |\check{U}|^2_{(2)} ) + c |F|^2\\ \leq& R_3 | {\partial_{x}} \hat{u} |^2+ R_4 + c|F|^2, \end{split}$$ where $R_3 := c(|\hat{U}|^2 \|\hat{U} \|^2 + \|{\partial_{z}} \hat{u} \|^2)$ and $R_4 := c(\|\hat{U}\|^2 + \|\check{U}\|^2 + \| \check{U} \|^2 |\check{U}|^2_{(2)} +\| \hat{U} \|^2 |\check{U}|^2_{(2)} )$. Dropping the term $\nu \| {\partial_{x}} \hat{u} \|^2$, applying the Gronwall inequality and then making use of (\[e4.9a\]), (\[e4.10b\]) and Lemma \[thm:AuxSystemEst\], we infer that $$\label{eq:uniformbnduxl2} \begin{split} \sup_{0\leq t' \leq \xi\wedge t} |{\partial_{x}} \hat{u}|^2 \leq & \exp\left( \int_0^{\xi\wedge t} R_3 dt'\right) \left( |{\partial_{x}} u_0 |^2 + \int_0^{\xi\wedge t} (R_4 + C|F|^2)dt'\right) \\ \leq & K(t,\omega) < + \infty . \end{split}$$ We then integrate over $[0, \xi\wedge t]$ and infer, using again these bounds, that $$\label{eq:uniformBnduxH1} \begin{split} \int_0^{\xi\wedge t} \|{\partial_{x}}\hat{u}\|^2 dt' &\leq \|U_0\|^2 + \int_0^{\xi \wedge t} ( R_3 |{\partial_{x}} \hat{u}|^2 + R_4 + |F|^2 )dt'\\ &\leq K(t,\omega) < \infty , \end{split}$$ where the final inequality follows from the previous bound (\[eq:uniformbnduxl2\]). This completes the proof of Lemma \[thm:uxestimatesLemma\]. \[rmk:BndMods\] With some minor modifications to the proof, Lemma \[thm:uxestimatesLemma\] may be established if we merely assume that $$\label{eq:ptwiseControlBndsMod} \begin{split} \sup_{t' \leq \tau_n} \left( | \hat{U} |^2 + \| \check{U} \|^2 + |{\partial_{z}} \hat{u} |^2 \right) + \int_0^{\tau_n} (\|\hat{U} \|^2 + |\check{U}|_{(2)}^2+ \|{\partial_{z}} \hat{u} \|^2) dt' \leq K < \infty \quad a.s. \end{split}$$ On the other hand the proof of Lemma \[thm:uzestimatesLemma\] seems to require that $$\label{eq:StrongerCondOnUCheck} \sup_{t' \leq \xi}| \check{U}|_{(2)}^2 \leq K < \infty.$$ This condition is needed in order to handle $J_{8}$ appearing in (\[eq:QuzzevolutionEqnRandomPDE\]).
The requirement (\[eq:StrongerCondOnUCheck\]) is achieved, but at the cost of a slightly more restrictive condition on $\sigma$ as compared to previous work. We underline here that this is the only point in this work where we require the final condition in (\[eq:lipCond\]). \[r4.3\] We observe that the $H^1$-norm $||\varphi||$ of a function $\varphi$ is equivalent to the norm $(|\varphi|^2 + |\partial_x\varphi|^2 + |\partial_z\varphi|^2)^{1/2},$ and the $H^2$-norm $|\varphi|_{(2)}$ of $\varphi$ is equivalent to the norm $(\|\partial_x\varphi \|^2 + \| \partial_z\varphi \|^2 + ||\varphi||^2)^{1/2}.$ We then infer from (\[e4.9a\]) and Lemmas \[thm:uzestimatesLemma\] and \[thm:uxestimatesLemma\] that, $\hat{u}$ being as in these lemmas, for every $t>0,$ there exists a constant $K=K^4(t,\omega)$ depending on $t,\omega$ and the data, such that $$\label{e4.37} \sup_{0\leq t'\leq\xi\wedge t} ||\hat{u}||^2 + \int^{\xi\wedge t}_0 |\hat{u}|^2_{(2)} ds\leq K^4 < \infty\quad a.s.$$ Strong estimates for U {#sec:strong-type-estim} ---------------------- With the above preliminaries in hand we may now proceed to study $U$ in the strong norms, the final step of the proof of global existence.
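For later reference we record the following consequence of (\[e4.9a\]), (\[e4.37\]) and Lemma \[thm:AuxSystemEst\]; this summary restates bounds already established above in a single display, with $K = K(t,\omega)$ denoting, as above, an a.s. finite constant. For every $t > 0$, $$\sup_{0\leq t'\leq\xi\wedge t} \bigl( |\hat{U}|^2 + \|\hat{u}\|^2 + |\check{U}|^2_{(2)} \bigr) + \int_0^{\xi\wedge t} \bigl( \|\hat{U}\|^2 + |\hat{u}|^2_{(2)} \bigr) ds \leq K(t,\omega) < \infty \quad a.s.$$ It is precisely these pathwise bounds which allow us to control the nonlinear terms in what follows.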
\[thm:StrongEstU\] Suppose that $0 < n < \infty$ is a deterministic constant and let $\tau_n \leq \xi$ be the stopping time defined by $$\label{eq:controlBoundStrongFirstComp} \tau_n = \inf \left\{ t \geq 0: \int_0^{\xi\wedge t} |u|_{(2)}^2 dt' > n\right\} \wedge\xi .$$ Then, for any $t > 0$ there exists a deterministic constant $K=K^5_n(t)$ depending on $n$, $t$ and the data, such that: $$\label{eq:meanboundWControl} {\mathbb{E}}\left( \sup_{0\leq t' \leq \tau_n\wedge t} \| U \|^2 + \int_{0}^{\tau_n\wedge t} |A U |^2 dt' \right)\leq K^5_n(t).$$ By the Itō formula and truncation argument (see [@Bensoussan1]) we derive an equation for $t \mapsto \|U(t)\|^2$: $$\label{eq:AhalfUdifferential} \begin{split} d \|U\|^2 +& 2|AU|^2dt \\ =& (2\langle F - A_pU - B(U) - EU, AU \rangle + \|\sigma(U)\|^2_{L_2(\mathfrak{U}, V)} )dt\\ &+ 2\langle A^{1/2}\sigma(U) , A^{1/2} U \rangle dW. \end{split}$$ Note that due to Proposition \[thm:MaxExist\] this equality holds on the interval $[0, \xi)$. Fix arbitrary stopping times $0 \leq \tau_a \leq \tau_b \leq \tau_n \wedge t$. We now make estimates of (\[eq:AhalfUdifferential\]) on this interval in order to apply the stochastic version of the Gronwall lemma in [@GlattHoltzZiane2 Lemma 5.3]. As is typical, the stochastic terms are majorized by applying the Burkholder-Davis-Gundy inequality, $$\begin{split} \mathbb{E} \sup_{\tau_a \leq t' \leq \tau_b} & \left|\int_{\tau_a}^{t'} \langle A^{1/2}\sigma(U) , A^{1/2} U \rangle dW \right|\\ &\leq c\ \mathbb{E} \left( \int_{\tau_a}^{\tau_b} \langle A^{1/2}\sigma(U) , A^{1/2} U \rangle_{L_2(\mathfrak{U}, H)}^2 dt'\right)^{1/2}\\ &\leq \frac{1}{2} \mathbb{E} \left( \sup_{\tau_a \leq t' \leq \tau_b } \|U\|^2 \right) + c\ \mathbb{E} \int_{\tau_a}^{\tau_b} (1 + \|U\|^2)ds.
\end{split}$$ By applying we may estimate the nonlinear part of the equation $$|\langle B(U), AU \rangle| \leq c \|u\|^{1/2} |u|^{1/2}_{(2)} \|U\| |AU| \leq c |u|_{(2)}^2 \|U\|^2 + \frac{1}{4}|AU|^2.$$ Making use of these two observations and obvious applications of Young’s inequality for the lower order terms we may estimate $$\label{eq:FinalEstU} \begin{split} \mathbb{E}& \left( \sup_{ \tau_a \leq t' \leq \tau_b} \|U\|^2 + \int_{\tau_a}^{\tau_b} |AU|^2 dt' \right)\\ &\leq c\ {\mathbb{E}}\| U(\tau_a)\|^2 + c\ {\mathbb{E}}\int_{\tau_a}^{\tau_b} (1+|F|^2 + (1 + |u|_{(2)}^2) \|U\|^2)dt'. \end{split}$$ The Gronwall lemma in [@GlattHoltzZiane2] applies to real-valued, non-negative processes $X,Y,Z,R$ defined on an interval of time $[0,T),$ and such that, for a stopping time $0<\tau <T,$ $$\mathbb{E}\int^\tau_0 (RX + Z) ds < \infty,$$ and such that $\int^\tau_0 R ds \leq k$ a.s. Assuming that, for all stopping times $0\leq\tau_a <\tau_b<\tau$, $$\mathbb{E}\left(\sup_{\tau_{a} < t <\tau_{b}} X + \int^{\tau_b}_{\tau_a} Yds\right) \leq C_0\, \mathbb{E}\left( X(\tau_a) + \int^{\tau_b}_{\tau_a} (RX + Z)ds\right)$$ where $C_0$ is a constant independent of the choice of $\tau_a$ and $\tau_b$, then $$\mathbb{E}\left( \sup_{0 < t < \tau} X + \int^\tau_0 Yds\right) \leq C\mathbb{E}\left( X(0) + \int^\tau_0 Zds\right),$$ where $C = C(C_0,T,k).$ We now just apply this lemma with $\tau = \tau_n, X =||U||^2, Y=|AU|^2, R= c(1+|u|^2_{(2)}), Z = c (1+|F|^2)$ and the result follows. Stopping time arguments {#sec:stopp-time-argum} ----------------------- We now implement the stopping time arguments that, applied in combination with Lemmas \[thm:AuxSystemEst\] - \[thm:StrongEstU\], imply that $\xi = \infty$.
We define the stochastic processes $$\label{eq:normsofPossibleBlowup} \begin{split} X_1(t) &:= \sup_{0 \leq t' \leq t \wedge \xi} | {\partial_{z}} \hat{u}|^2 + \int_0^{t \wedge \xi} \| {\partial_{z}} \hat{u}\|^2 dt'\\ X_2(t) &:= \sup_{0 \leq t' \leq t \wedge \xi} | {\partial_{x}} \hat{u}|^2 + \int_0^{t \wedge \xi} \| {\partial_{x}} \hat{u}\|^2 dt'\\ X(t) &:=\sup_{0 \leq t' \leq t \wedge \xi} \| U\|^2 + \int_0^{t \wedge \xi} |A U |^2 dt'\\ \end{split}$$ and recall, with Lemmas \[thm:uzestimatesLemma\] and \[thm:uxestimatesLemma\], that $X_1(t)$ and $X_2(t)$ are almost surely finite for all $t \geq 0$. For $X(t),$ it follows from Lemma \[thm:StrongEstU\] that $X(t)$ is a.s. finite for every $t \in [0, \tau_{n}]$ where $\tau_n$ is defined by . We first aim to show that $\tau_n\uparrow\infty$ a.s. as $n\rightarrow\infty.$ Recalling that $u=\hat{u} + \check{u}$, we observe that $|u|_{(2)}^2\leq 2|\hat{u}|^2_{(2)} + 2|\check{u}|^2_{(2)}$ and infer, with Chebyshev’s inequality, that for any $t > 0$, $$\begin{split} \mathbb{P} (\tau_n < t) &\leq \mathbb{P}\left(\int^{\xi\wedge t}_0|u|^2_{(2)} ds > n\right)\\ &\leq \mathbb{P}\left(\int^{\xi\wedge t}_0 |\hat{u}|^2_{(2)} ds >\frac{n}{2}\right) + \mathbb{P} \left(\int^{\xi\wedge t}_0 |\check{u}|^2_{(2)} ds > \frac{n}{2}\right)\\ &\leq\mathbb{P}\left( X_1(t) + X_2(t) > cn\right) + \frac{c}{n}\mathbb{E}\int^t_0|\check{u}|^2_{(2)}ds. \end{split}$$ Thanks to (\[eq:checkStrongEstimate\]), this implies that $$\lim_{n\rightarrow\infty} \mathbb{P}(\tau_n < t ) \leq \mathbb{P}(X_1(t) + X_2(t) = \infty)=0.$$ Observing that the sequence $\tau_n$ is a.s. increasing, we have $$\mathbb{P} \left(\lim_{n\rightarrow\infty}\tau_n < t\right) = \lim_{n\rightarrow\infty}\mathbb{P} (\tau_n < t) = 0,$$ and hence $\tau_n\uparrow\infty$ a.s.
as $n\rightarrow\infty.$ We now consider, for any $M > 0,$ the stopping time $$\sigma_M = \inf \left\{ r \geq 0 : X(r) \geq M\right\}$$ and, in view of applying Proposition \[thm:stArg1\] below, we want to evaluate $\mathbb{E} X (\tau_n\wedge\sigma_M\wedge t).$ To this end, we employ Lemma \[thm:StrongEstU\] and infer that $$\sup_M {\mathbb{E}}X( \tau_n \wedge \sigma_M \wedge t) \leq K^5_n(t) < \infty.$$ We finally conclude, by invoking Proposition \[thm:stArg1\], that $X(t) < \infty$ for any $t >0$. This implies $$\label{eq:finiteBlowUpCondOnStrongNorms} X(\xi(\omega)) < \infty \textrm{ for a.a. } \omega \in \{ \xi < \infty \}$$ but since $(U,\xi)$ is a maximal strong solution (cf. (\[eq:FiniteTimeBlowUp\])), we perforce conclude that $\xi = \infty$ a.s. The proof of Theorem \[thm:MainReslt\] is thus complete. Appendix I: An Abstract Stopping Time Result {#sec:append-abstr-lemm} ============================================ We have made use of the following new result in the final steps of the above proof of global existence. \[thm:stArg1\] Fix $(\Omega,\mathcal{F}, \mathbb{P}, \{ \mathcal{F}_t\}_{t \geq 0})$, a filtered probability space. Let $X: \Omega \times [0,\infty) \rightarrow \mathbb{R}^+ \cup \{\infty \}$ be an increasing càdlàg stochastic process and define $$\sigma_M = \inf\{r \geq 0: X(r) \geq M \}.$$ Suppose that there exists an increasing sequence of stopping times $\tau_n$ such that $\tau_n \uparrow \infty$ a.s. and such that for any fixed $n > 0$, $t >0$: $$\kappa_{n,t} := \sup_M \mathbb{E} X(\tau_n\wedge\sigma_M\wedge t) < \infty.$$ Then, for a set $\tilde{\Omega} \subset \Omega$ of full measure, $$\label{eq:noBLOWUPBABY} X(t, \omega) < \infty, \quad \textrm{ for all } t \in [0, \infty), \omega \in \tilde{\Omega}.$$ It is sufficient to show that $\lim_{M \rightarrow \infty} \mathbb{P}(\sigma_M < t) = 0$ for every $t > 0$.
Indeed since $$\{ X(t) = \infty \} = \cap_{M > 0} \{ X(t) \geq M \} \quad \textrm{ and } \quad \{ X(t) \geq M \} \subseteq \{\sigma_M \leq t \},$$ and since $\sigma_M$ is an increasing function of $M$, so that for any $M' > M$, $$\{\sigma_{M'} \leq t\} \subseteq \{ \sigma_{M} \leq t\},$$ we have that $$\begin{split} \mathbb{P} ( X(t) = \infty) &= \mathbb{P} ( \cap_{M > 0} \{X(t) \geq M\})\\ &\leq \mathbb{P} ( \cap_{M > 0} \{\sigma_M \leq t \})\\ &= \lim_{M \rightarrow \infty} \mathbb{P} ( \sigma_M \leq t )\\ &\leq \lim_{M \rightarrow \infty} \mathbb{P} ( \sigma_M < t + 1 ).\\ \end{split}$$ Given any $M,n$, observe that since $X$ is right continuous and increasing, $$\begin{split} \{\sigma_M < t, \tau_n \geq t\} &= \{X(\sigma_{M} \wedge t) \geq M, \sigma_{M} < t, \tau_n \geq t \}\\ &\subseteq \{X(\sigma_{M} \wedge t) \geq M, \tau_n \geq t \}\\ &\subseteq \{X(\sigma_{M} \wedge \tau_n \wedge t) \geq M\},\\ \end{split}$$ and therefore $$\begin{split} \mathbb{P} (\sigma_M < t) & \leq \mathbb{P} (\sigma_M < t, \tau_n \geq t) + \mathbb{P} (\tau_n < t)\\ & \leq \mathbb{P}(X(\sigma_M \wedge \tau_n \wedge t) \geq M) + \mathbb{P} (\tau_n < t)\\ & \leq \frac{1}{M} \mathbb{E}(X(\sigma_M \wedge \tau_n \wedge t)) + \mathbb{P} (\tau_n < t)\\ & \leq \frac{\kappa_{n,t}}{M} + \mathbb{P} (\tau_n < t).\\ \end{split}$$ Thus, for any fixed $n$ and $t$, $$\lim_{M \rightarrow \infty}\mathbb{P}(\sigma_M < t) \leq \mathbb{P}(\tau_n < t).$$ However, given the assumptions on $\tau_n$, we have that $$\lim_{n \rightarrow \infty}\mathbb{P}(\tau_n < t) =0,$$ which shows that, for each fixed $t$, $X(t,\omega)<\infty$ for a.e. $\omega\in\Omega.$ To determine the set $\tilde{\Omega}$ in (\[eq:noBLOWUPBABY\]) and complete the proof, we observe that $X$ is an increasing function of $t$ and call, for each $j\in\mathbb{N}$, $\Omega_j$ the set of full measure such that $X(j,\omega) <\infty,\enspace\forall\omega\in\Omega_j.$ Then $X(t,\omega) <\infty$ for every $t$, $0\leq t\leq j,$ and every $\omega\in\Omega_j$, and we can take for $\tilde{\Omega}$ the intersection $\cap_{j\geq 1}\Omega_j$, which is a set of full measure as well.
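The following elementary deterministic example, which we include only to illustrate how the hypothesis of Proposition \[thm:stArg1\] detects finite time blow-up, may be helpful. Consider the increasing process $$X(t) = \tan\Bigl(\frac{\pi t}{2}\Bigr)\, {1 \! \! 1_{t < 1}} + \infty \cdot {1 \! \! 1_{t \geq 1}},$$ which blows up at the finite time $t_* = 1$, so that $\sigma_M = \tfrac{2}{\pi}\arctan M \uparrow 1$ as $M \rightarrow \infty$. Taking $\tau_n = n$ we find, for any $t, n \geq 1$, $$\mathbb{E}\, X(\tau_n\wedge\sigma_M\wedge t) = X(\sigma_M) = M,$$ so that $\kappa_{n,t} = \sup_M M = \infty$ and the hypothesis fails, as it must. By contrast, for $t < 1$ we have $\mathbb{E}\, X(\tau_n\wedge\sigma_M\wedge t) \leq \tan(\pi t/2)$ uniformly in $M$. The content of the proposition is precisely that such uniform-in-$M$ bounds on the stopped process rule out finite time blow-up.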
Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by the National Science Foundation under the grants DMS-0604235, DMS-0906440, and DMS-1004638 and by the Research Fund of Indiana University. [^1]: The Boussinesq approximation concerns the oceans. For the atmosphere we arrive at very similar equations by considering the pressure as the vertical coordinate, but, for the sake of simplicity, the emphasis here will be on the case of the oceans. [^2]: Indeed, $w$, $p$ and $\rho$ are called *diagnostic* variables in geophysical fluid mechanics. By contrast, $u$, $v$ and $T$ are referred to as *prognostic* variables and are the unknowns in an initial value problem which we set up below. [^3]: One sometimes also finds the more general definition $(U, U^{\sharp}) := \int_{\mathcal{M}} \mathbf{v}\cdot \mathbf{v}^\sharp d \mathcal{M} + \kappa \int_{\mathcal{M}} T T^\sharp d \mathcal{M}$ with $\kappa > 0$ fixed. This $\kappa$ is useful for the coherence of physical dimensions and for (mathematical) coercivity. Since this is not needed here we take $\kappa =1$. [^4]: For a given stochastic basis $\mathcal{S}$, let $\Phi = [0,\infty)\times\Omega$ and take $\mathcal{G}$ to be the $\sigma$-algebra generated by sets of the form $$(s,t] \times F, \quad 0 \leq s< t< \infty, F \in \mathcal{F}_s; \quad \quad \{0\} \times F, \quad F \in \mathcal{F}_0.$$ Recall that an $X$-valued process $U$ is called predictable (with respect to the stochastic basis $\mathcal{S}$) if it is measurable from $(\Phi,\mathcal{G})$ into $(X, \mathcal{B}(X))$, $\mathcal{B}(X)$ being the family of Borel sets of $X$.
[^5]: To recover the formulation of the stochastic forcings in , we may consider the special case where $$\begin{split} \sigma^k_u \equiv 0 &\textrm{ when } k \neq 0 \, (mod \, 3) \\ \sigma^k_v \equiv 0 &\textrm{ when } k \neq 1 \, (mod \, 3) \\ \sigma^k_T \equiv 0 &\textrm{ when } k \neq 2 \, (mod \, 3) \\ \end{split}$$ and take $\dot{W}_1 = \sum_k \dot{W}^{3k} e_{3k}$, $\dot{W}_2 = \sum_k \dot{W}^{3k+1} e_{3k +1}$, $\dot{W}_3 = \sum_k \dot{W}^{3k+2} e_{3k +2}$.
--- author: - Małgorzata Wierzbowska date: 'Received: date / Revised version: date' title: 'Effect of spin fluctuations on T$_{c}$ from density-functional theory for superconductors.' ---
--- abstract: 'We consider a Dirichlet problem for the Allen–Cahn equation in a smooth, bounded or unbounded, domain $\Omega\subset{\mathbb{R}}^n.$ Under suitable assumptions, we prove an existence result and a uniform exponential estimate for symmetric solutions. In dimension $n=2$ an additional asymptotic result is obtained. These results are based on a pointwise estimate obtained for local minimizers of the Allen–Cahn energy.' author: - 'Giorgio Fusco[^1],  Francesco Leonetti[^2]   and Cristina Pignotti[^3]' title: 'On the asymptotic behavior of symmetric solutions of the Allen-Cahn equation in unbounded domains in ${\mathbb{R}}^2$' --- Introduction ============ We consider the Allen-Cahn equation $$\begin{aligned} \label{equation} \left\{\begin{array}{l}\Delta u = W^{\prime}(u), \quad x\in\Omega,\\ u=g, \hspace{1.5 cm} x\in \partial\Omega, \end{array}\right.\end{aligned}$$ where $\Omega\subset{\mathbb{R}}^n$ is a bounded or unbounded domain, $g:\partial\Omega\rightarrow{\mathbb{R}}$ is continuous and bounded and $W:{\mathbb{R}}\rightarrow{\mathbb{R}}$ is a $C^3$ potential. We are interested in symmetric solutions: $$u(\hat{x})=-u(x),\;\text{ for }\;x\in\Omega$$ where for $z\in{\mathbb{R}}^d$ we let $\hat{z}=(-z_1, z_2,\dots,z_d)$ denote the reflection in the plane $z_1=0$. We assume: h$_1-$ : $W:{\mathbb{R}}\rightarrow{\mathbb{R}}$ is an even function: $$\begin{aligned} W(-u)=W(u),\; \text{ for }\;u\in{\mathbb{R}},\end{aligned}$$ which has a unique non-degenerate positive minimizer: $$\begin{aligned} \label{potential} &&0=W(1)<W(u),\; \text{ for }\;u\geq 0,\; u\neq 1,\\\nonumber &&W^{\prime\prime}(1)>0.\end{aligned}$$ h$_2-$ : There is $M>0$ such that $$\begin{aligned} \label{m-exists} W^\prime(u)\geq 0,\;\text{ for }\;u\geq M.\end{aligned}$$ h$_3-$ : $\Omega\subset{\mathbb{R}}^n$ is a domain with nonempty boundary which is symmetric: $$\begin{aligned} \label{omega-symmetric} x\in\Omega\Rightarrow\hat{x}\in\Omega,\end{aligned}$$ and of class $C^{2+\alpha}$.
If $\Omega$ is unbounded we require that $\Omega$ satisfies a uniform interior sphere condition and that the curvature of $\partial\Omega$ is bounded in the $C^\alpha$ sense. If $S\subset{\mathbb{R}}^d$ is a symmetric set, we define $S^+:=\{x\in S:x_1>0\}$. We first consider the case of general $n\geq 1$ and prove the existence of a symmetric solution which is near $1$ in $\Omega^+$. Note that, in general, $\partial(\Omega^+)\neq(\partial\Omega)^+$. \[main\] Assume that $W$ and $\Omega\subset{\mathbb{R}}^n$ satisfy h$_1$, h$_2$ and h$_3$. Assume that $g:\partial\Omega\rightarrow{\mathbb{R}}$ is symmetric and bounded as a $C^{2,\alpha}(\partial\Omega;{\mathbb{R}})$ function and satisfies $$g(x)\geq 0,\;\text{ for }\; x\in(\partial\Omega)^+.$$ Then, problem $(\ref{equation})$ has a symmetric classical solution $u\in C^2(\overline{\Omega};{\mathbb{R}})$ such that $$\label{u-properties} \begin{array}{l} u(x)\geq 0,\;\text{ for }\; x\in\Omega^+ ,\\ \displaystyle{ \vert u(x)-1\vert\leq K e^{-k d(x,\partial(\Omega^+))}, \quad x\in\Omega^+,} \end{array}$$ for some positive constants $k$, $K$ that depend only on $W, n$ and on the $C^1(\overline{\Omega};{\mathbb{R}})$ norm of $u$. (We assume that $g$ is extended to $\Omega$ as a symmetric $C^{2,\alpha}$ map). A similar statement is valid in the case of Neumann boundary conditions. We then restrict to the case $n=2$ and prove the following asymptotic result \[teo\] Assume $W$ as in Theorem $\ref{main}$ and assume that $\Omega\subset{\mathbb{R}}^2$ satisfies h$_3$ and is convex in $x_1$ i.e. 
$$\begin{aligned} \label{x1-convex} (x_1,x_2)\in\Omega\Rightarrow(t x_1,x_2)\in\Omega,\;\text{ for }\;\vert t\vert\leq 1.\end{aligned}$$ Let $u$ be the solution of the Allen-Cahn equation $(\ref{equation})$ given by Theorem $\ref{main}.$ Then there exists a continuous decreasing map $R\rightarrow q(R),\;\;\lim_{R\rightarrow+\infty}q(R)=0$, such that $$\begin{aligned} \label{near-ubar} \vert u(x_1,x_2)-\bar{u}(x_1)\vert\leq q\Big(d(x,\partial\Omega)\Big),\;\text{ for }\;x\in\Omega,\end{aligned}$$ where $\bar{u}:{\mathbb{R}}\rightarrow{\mathbb{R}}$ is the odd solution of $$\begin{aligned} \label{odd-solution} && v^{\prime\prime}=W^\prime(v),\;\;s\in{\mathbb{R}}\\\nonumber && \lim_{s\rightarrow+\infty}v(s)=1.\end{aligned}$$ The map $q$ depends only on $W, n$ and on the $C^1(\overline{\Omega};{\mathbb{R}})$ norm of $u$. A convergence result for odd solutions of (\[equation\]) similar to (\[near-ubar\]), valid in the case where $\Omega\subset{\mathbb{R}}^n$ is a half space, was obtained, among other things, in [@eh] (cf. Theorem 1.1). The point of Theorem \[teo\] is that, even though it is restricted to $n=2$, it applies to general domains that satisfy (\[x1-convex\]). Some of the ideas in the proof of Theorem \[teo\] have been extended and utilized in [@af], where the restriction to $n=2$ is removed and $u$ is allowed to be a vector. The proofs of Theorems \[main\] and \[teo\] are variational and are based on a pointwise estimate for [*local minimizers*]{} of the Allen-Cahn energy $$\begin{aligned} \label{ac-energy} J_A(u):=\int_A(\frac{1}{2}\vert\nabla u\vert^2+W(u))dx,\end{aligned}$$ defined for every bounded domain $A\subset{\mathbb{R}}^n$ and $u\in W^{1,2}(A;{\mathbb{R}})$. \[local-min\] Let $\Omega\subset{\mathbb{R}}^n$ be a domain.
A map $u\in W_{\rm loc}^{1,2}(\Omega;{\mathbb{R}})$ is a local minimizer of the Allen-Cahn energy if $$\begin{aligned} \label{minimization} J_A(u)=\min_{v\in W_0^{1,2}(A;{\mathbb{R}})}J_A(u+v),\end{aligned}$$ for every bounded Lipschitz domain $A\subset\Omega$. In the following we denote by $k, K$ and $C$ generic positive constants that can change from line to line. The pointwise estimate alluded to above is stated in the following \[teo2\] Assume $W:{\mathbb{R}}\rightarrow{\mathbb{R}}$ is a $C^2$ function such that (i) : $0=W(0)<W(v),\;\text{ for }\;v\in{\mathbb{R}}\setminus\{0\}$, (ii) : $\liminf_{\vert v\vert\rightarrow+\infty}W(v)>0$, (iii) : $W^{\prime\prime}(0)>0$. Let $\Omega\subset{\mathbb{R}}^n$ be a domain and $u\in C^1(\Omega;{\mathbb{R}})$ a local minimizer of the Allen-Cahn energy that satisfies $$\begin{aligned} \label{bounds-udu} \vert u\vert+\vert\nabla u\vert\leq M_0,\;\text{ for }\;x\in \Omega,\end{aligned}$$ for some $M_0>0$. Then there is $q^*>0$ with the property that for each $q\in(0,q^*]$ there is $R(q)>0$ such that $$\begin{aligned} \label{pointw-cond} B_{x,R(q)}\subset \Omega\Rightarrow \vert u(x)\vert< q.\end{aligned}$$ Moreover $R(q)$ can be chosen strictly decreasing and continuous in $(0,q^*]$. The inverse map $q(R)$ satisfies $$\begin{aligned} \label{exp-bound} q(R)\leq Ke^{-kR},\;\;R\in[R(q^*),+\infty),\end{aligned}$$ for some positive constants $k, K$ that depend only on $W, n$ and the bound $M_0$. The paper is organized as follows. In Sect. 2 we use Theorem \[teo2\] to prove Theorem \[main\]. In Sect. 3 we prove Theorem \[teo\]. Finally in Sect. 4 we present a proof of Theorem \[teo2\]. The proof is an adaptation to the scalar case of arguments developed in [@f] and [@f1] for the vector Allen-Cahn equation. The proof of Theorem \[main\] ============================= We first consider the case of $\Omega$ bounded.
Then standard arguments from variational calculus yield the existence of a symmetric minimizer $u\in g+W_{0,S}^{1,2}(\Omega;{\mathbb{R}})$ of $J_\Omega$; we denote by $W_{0,S}^{1,2}(\Omega;{\mathbb{R}})$ the subspace of symmetric maps of $W_0^{1,2}(\Omega;{\mathbb{R}})$. Let $g_m=\max_{(\partial\Omega)^+}g$. We can assume $$\begin{aligned} \label{can-assume-u} \vert u\vert\leq M^\prime:=\max\{M,g_m\},\end{aligned}$$ and $$\begin{aligned} \label{can-assume-u1} u\geq 0\;\text{ on }\;\Omega^+.\end{aligned}$$ To prove (\[can-assume-u\]) we let $v\in g+W_{0,S}^{1,2}(\Omega;{\mathbb{R}})$ be the symmetric function defined by $$\begin{aligned} \label{def-vu} v=\left\{\begin{array}{l} u,\;\text{ on }\;\Omega^+\cap\{u\leq M^\prime\},\\ M^\prime,\;\text{ on }\;\Omega^+\cap\{u> M^\prime\}, \end{array}\right.\end{aligned}$$ and observe that if $\Omega^+\cap\{u> M^\prime\}$ has positive measure, then h$_2$ implies $$\begin{aligned} J_\Omega(u)-J_\Omega(v)=\int_{\Omega^+\cap\{u> M^\prime\}}(\vert\nabla u\vert^2+2(W(u)-W(v))) dx>0,\end{aligned}$$ in contradiction with the minimality of $u$. The proof of (\[can-assume-u1\]) is similar. From the bound (\[can-assume-u\]), the smoothness assumption on $\partial\Omega$ in h$_3$ and elliptic regularity we obtain that $u$ is a classical solution of (\[equation\]) and $$\begin{aligned} \label{emme-bound} \|u\|_{C^{2,\alpha}(\overline{\Omega};{\mathbb{R}})}\leq M^{\prime\prime},\end{aligned}$$ for some constant $M^{\prime\prime}>0$. The restriction of $u$ to $\Omega^+$ trivially satisfies the definition of minimizer of the Allen-Cahn energy in $\Omega^+$ with potential $\tilde{W}$ that, by (\[can-assume-u\]) and (\[can-assume-u1\]), can be identified with any smooth function that satisfies $\tilde{W}(s)=W(s)$, for $s\geq 0$ and $\tilde{W}(s)>W(\vert s\vert)$, for $s<0$.
From this and (\[emme-bound\]) it follows that we can apply Theorem \[teo2\] to $\hat{u}=u-1$ with potential $\tilde{W}(\cdot +1)$ and conclude that $u$ satisfies the exponential estimate $$\begin{aligned} \label{e} \vert u(x)-1\vert\leq K e^{-k d(x,\partial\Omega^+)},\;\text{ for }\;x\in\Omega^+,\end{aligned}$$ with $k, K$ depending only on $W$ and $M^{\prime\prime}$. This concludes the proof for $\Omega$ bounded. If $\Omega$ is unbounded we consider a sequence of bounded domains $\Omega_j,\;j\in{{{\rm I} \kern -.15em {\rm N} }},$ such that $\Omega_j\subset\Omega_{j+1}$ and $\Omega=\cup_j\Omega_j$. From h$_3$ we can assume that the boundary of $\Omega_j$ is of class $C^{2,\alpha}$ and satisfies an interior sphere condition uniformly in $j\in{{{\rm I} \kern -.15em {\rm N} }}.$ Therefore, denoting by $u_j$ the symmetric minimizer obtained as above in $\Omega_j$, the same reasoning developed for the case of bounded $\Omega$ yields $$\begin{aligned} \label{emme-bound-j} \|u_j\|_{C^{2,\alpha}(\overline{\Omega_j};{\mathbb{R}})}\leq M^{\prime\prime},\;\text{ for }\;j\in{{{\rm I} \kern -.15em {\rm N} }},\end{aligned}$$ and $$\begin{aligned} \label{e-j} \vert u_j(x)-1\vert\leq K e^{-k d(x,\partial\Omega_j^+)},\;\text{ for }\;x\in \Omega_j^+,\;j\in{{{\rm I} \kern -.15em {\rm N} }}.\end{aligned}$$ The estimate (\[emme-bound-j\]) implies that, passing to a subsequence if necessary, we can assume that $u_j$ converges locally in $C^2$ to a classical solution $u:\Omega\rightarrow{\mathbb{R}}$ of (\[equation\]) and (\[e-j\]) implies that $u$ satisfies the exponential estimate in Theorem \[main\]. The proof of Theorem \[main\] is complete.
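To fix ideas (the following example is ours and is not part of the original argument), the prototypical potential satisfying h$_1$ and h$_2$ is the quartic double well, for which the one-dimensional profile and the constants in the exponential estimate (\[e\]) can be computed explicitly:

```latex
% Illustration (our example): quartic double well and its explicit 1D profile.
% h_1: W is even, W(1)=0<W(u) for u>=0, u\neq 1, and W''(1)=2>0.
% h_2: W'(u)=u^3-u\geq 0 for u\geq 1, so one may take M=1.
W(u)=\tfrac{1}{4}(1-u^2)^2,\qquad W'(u)=u^3-u,\qquad W''(u)=3u^2-1.
% The odd solution of v''=W'(v) with v(\pm\infty)=\pm 1 is
\bar{u}(s)=\tanh\bigl(s/\sqrt{2}\bigr),
% and, for s\geq 0,
\vert\bar{u}(s)-1\vert=\frac{2e^{-\sqrt{2}\,s}}{1+e^{-\sqrt{2}\,s}}
  \leq 2e^{-\sqrt{2}\,s},
% i.e. the exponential estimate holds with K=2 and k=\sqrt{2}=\sqrt{W''(1)}.
```

This is consistent with the expectation that the decay rate in (\[e\]) is governed by the linearization of $W^\prime$ at $u=1$.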
\[remark\] Elliptic regularity implies that we can upgrade the exponential estimate in Theorem \[main\] to $$\begin{aligned} \label{exp-upgrade} \vert u(x)-1\vert+\vert\nabla u(x)\vert\leq K e^{-k d(x,\partial\Omega^+)},\;\text{ for }\;x\in \Omega^+.\end{aligned}$$ In the proof of Theorem \[teo\] below we make systematic use of the fact that the solution of (\[equation\]) given by Theorem \[main\] is a local minimizer in the sense of Definition \[local-min\]. This is obvious when $\Omega$ is bounded. If $\Omega$ is unbounded it follows from the fact that $u=\lim_{j\rightarrow+\infty}u_j$ is the limit of a sequence of minimizers $u_j:\Omega_j\rightarrow{\mathbb{R}}$, $\Omega_j$ bounded, that converge to $u$ uniformly on compact sets [@s]. The proof of Theorem \[teo\] ============================ We divide the proof into several lemmas. For $l\in (0,+\infty]$ let $$\label{Bl} {\mathcal B}_l:=\{\, v\in W^{1,2}_{\rm odd}((-l,l);{\mathbb{R}})\, :\, v(\pm l)=0;\; \Vert v\Vert_{1,l}\le M^{\prime\prime}\,\},$$ where $W^{1,2}_{\rm odd}((-l,l);{\mathbb{R}})\subset W^{1,2}((-l,l);{\mathbb{R}})$ is the subset of the odd functions and $\Vert v\Vert_{1,l}$ is the usual $W^{1,2}$ norm of $v.$ Let ${\mathcal S}\subset W^{1,2}_{\rm odd}((-l,l);{\mathbb{R}})$ be defined by $$\label{ESSE} {\mathcal S}:=\{\, \nu\in W^{1,2}_{\rm odd}((-l,l);{\mathbb{R}})\,:\, \Vert \nu\Vert_l=1\,\},$$ where, for $l\in(0,+\infty]$, $\Vert v\Vert_l$ denotes the $L^2((-l,l);{\mathbb{R}})$ norm of $v.$ In particular $\|v\|_\infty=\|v\|_{L^2({\mathbb{R}})}$.
For $v\in\mathcal{B}_l$ we write $$v=q\nu,\;\text{ with }\;q=\|v\|_l\;\text {and }\;\nu\in\mathcal{S}.$$ For $w\in W^{1,2}((-l,l);{\mathbb{R}})$ we set $$\label{smallE} e_l(w)=\int_{-l}^l\Big(\frac{1}{2}\vert w_{x_1}\vert^2+W(w)\Big) dx_1.$$ Define $E_l:{\mathcal B}_l\rightarrow {\mathbb{R}}$ by setting $$\label{Econl} \begin{split} & E_l(v)=e_l(\bar{u}+v)-e_l(\bar{u})\\&=\frac 1 2\int_{-l}^l (\vert \overline{u}_{x_1}+v_{x_1}\vert^2-\vert \overline{u}_{x_1}\vert^2) dx_1 +\int_{-l}^l[W(\overline{u}+v)-W(\overline{u})] dx_1. \end{split}$$ \[lemma1\] There exist $q_0>0, c>0,$ independent of $l>1$, such that for $v\in\mathcal{B}_l$, $v=q\nu$, we have $$\label{stimaa} E_l(q\nu)\ge \frac 1 2 c^2q^2,\hspace{2 cm} 0<q\le q_0,\ \nu\in {\mathcal S},$$ $$\label{stimab} E_l(q\nu)\ge \frac 1 2 c^2q_0^2,\hspace{2.6 cm} q_0\le q,\ \nu\in {\mathcal S}.$$ Moreover, we have $$\label{stimac} D_{qq}E_l(q\nu)\ge c^2,\hspace{2.6 cm} 0<q\le q_0,\ \nu\in {\mathcal S}.$$ From (\[odd-solution\]) and $v(\pm l)=0$ it follows that $$\label{E2} \int_{-l}^l(\overline {u}_{x_1} v_{x_1}+W^\prime(\overline{u}) v)dx_1= \int_{-l}^l(-\overline{u}_{x_1x_1}+W^\prime(\overline{u}))v dx_1 =0.$$ Therefore, for $v\in {\mathcal B}_l,$ we can rewrite $E_l(v)$ in the form $$\label{E3} \begin{array}{l} \displaystyle{ E_l(v)=\int_{-l}^l \Big (\frac 1 2 W^{\prime\prime}(\overline{u})v^2+\frac{v_{x_1}^2} 2\Big )dx_1 }\\ \hspace{1 cm}\displaystyle{ +\int_{-l}^l\Big [W(\overline {u}+v)-W(\overline{u}) -W^\prime(\overline{u})v-\frac 1 2 W^{\prime\prime}(\overline{u})v^2 \Big ] dx_1,} \end{array}$$ where we have also added and subtracted $\frac 1 2 W^{\prime\prime}(\overline{u})v^2$.
By differentiating (\[odd-solution\]) we see that $\bar{u}_{x_1}$ is an eigenfunction corresponding to the eigenvalue $\lambda=0$ for the operator $L$ defined by $$L:W^{1,2}( {\mathbb{R}})\subset L^2( {\mathbb{R}})\rightarrow L^2( {\mathbb{R}}),$$ $$Lw:= -w_{x_1x_1} +W^{\prime\prime}(\overline{u})w.$$ Since $\bar{u}$ is increasing and odd, $\bar{u}_{x_1}$ is positive and even. On the other hand the assumption $W^{\prime\prime}(\pm 1)>0$ implies, see e.g. Theorem A.2 of [@Henry] (pag. 140), that the essential spectrum of $L$ is contained in $[a,+\infty)$ for some $a>0$. Therefore, if we restrict to the subset of odd functions we can conclude that there exists a positive constant $c_1$ such that $$\label{E4} \int_{-\infty}^{+\infty } \Big (\frac 1 2 W^{\prime\prime}(\overline{u})\phi^2+\frac{\phi_{x_1}^2} 2\Big )dx_1= \frac{1}{2}\int_{-\infty }^{+\infty} (L\phi ) \phi dx_1 \ge c_1^2\int_{-\infty}^{+\infty } \phi^2 dx_1,\;\text{ for all }\;\phi\in W_{\rm odd}^{1,2}({\mathbb{R}}).$$ In particular, given $v\in {\mathcal B}_l,$ we can apply (\[E4\]) to the trivial extension $\tilde{v}$ of $v$ to obtain $$\label{E4bis} \int_{-l}^{+l } \Big (\frac 1 2 W^{\prime\prime}(\overline{u})v^2+\frac{v_{x_1}^2} 2\Big )dx_1\ge c_1^2\int_{-l}^{+l } v^2 dx_1,\;\text{ for all }\;v\in{\mathcal B}_l.$$ Since $v\in{\mathcal B}_l$ implies $v(-l)=0$, we have $v^2(x_1)=2\int_{-l}^{x_1}v(s)v_{x_1}(s) ds$ and therefore $$\label{NG} \Vert v\Vert_{L^\infty(-l,l)}\leq \sqrt{2}\Vert v\Vert_l^{1/2}\Vert v\Vert_{1,l}^{1/2}\leq C\Vert v\Vert_l^{1/2},\;\text{ for }\;v\in{\mathcal B}_l,$$ with $C=\sqrt{2 M^{\prime\prime}}$. Fix $q_0>0$ and let $\overline{W}^{\prime\prime\prime}=\max_{\vert s\vert\leq 1+C q_0^{1/2}}\vert W^{\prime\prime\prime}(s)\vert$. 
Then, for some map $x_1\rightarrow\theta(x_1)\in(0,1)$, we have $$\begin{aligned} \label{E5}\\\nonumber &&\left\vert\int_{-l}^l(W(\overline {u}+v)-W(\overline{u}) -W^\prime(\overline{u})v-\frac 1 2 W^{\prime\prime}(\overline{u})v^2) d x_1\right \vert =\left \vert\int_{-l}^l W^{\prime\prime\prime}(\overline{u}+\theta v)\frac {v^3} 6 d x_1\right\vert\\\nonumber &&\hskip6cm\leq\frac{1}{6}C\overline{W}^{\prime\prime\prime}q_0^{1/2}\int_{-l}^lv^2 d x_1,\;\text{ for }\;\Vert v\Vert_l\leq q_0.\end{aligned}$$ From (\[E3\]), (\[E4bis\]) and (\[E5\]), if we choose $q_0>0$ so small that $C\overline{W}^{\prime\prime\prime}q_0^{1/2}\leq 3c_1^2$, it follows $$E_l(q\nu)=E_l(v)\ge \frac 1 2 c_1^2\int_{-l}^l v^2 dx_1,\;\text{ for }\;v\in{\mathcal B}_l,\;0<q=\Vert v\Vert_l\leq q_0 ,$$ that is (\[stimaa\]). To show (\[stimab\]) let us consider the minimization problem $$\label{Pstar} \min_{ \begin{array}{l} v\in {\mathcal B}_l \\ \Vert v\Vert_l\ge q_0 \end{array} } E_l(v)\,.$$ It is easy to construct a smooth odd map $w\in{\mathcal B}_l$ that satisfies the constraint $\Vert w\Vert_l\ge q_0$. Therefore there exists a minimizing sequence $\{v_j\}\subset{\mathcal B}_l$ that satisfies $\Vert v_j\Vert_l\ge q_0,\;j\in{{{\rm I} \kern -.15em {\rm N} }},$ and $$\begin{aligned} \label{bound-constraint} E_l(v_j)\leq E_l(w),\;j\in{{{\rm I} \kern -.15em {\rm N} }}.\end{aligned}$$ From (\[bound-constraint\]) and standard arguments from variational calculus it follows that there is $\overline{v}_l\in{\mathcal B}_l$ and a subsequence $\{v_{j_h}\}$ such that $$\begin{aligned} \label{liminf-seq} &&\liminf_{h\rightarrow+\infty}E_l(v_{j_h})\geq E_l(\overline{v}_l),\\\nonumber &&\lim_{h\rightarrow+\infty}\Vert v_{j_h}- \overline{v}_l\Vert_l=0.\end{aligned}$$ It follows $\Vert \overline{v}_l\Vert_l\ge q_0$ and $\overline{v}_l$ is a minimizer of (\[Pstar\]). 
Since $E_l(0)=0$ and $v=0$ is the unique minimizer of $E_l$ on ${\mathcal B}_l,$ this implies $E_l(\overline {v}_l)=\alpha_l>0,$ and therefore $$\label{E6} E_l(q\nu )\ge \alpha_l, \quad\mbox{for}\ q\ge q_0.$$ Note that $\alpha_l$ is nonincreasing in $l.$ Indeed, if $l_1<l_2$ and $v\in {\mathcal B}_{l_1},$ then the trivial extension $\tilde{v}$ of $v$ to $[-l_2,l_2]$ satisfies $E_{l_2}(\tilde{v})=E_{l_1}(v)$ and belongs to ${\mathcal B}_{l_2}.$ Therefore, there exists $\lim_{l\rightarrow +\infty} \alpha_l.$ We claim that $$\label{limitpos} \lim_{l\rightarrow +\infty} \alpha_l=\alpha >0.$$ Let $\{l_k\}_k$ be a sequence of positive numbers such that $l_k\rightarrow +\infty$ as $k\rightarrow +\infty$. Let $\overline{v}_k$ be a minimizer of problem (\[Pstar\]) for $l=l_k$ and let $\tilde{\overline{v}}_k:{\mathbb{R}}\rightarrow{\mathbb{R}}$ be the trivial extension of $\overline{v}_k.$ Passing to a subsequence if necessary, we may assume that the sequence $\{\tilde{\overline{v}}_k\}_k$ converges in $L^2({\mathbb{R}})$ and weakly in $W^{1,2}({\mathbb{R}})$ to a map $\overline{v}$ which satisfies $\Vert\overline{v}\Vert_\infty\geq q_0$ and, by lower semicontinuity of $E_\infty$, $\alpha\geq E_\infty(\overline{v})$. Since $v=0$ is the unique minimizer of $E_\infty ,$ this implies $\alpha\geq E_\infty(\overline{v})>0$ and proves (\[limitpos\]).
From (\[E6\]) we then deduce $$E_l (q\nu)\ge \alpha =\frac 1 2 c_2^2 q_0^2,$$ for a suitable constant $c_2$ independent of $l.$ Then both (\[stimaa\]) and (\[stimab\]) hold with $c:=\min \{c_1, c_2\}.$ To prove (\[stimac\]) we note that setting $v=q\nu$ in (\[E3\]) yields $$\label{E31} \begin{array}{l} \displaystyle{ E_l(q\nu)=q^2\int_{-l}^l \Big (\frac 1 2 W^{\prime\prime}(\overline{u})\nu^2+\frac{\nu_{x_1}^2} 2\Big )dx_1 }\\ \hspace{1 cm}\displaystyle{ +\int_{-l}^l\Big [W(\overline {u}+q\nu)-W(\overline{u}) -q W^\prime(\overline{u})\nu-\frac 1 2 q^2W^{\prime\prime}(\overline{u})\nu^2 \Big ] dx_1,} \end{array}$$ which via (\[E4bis\]) implies $$\begin{aligned} D_{qq}E_l(q\nu)|_{q=0}=\int_{-l}^l \Big ( W^{\prime\prime}(\overline{u})\nu^2+\nu_{x_1}^2\Big )dx_1\geq 2c_1^2.\end{aligned}$$ Since, by (\[NG\]) and the argument in (\[E5\]), $D_{qq}E_l(q\nu)\geq 2c_1^2-C\overline{W}^{\prime\prime\prime}q^{1/2}$, the estimate (\[stimac\]) follows for $0<q\leq q_0$ after reducing $q_0$ if necessary. This concludes the proof. For $r>0, l>0$ and $\eta\in{\mathbb{R}},$ we denote by $C_l^r(\eta)$ the set $$\label{Clr} C_l^r(\eta):= \{(x_1, x_2)\in{\mathbb{R}}^2\,:\, \vert x_1\vert< l,\, \vert x_2-\eta\vert< r\}.$$ In the following, whenever possible, we assume that by a translation we can reduce to the case $\eta=0$ and write simply $C_l^r$ instead of $C_l^r(0)$. The introduction of the map $E_l$ allows us to represent the energy $J_{C_l^r}(v)$ of an odd map $v:C_l^r\rightarrow{\mathbb{R}}$ that satisfies $v(x)=\bar{u}(x_1),\;\text{ for }\;x_1=\pm l,\;\vert x_2\vert<r$ in a particular form that we now derive. We have $$\begin{aligned} \label{first-energy} J_{C_l^r}(v)=\frac{1}{2}\int_{-r}^r\int_{-l}^l\vert v_{x_2}\vert^2 dx_1 dx_2+\int_{-r}^r E_l(v-\bar{u}) dx_2+\int_{-r}^re_l(\bar{u}) dx_2.\end{aligned}$$ If we set $$q^v(x_2):=\|v(\cdot,x_2)-\bar{u}(\cdot)\|_l>0$$ then $v-\bar{u}=q^v\nu^v$ with $\nu^v=\frac{v-\bar{u}}{\|v-\bar{u}\|_l}$.
We observe that $v_{x_2}=q_{x_2}^v\nu^v+q^v\nu_{x_2}^v$ implies $$\begin{aligned} \label{kinetic-decomposition} \int_{-l}^l\vert v_{x_2}\vert^2 dx_1=\vert q_{x_2}^v\vert^2+(q^v)^2\int_{-l}^l\vert\nu_{x_2}^v\vert^2 dx_1,\end{aligned}$$ where we have also used $\int_{-l}^l\nu_{x_2}^v\nu^v dx_1=0$. It follows $$\begin{aligned} \label{last-energy0} \hspace{0.8cm} J_{C_l^r}(v)=\frac{1}{2}\int_{-r}^r(\vert q_{x_2}^v\vert^2+(q^v)^2\|\nu_{x_2}^v\|_l^2) dx_2+\int_{-r}^rE_l(q^v\nu^v) dx_2+\int_{-r}^re_l(\bar{u}) dx_2.\end{aligned}$$ Assume now that $w:C_l^r\rightarrow{\mathbb{R}}$ is of the form $$w(x_1,x_2)=\bar{u}(x_1)+q^w(x_2)\nu^v(x_1,x_2) ;$$ then we have $$\begin{aligned} \label{last-energy} \hspace{0.5cm} J_{C_l^r}(w)=\frac{1}{2}\int_{-r}^r(\vert q_{x_2}^w\vert^2+(q^w)^2\|\nu_{x_2}^v\|_l^2) dx_2+\int_{-r}^rE_l(q^w\nu^v) dx_2+\int_{-r}^re_l(\bar{u}) dx_2.\end{aligned}$$ For later reference we state \[linfty\] Let $f:[-l,l]\rightarrow{\mathbb{R}}$ be a Lipschitz continuous function satisfying $$\begin{aligned} \label{assumption} \vert f(s)\vert+\vert f^\prime(s)\vert\leq K e^{-k\vert s\vert},\;\text{ for }\; s\in(-l,l).\end{aligned}$$ Then, there is a constant $C_2>0$ independent of $l\geq1$ such that $$\begin{aligned} \label{linfty1} \|f\|_{L^\infty}\leq C_2\|f\|_l^\frac{2}{3}\end{aligned}$$ From (\[assumption\]) there is $\bar{s}\in[-l,l]$ such that $\vert f(s)\vert\leq m:=\vert f(\bar{s})\vert,\;s\in[-l,l]$. From this and $\vert f^\prime(s)\vert\leq K$ it follows $$\vert f(s)\vert\geq m-K\vert s-\bar{s}\vert,\;\text{ for }\;s\in[-l,l]\cap[\bar{s}-m/K,\bar{s}+m/K]$$ and a simple computation gives (\[linfty1\]). 
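For completeness, the computation behind (\[linfty1\]) can be sketched as follows (our reconstruction; it uses only $l\geq 1$ and the bound on $f^\prime$ from (\[assumption\])):

```latex
% Our reconstruction of the "simple computation" in Lemma \ref{linfty}.
% Set m:=\vert f(\bar{s})\vert=\Vert f\Vert_{L^\infty}. From \vert f\vert\leq Ke^{-k\vert s\vert}
% we get m\leq K, hence m/K\leq 1\leq l and at least one of the intervals
% [\bar{s}-m/K,\bar{s}],\ [\bar{s},\bar{s}+m/K] is contained in [-l,l]. On it,
\vert f(s)\vert\geq m-K\vert s-\bar{s}\vert\geq 0,
% and therefore
\Vert f\Vert_l^2\geq\int_0^{m/K}(m-Kt)^2\,dt=\frac{m^3}{3K},
% which gives (\ref{linfty1}) with C_2=(3K)^{1/3}.
```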
\[elle-zero\] There exist $J_0>0,\ C>0,$ $\ k>0$ and a map $(0,\infty)\ni r\rightarrow l_r>0$ such that, given $r>0$, if $l\ge l_r$ and $$\label{Rlarge} C_l^r\subset\Omega,\;\;d(C_l^r, \partial \Omega)>l ,$$ then there is a Lipschitz continuous function $v$ with the following properties: (i) : $v(x)=\overline{u} (x_1),\quad$ for $x\in \partial C_{l+\delta/2}^{r+\delta/2};$ (ii) : $v(x)=u(x),\quad$ for $x\in \overline{C_l^r}\cup(\Omega\setminus C_{l+\delta}^{r+\delta});$ (iii) : $\|v(\cdot,x_2)-u(\cdot,x_2)\|_{l+\delta/2}\leq C e^{-k l},\;\text{ for }\;x_2\in[-r,r],$ (iv) : $J_{\overline{C_{l+\delta}^{r+\delta}}\setminus C_l^r}(v)-J_{\overline{C_{l+\delta}^{r+\delta}}\setminus C_l^r}(u)\le J_0,$ where $\delta>0$ is a fixed constant. We set $$\begin{aligned} \label{cr-cr} v=u\;\text{ for }\;x\in \overline{C_l^r}\cup(\Omega\setminus C_{l+\delta}^{r+\delta}).\end{aligned}$$ To define $v$ in $C_{l+\delta}^{r+\delta}\setminus\overline{C_l^r}$ let $S_1\subset{\mathbb{R}}^2$ be the sector $S_1=\{x:x_1\geq l-r,\;\vert x_2\vert< x_1-l+r\}$ and let $(\rho,\theta)$ be polar coordinates in $S_1$ with origin in the vertex $(l-r,0)$ of $S_1$ and polar axis parallel to $x_1$. We let $x(\rho,\theta)$ denote the point of $S_1$ with polar coordinates $(\rho,\theta)$. We define $v$ in the trapezoid $T_1:=(C_{l+\delta}^{r+\delta}\setminus\overline{C_l^r})\cap S_1$ by setting $$\label{defv1} \begin{array}{l} v(x(\rho,\theta)):= \Big ( 1-\Big\vert 1-2\frac {\rho-\rho_1(\theta)} {\rho_2(\theta)-\rho_1(\theta)}\Big\vert \Big )\overline{u} (l+\delta/2) +\Big\vert 1-2\frac {\rho-\rho_1(\theta)} {\rho_2(\theta)-\rho_1(\theta)}\Big\vert u(x(\rho,\theta)),\\\\ \medskip \quad \hspace{6.5 cm}\mbox{\rm for }\ \rho\in (\rho_1(\theta),\rho_2(\theta)),\: \vert\theta\vert\leq\frac{\pi}{4}, \end{array}$$ where $\rho_1(\theta)$ and $\rho_2(\theta)$ are defined by the conditions $x_1(\rho_1(\theta),\theta)=l$ and $x_1(\rho_2(\theta),\theta)=l+\delta$.
In the trapezoid $T_2:=(C_{l+\delta}^{r+\delta}\setminus\overline{C_l^r})\cap S_2$, $S_2=\{x:x_2\geq r-l,\;\vert x_1\vert< x_2-r+l\}$ we define $$\label{defv2} \begin{array}{l} v(x(\varrho,\phi)):= \Big ( 1-\Big\vert 1-2\frac {\varrho-\varrho_1(\phi)} {\varrho_2(\phi)-\varrho_1(\phi)}\Big\vert \Big )\overline{u} (x_1(\frac{\varrho_1(\phi)+\varrho_2(\phi)}{2},\phi))\\\\ \hspace{4cm} +\Big\vert 1-2\frac {\varrho-\varrho_1(\phi)} {\varrho_2(\phi)-\varrho_1(\phi)}\Big\vert u(x(\varrho,\phi)), \quad \mbox{\rm for }\ \varrho\in (\varrho_1(\phi),\varrho_2(\phi)),\: \vert\phi\vert\leq\frac{\pi}{4}, \end{array}$$ where $(\varrho,\phi)$ are polar coordinates in $S_2$ and $\varrho_1(\phi)$, and $\varrho_2(\phi)$ are defined by the conditions $x_2(\varrho_1(\phi),\phi)=r$ and $x_2(\varrho_2(\phi),\phi)=r+\delta$. In the remaining two trapezoids we define $v$ in a similar way. The maps defined by (\[cr-cr\]), (\[defv1\]) and (\[defv2\]) are Lipschitz continuous in the closure of their domains of definition and join continuously on the boundary of $C_{l+\delta}^{r+\delta}\setminus\overline{C_l^r}$ and along the line $\theta=\pi/4$. Indeed (\[defv1\]) and (\[defv2\]) yield $$\begin{aligned} \left.\begin{array}{l} \rho=\rho_i(\theta)\\ \varrho=\varrho_i(\phi) \end{array}\right. \Rightarrow\left.\begin{array}{l} v(x(\rho_i(\theta),\theta))=u(x(\rho_i(\theta),\theta)),\\ v(x(\varrho_i(\phi),\phi))=u(x(\varrho_i(\phi),\phi)) \end{array}\right.\;i=1,2.\end{aligned}$$ and $$\begin{aligned} && x(s+\rho_1(\pi/4),\pi/4)=x(s+\varrho_1(\pi/4),\pi/4),\\\Rightarrow\\ &&v(x(s+\rho_1(\pi/4),\pi/4))=v(x(s+\varrho_1(\pi/4),\pi/4)),\;\;s\in[0,\sqrt{2}\delta].\end{aligned}$$ Therefore we conclude that, as defined, the map $v$ is uniformly Lipschitz continuous on $\Omega$. The fact that $v$ satisfies (i) follows from (\[defv1\]) and (\[defv2\]) that imply $$\begin{aligned} \left.\begin{array}{l} \rho=(\rho_1(\theta)+\rho_2(\theta))/2\\ \varrho=(\varrho_1(\phi)+\varrho_2(\phi))/2 \end{array}\right. 
\Rightarrow\left.\begin{array}{l} v(x((\rho_1(\theta)+\rho_2(\theta))/2,\theta))=\overline{u}(l+\delta/2),\\ v(x((\varrho_1(\phi)+\varrho_2(\phi))/2,\phi))=\overline{u}(x_1((\varrho_1(\phi)+\varrho_2(\phi))/2)). \end{array}\right.\end{aligned}$$ To prove (iii) and (iv) we use the estimate $$\begin{aligned} \label{exp-ubar} \vert\overline{u}(s)-1\vert+\vert\overline{u}^\prime(s)\vert\leq K e^{-k s},\;\text{ for }\;s\geq 0,\end{aligned}$$ and the estimate for the solution $u$ established in (\[exp-upgrade\]). Set $\lambda:=\Big\vert 1-2\frac {\rho-\rho_1(\theta)} {\rho_2(\theta)-\rho_1(\theta)}\Big\vert\in[0,1]$; then (\[defv1\]) implies $$\begin{aligned} \label{with-lambda} v-1=(1-\lambda)(\overline{u}(l+\delta/2)-1)+\lambda (u(x(\rho,\theta))-1).\end{aligned}$$ This, $x_1(\rho,\theta)>l\;\text{ on }\;T_1$, (\[exp-ubar\]) and (\[exp-upgrade\]) imply $\vert v-1\vert\leq K e^{-kl}\;\text{ on }\;T_1$ and therefore (iii) follows. Moreover, since $W(s)=O((s-1)^2)$ for $s-1$ small, we obtain $$\begin{aligned} \label{wv-wu} \int_{T_1}(W(v)-W(u)) dx\leq \int_{T_1}W(v) dx\leq C r \delta e^{-\gamma l},\end{aligned}$$ where $\gamma$ and $C$ denote generic positive constants independent of $r$ and $l$. Differentiating (\[defv1\]) in $x$ yields $$\begin{aligned} \label{dv-du} \nabla v = (1-\lambda)\overline{u}^\prime(l+\delta/2)e_1+\lambda\nabla u(x(\rho,\theta)) -(\overline{u}(l+\delta/2)-u(x(\rho,\theta)))\nabla\lambda,\end{aligned}$$ where $e_i,\;i=1,2,$ denotes the standard basis of ${\mathbb{R}}^2$. Since $\nabla\lambda$ is bounded on $T_1$ with a bound independent of $r$ and $l$, using again (\[exp-ubar\]) and (\[exp-upgrade\]) we see that (\[dv-du\]) implies $$\label{dv-dv1} \int_{T_1}\frac{1}{2}(\vert\nabla v\vert^2-\vert\nabla u\vert^2) dx\leq\int_{T_1}\frac{1}{2}\vert\nabla v\vert^2 dx \leq C r \delta e^{-\gamma l}.$$ To estimate $J_{T_2}(v)-J_{T_2}(u)$ we proceed in a similar way.
We set $\lambda=\Big\vert 1-2\frac {\varrho-\varrho_1(\phi)} {\varrho_2(\phi)-\varrho_1(\phi)}\Big\vert$ and write equations analogous to (\[with-lambda\]) and (\[dv-du\]). From these equations, using as before the estimates (\[exp-ubar\]) and (\[exp-upgrade\]), and observing that $$\label{x1-large} \varrho\in(\varrho_1(\phi), \varrho_2(\phi))\;\Rightarrow\;\vert x_1(\varrho,\phi)\vert\geq\vert x_1(\varrho_1(\phi),\phi)\vert=l\vert\tan{\phi}\vert,$$ it follows that there is a constant $C_0$ independent of $r$ and $l$ such that $J_{T_2}(v)-J_{T_2}(u)\leq J_{T_2}(v)\leq C_0$. This, (\[wv-wu\]) and (\[dv-dv1\]) imply $$\label{ver1} J_{\overline{C_{l+\delta}^{r+\delta}}\setminus C_l^r}(v)-J_{\overline{C_{l+\delta}^{r+\delta}}\setminus C_l^r}(u)\leq J_{\overline{C_{l+\delta}^{r+\delta}}\setminus C_l^r}(v)\leq 2(C_0+C r \delta e^{-\gamma l})$$ and (iv) follows with $J_0=4 C_0$ and $l_r=\frac{1}{\gamma}\ln{\frac{C r \delta}{C_0}}$. Arguments analogous to the ones in the proof of Lemma \[elle-zero\] prove \[elle-zero1\] Assume that $C_l^r$ satisfies $(\ref{Rlarge}).$ Then there is a Lipschitz continuous function $v$ with the following properties: (i) : $v(x)=\overline{u} (x_1),\quad$ for $x\in\{-l-\delta/2,l+\delta/2\}\times[-r-\delta/2,r+\delta/2],$ (ii) : $v(x)=u(x),\quad$ for $x\in\Omega\setminus((-l-\delta,-l)\cup(l,l+\delta))\times[-r-\delta,r+\delta], $ (iii) : $\|v(\cdot,x_2)-u(\cdot,x_2)\|_{l+\delta/2}\leq C e^{-\gamma l},\;\text{ for }\;x_2\in[-r-\delta/2,r+\delta/2],$ (iv) : $J(v)-J(u)\le C r e^{-\gamma l},$ for some constants $C, \gamma>0$.
\[a-set\] Let $q_0$ and $c$ be as in Lemma $\ref{lemma1}.$ Given $\overline{q}< q_0,$ fix $r>0$ such that $$\label{venti1} \frac 1 2 c^2 \overline{q}^2 r>8 J_0,$$ where $J_0$ is the constant in (iv) in Lemma $\ref{elle-zero}.$ There is $l(\bar{q})>0$ such that, provided $(\ref{Rlarge})$ is satisfied with $l\geq\max\{l_r,l(\bar{q})\}$, there exist $a_-\in(-r,-r/2)$ and $a_+\in(r/2,r)$ such that $$\begin{aligned} \label{the-lines} \|u(\cdot,a_\pm)-\bar{u}\|_{l+\delta/2}<\bar{q}.\end{aligned}$$ Let $v$ be the map constructed in Lemma \[elle-zero\]. For each $\eta\in[-r,r/2]$ let $\mathcal{A}_\eta\subset{\mathbb{R}}$ be the set $$\label{calA} {\mathcal A}_\eta :=\Big \{ x_2\in(\eta,\eta+r/2)\,:\,q^v(x_2)=\|v(\cdot,x_2)-\overline{u}\|_{l+\delta/2} \geq\frac{\bar{q}}{2}\Big \}.$$ Then, we have $$\label{venti2} J_{C_{l+\delta /2}^{r+\delta /2}}(v)-J_{C_{l+\delta /2}^{r+\delta /2}}(\hat v )\ge \left\vert {\mathcal A}_\eta\right\vert \frac 1 2 c^2\frac{\overline{q}^2}{4},\;\text{ for }\;\eta\in[-r,r/2] \,,$$ where $\hat v$ is the function that coincides with $v$ outside $C_{l+\delta /2}^{r+\delta /2}$ and with $\bar{u}$ inside $C_{l+\delta /2}^{r+\delta /2}.$ Note that, since $v$ coincides with $\bar{u}$ on the boundary of $C_{l+\delta /2}^{r+\delta /2}$, $\hat{v}$ is a Lipschitz map. To prove (\[venti2\]), we observe that from the definition of $\hat{v}$ and of $E_l$ in (\[Econl\]) we have (with $w=v-\bar{u}$) $$\begin{array}{l} \displaystyle{J_{C_{l+\delta /2}^{r+\delta /2}}(v)-J_{C_{l+\delta /2}^{r+\delta /2}}(\hat v)= \frac 1 2\int_{C_{l+\delta /2}^{r+\delta /2}}\vert w_{x_2}\vert^2 dx_1 dx_2+\int_{-r-\delta /2}^{r+\delta /2}E_{l+\delta /2}(w) dx_2 }\\ \hspace{4.5 cm}\displaystyle{ \geq\int_{-r}^r E_{l+\delta /2}(w) dx_2\geq\frac{1}{2}\vert{\mathcal A}_\eta\vert c^2\frac{\overline{q}^2}{4},\;\text{ for }\;\eta\in[-r,r/2] } \end{array}$$ where we have also used (\[calA\]) and (\[stimaa\]), (\[stimab\]) in Lemma \[lemma1\].
Then, since the minimality of $u$ and (iv) in Lemma \[elle-zero\] imply $J_{C_{l+\delta /2}^{r+\delta /2}}(v)-J_{C_{l+\delta /2}^{r+\delta /2}}(\hat v)\leq J_0$, from (\[venti2\]) and (\[venti1\]) it follows $$\label{venti3} 0\ge J_{C_{l+\delta /2}^{r+\delta /2}}(v)-J_{C_{l+\delta /2}^{r+\delta /2}}(\hat v)-J_0\ge \left\vert {\mathcal A}_\eta\right\vert \frac 1 2 c^2\frac{\overline{q}^2}{4}-J_0> (\left\vert {\mathcal A}_\eta\right\vert-\frac{r}{2} )\frac 1 2 c^2 \frac{\overline{q}^2}{4},\;\text{ for }\;\eta\in[-r,r/2]$$ and therefore $$\label{venti5} \left\vert {\mathcal A}_\eta\right\vert <\frac{r}{2},\;\text{ for }\;\eta\in[-r,r/2]\,.$$ This inequality and the definition (\[calA\]) of ${\mathcal A}_\eta$ imply the existence of $a_-\in(-r,-r/2)\setminus{\mathcal A}_{-r}$ and $a_+\in(r/2,r)\setminus{\mathcal A}_{r/2}$ such that $$\begin{aligned} \|v(\cdot,a_\pm)-\bar{u}\|_{l+\delta /2}<\frac{\bar{q}}{2}.\end{aligned}$$ This and (iii) in Lemma \[elle-zero\] imply (\[the-lines\]) provided $l\geq l(\bar{q}):=\frac{1}{k}\ln{\frac{2 C}{\bar{q}}}$. \[de-exists\] Given $\epsilon>0$ there is $l_\epsilon$ such that $$\begin{aligned} x\in\Omega,\;\;d(x,\partial\Omega)\geq l_\epsilon\;\Rightarrow\;\vert u(x)-\bar{u}(x_1)\vert\leq\epsilon.\end{aligned}$$ Set $d_\epsilon:=\frac{1}{k}\ln{\frac{2K}{\epsilon}}$ and assume that $d(x,\partial\Omega^+)\geq d_\epsilon$. Then (\[u-properties\])$_2$ and (\[exp-ubar\]) imply $$\vert u(x)-\bar{u}(x_1)\vert\leq\vert u(x)-1\vert+\vert1-\bar{u}(x_1)\vert\leq\epsilon.$$ This and the oddness of $u$ imply that it suffices to consider the points $x\in\Omega^+$ which have $d(x,\partial\Omega)\geq d_\epsilon$ and $x_1\in[0,d_\epsilon]$. Assume $\tilde{x}\in\Omega^+$ is a point with these properties that satisfies $\vert u(\tilde{x})-\bar{u}(\tilde{x}_1)\vert>\epsilon$. Then from (\[emme-bound\]) and (\[exp-ubar\]) it follows $$\label{extra} \vert u(\tilde x)-\bar{u}(\tilde x_1)\vert - \vert u(x)-\bar{u}(x_1)\vert \le 2\mu (\vert x_1-\tilde x_1\vert +\vert x_2-\tilde x_2\vert ),$$ where $\mu :=\max\{M^{\prime\prime},K\}$.
Then, $$\begin{aligned} \label{l2-big} \vert u(x)-\bar{u}(x_1)\vert >\frac {\epsilon } {2},\quad \text{ for }\ \vert x_1- \tilde{x}_1\vert<\epsilon/{8 \mu},\ \vert x_2-\tilde{x}_2\vert<\epsilon/{8 \mu}.\end{aligned}$$ From this inequality it follows $$\begin{aligned} \label{l2-big1} \|u(\cdot,x_2)-\bar{u}\|_{l+\delta /2}\geq \frac{1}{4\sqrt{\mu}}\epsilon^\frac{3}{2},\;\text{ for }\;\vert x_2-\tilde{x}_2\vert<\epsilon/{8\mu}\end{aligned}$$ and thus, recalling Lemma \[elle-zero1\] (iii), $$\begin{aligned} \label{l2-big2} \|v(\cdot,x_2)-\bar{u}\|_{l+\delta /2}\geq \frac{1}{8\sqrt{\mu}}\epsilon^\frac{3}{2},\;\text{ for }\;\vert x_2-\tilde{x}_2\vert<\epsilon/{8\mu}.\end{aligned}$$ Set $q^*:=\frac{1}{4\sqrt{ \mu}}\epsilon^\frac{3}{2}$ and $\bar{q}=q^*/N$ where $N>0$ is a fixed number to be chosen later. In the remaining part of the proof we consider a certain number of lower bounds for $l$ and we always assume that (\[Rlarge\]) is satisfied for $l> l_M$ where $l_M$ represents the maximum of the values $l_r, l(\bar{q}),\dots$ introduced up to the point considered in the proof.
From Lemma \[a-set\], if $N$ is such that $\bar{q}<q_0,$ there is $r$ such that, for $l$ sufficiently large, there exist $a_-\in(\tilde{x}_2-r,\tilde{x}_2-r/2)$ and $a_+\in(\tilde{x}_2+r/2,\tilde{x}_2+r)$ with the property $$\begin{aligned} \label{small-at-a} \|u(\cdot,a_\pm)-\bar{u}\|_{l+\delta/2}<\bar{q}.\end{aligned}$$ Moreover, from Lemma \[elle-zero1\], for $l$ sufficiently large, the map $v$ defined in the lemma satisfies $\|u(\cdot,a_\pm)-v(\cdot,a_\pm)\|_{l+\delta/2}<\bar{q}$ and therefore we have $$\begin{aligned} \label{small-at-a1} \|v(\cdot,a_\pm)-\bar{u}\|_{l+\delta/2}<2\bar{q}.\end{aligned}$$ Let $Q:=(-l-\delta/2,l+\delta/2)\times(a_-,a_+)$ and let $w$ be the map defined by $$\begin{aligned} \label{w-def} w=\left\{\begin{array}{l} v,\;\text{ on }\;\Omega\setminus Q,\\ v,\;\text{ on }\;(-l-\delta/2,l+\delta/2)\times\{x_2\},\;x_2\in(a_-,a_+)\\ \hskip4cm\text{ if }\;q^v(x_2)\leq 2\bar{q},\\ \bar{u}+2\bar{q}\nu^v,\;\text{ on }\;(-l-\delta/2,l+\delta/2)\times\{x_2\},\;x_2\in(a_-,a_+)\\ \hskip4cm\text{ if }\;q^v(x_2)> 2\bar{q}.
\end{array}\right.\end{aligned}$$ This definition implies in particular $$\begin{aligned} \|w(\cdot,\tilde{x}_2)-\bar{u}\|_{l+\delta/2}\leq 2\bar{q}=\frac{2}{N}\frac{1}{4\sqrt{\mu}}\epsilon^\frac{3}{2}.\end{aligned}$$ Then Lemma \[linfty\], provided $N$ is chosen sufficiently large, implies $$\begin{aligned} \label{small-at-x2} \vert w(\tilde{x})-\bar{u}(\tilde{x}_1)\vert\leq C_2(\frac{2}{N}\frac{1}{4\sqrt {\mu}})^\frac{2}{3}\epsilon<\epsilon.\end{aligned}$$ On the other hand (\[w-def\]), (\[last-energy0\]) and (\[last-energy\]) imply $$\begin{aligned} \label{e-comparison}\\\nonumber J_Q(v)-J_Q(w) &=&\int_{\{q^v>2\bar{q}\}}[ \frac{1}{2}(\vert q_{x_2}^v\vert^2+((q^v)^2-4\bar{q}^2)\|\nu^v\|_{l+\delta/2}^2) \\\nonumber &\ &\hspace{3cm}+E_{l+\delta/2}(q^v\nu^v)- E_{l+\delta/2}(2\bar{q}\nu^v)] d x_2 \\\nonumber &\geq &\int_{\{q^v>2\bar{q}\}} [E_{l+\delta/2}(q^v\nu^v)-E_{l+\delta/2}(2\bar{q}\nu^v)] dx_2 \\\nonumber &\geq &\int_{\{q^v>q^*\}}[E_{l+\delta/2}(q^*\nu^v)-E_{l+\delta/2}(2\bar{q}\nu^v)] d x_2 .\end{aligned}$$ From (\[stimac\]), for $q\leq q_0$, we have $D_qE_l(q\nu)\geq c^2 q$ and therefore, recalling also that $\bar{q}=q^*/N$, we have $$\begin{aligned} \label{e-e} E_{l+\delta/2}(q^*\nu^v)-E_{l+\delta/2}(2\bar{q}\nu^v) \geq\frac{1}{2}c^2(q^*)^2(1-\frac{4}{N^2})\end{aligned}$$ which via (\[l2-big2\]) yields $$\int_{\{q^v>q^*\}} [E_{l+\delta/2}(q^*\nu^v)-E_{l+\delta/2}(2\bar{q}\nu^v)] dx_2 \geq\frac{1}{2}c^2(q^*)^2(1-\frac{4}{N^2})\frac{\epsilon}{4\mu} .$$ Then, from (\[e-comparison\]) and $q^*=\frac{1}{4\sqrt{\mu}}\epsilon^\frac{3}{2}$ we obtain $$\begin{aligned} J_Q(v)-J_Q(w)\geq\frac{c^2}{128\mu^2}(1-\frac{4}{N^2})\epsilon^4.\end{aligned}$$ From this and Lemma \[elle-zero1\] (iv) it follows $$\begin{aligned} \label{e-e1} J_Q(u)-J_Q(w)=J_Q(u)-J_Q(v)+J_Q(v)-J_Q(w)>0,\end{aligned}$$ provided $l$ satisfies, beside previous lower bounds, $l>l^*$ where $l^*$ is defined by the condition $Cre^{-\gamma l^*}=\frac{c^2}{128\mu^2}(1-\frac{4}{N^2})\epsilon^4$. 
From the above part of the proof it follows that, if we set $l_\epsilon=2 l_M$ and if $\tilde{x}$ is such that $d(\tilde{x},\partial\Omega)\geq l_\epsilon$, then we can construct as before the set $Q$ and the map $w$ that coincides with $u$ outside $Q$ and satisfies (\[small-at-x2\]) and (\[e-e1\]) which contradicts the minimality of $u$. The proof is complete. Theorem \[teo\] follows from Lemma \[de-exists\]. The proof of Theorem \[teo2\] ============================= Basic lemmas ============ \[v-property\] There exist positive constants $c,\;q^*$ such that $$\label{prima} W^{\prime\prime}(q)\,\geq c^2,\text{ for } q\in (-q^*,q^*);$$ $$\label{seconda} \begin{array}{l} W(q)\,\geq\tilde{W}(q_0,q):= W(q_0)+W^\prime(q_0)(q-q_0) ,\\\medskip \hspace{3 cm}\text{ for } (q_0,q)\in (0,q^*)\times(q_0,q^*]\cup(-q^*,0)\times[-q^*,q_0); \end{array}$$ $$\label{terza} \text{\rm sign}(q)W^\prime (q)\ge\text{\rm sign}(q)c^2q\geq 0 ,\quad\text{ for } q\in (-q^*, q^*);$$ The inequality (\[prima\]) follows immediately from hypothesis (iii). Now, the convexity of $W$ in $(-q^*, q^*)$ implies (\[seconda\]). To prove (\[terza\]) note that, for $q\in (0,q^*),$ $$W^\prime (q)=\int_0^{q}W^{\prime\prime}(t) dt\ge c^2 q.$$ Analogously, for $q\in (-q^*, 0),$ $$W^\prime (q)=-\int_{q}^0W^{\prime\prime}(t) dt\le c^2 q.$$ By reducing the value of $q^*$ if necessary, we can also assume $$\label{vbar} W(q^*\cdot \text{\rm sign}q)\le W(q)\le\overline{W}, \quad\text{ for } \vert q\vert\in[q^*,M_0],$$ where $\overline{W}>0$ is a suitable constant. This follows from assumption (iii) and (\[bounds-udu\]). All the arguments that follow have a local character. Therefore, without loss of generality, in the remaining part of the proof we can assume that $\Omega$ is bounded. 
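As a purely numerical sanity check (not part of the proof), the three inequalities of the lemma can be verified for a concrete model potential. The quartic $W(q)=q^2/2+q^4$ used below is a hypothetical example, not the $W$ of the theorem; it has a nondegenerate minimum at $q=0$ with $W^{\prime\prime}(q)=1+12q^2\geq 1$, so one may take $c=1$ and, say, $q^*=1/2$:

```python
# Hypothetical model potential (NOT the W of the theorem): a nondegenerate
# minimum at q = 0 with W''(q) = 1 + 12 q^2 >= 1, so c = 1 and q* = 1/2 work.
W   = lambda q: q**2 / 2 + q**4
Wp  = lambda q: q + 4 * q**3            # W'
Wpp = lambda q: 1 + 12 * q**2           # W''
c2, qstar = 1.0, 0.5

sign = lambda q: (q > 0) - (q < 0)
qs = [-qstar + i * qstar / 500 for i in range(1001)]

# (prima): W'' >= c^2 on (-q*, q*).
assert all(Wpp(q) >= c2 for q in qs)

# (terza): sign(q) W'(q) >= sign(q) c^2 q >= 0 on (-q*, q*).
assert all(sign(q) * Wp(q) >= sign(q) * c2 * q >= -1e-12 for q in qs)

# (seconda): W lies above the tangent line W(q0) + W'(q0)(q - q0) taken at
# any q0 in (0, q*), i.e. convexity on the interval.
for q0 in (0.1, 0.3, 0.45):
    assert all(W(q) >= W(q0) + Wp(q0) * (q - q0) - 1e-12 for q in qs)

print("(prima), (seconda), (terza) hold for the model potential")
```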
\[basic\] Assume $R>0$ and $B_{x_0,R}\subset\Omega$ and let $\varphi:B_{x_0,R}\rightarrow{\mathbb{R}}$ be the solution of $$\label{phi-comparison} \left\{ \begin{array}{l} \Delta \varphi = c^2\varphi, \text{ in } B_{x_0,R},\medskip\\ \varphi = \bar{q}, \text{ on } \partial B_{x_0,R}, \end{array}\right.$$ where $\bar{q}\in(0,q^*]$. Assume that $u\in W^{1,2}(\Omega)$ is a continuous map such that $$\begin{aligned} \vert u\vert\leq \bar{q}, \text{ for } x\in\overline{B}_{x_0,R}.\end{aligned}$$ Then there exists a map $v\in W^{1,2}(\Omega)$ that satisfies: $$\begin{aligned} \label{v-less-phi} &&v=u, \text{ for } x\in\Omega\setminus B_{x_0,R},\\\nonumber &&\vert v\vert\leq\varphi, \text{ for } x\in\overline{B}_{x_0,R}\end{aligned}$$ and $$\begin{aligned} \label{j-estimate} \hskip.5cm J_\Omega(u)- J_\Omega(v)&=& J_{B_{x_0,R}}(u)-J_{B_{x_0,R}}(v)\\\nonumber &\geq&\int_{B_{x_0,R}\cap\{\vert u\vert> \varphi\}}(W(u)-W(\varphi^u)-W^\prime(\varphi^u)(u-\varphi^u))dx,\end{aligned}$$ where $\varphi^u=\text{\rm sign}(u)\varphi$. Let $b>0$ be a number such that $b\leq\min_{x\in B_{x_0,R}}\varphi$. Since $u$ is continuous the set $A_b:=\{x\in B_{x_0,R}:u>b\}$ is open and there exists a function $\rho^+\in W^{1,2}(A_b)$ that minimizes the functional $J_{A_b}(p)=\int_{A_b}(\frac{1}{2}\vert\nabla p\vert^2+W(p))dx$ in the class of functions that satisfy the Dirichlet condition $p=u$ on $\partial A_b$. Since $\frac{\vert\rho^+\vert+\rho^+}{2}$ is also a minimizer we have $\rho^+\geq 0$. We also have $\rho^+\leq \bar{q}$. This follows from (\[terza\]) and (\[vbar\]) which imply that $\min\{\rho^+,\bar{q}\}$ is also a minimizer. The map $\rho^+$ satisfies the variational equation $$\begin{aligned} \label{rho-variation0} \int_{A_b}(\langle\nabla\rho^+,\nabla\eta\rangle+W^\prime(\rho^+)\eta)dx=0,\end{aligned}$$ for all $\eta\in W_0^{1,2}(A_b)\cap L^\infty(A_b)$. 
In particular, if we define $A_b^*:=\{x\in A_b: \rho^+>\varphi\}$, we have $$\begin{aligned} \label{rho-variation} \int_{A_b^*}(\langle\nabla\rho^+,\nabla\eta\rangle+W^\prime(\rho^+)\eta)dx=0,\end{aligned}$$ for all $\eta\in W_0^{1,2}(A_b)\cap L^\infty(A_b)$ that vanish on $A_b\setminus A_b^*$. If we take $\eta=(\rho^+-\varphi)^+$ in (\[rho-variation\]) and use (\[terza\]) we get $$\begin{aligned} \label{rho-variation1} \int_{A_b^*}(\langle\nabla\rho^+,\nabla(\rho^+-\varphi)\rangle+c^2\rho^+(\rho^+-\varphi))dx\leq 0.\end{aligned}$$ This inequality and $$\begin{aligned} \label{phi-variation1} \int_{A_b^*}(\langle\nabla\varphi,\nabla(\rho^+-\varphi)\rangle+c^2\varphi(\rho^+-\varphi))dx=0,\end{aligned}$$ which follows from (\[phi-comparison\]), imply $$\begin{aligned} \label{rho-variation2} \int_{A_b^*}(\vert\nabla(\rho^+-\varphi)\vert^2+c^2(\rho^+-\varphi)^2)dx\leq 0.\end{aligned}$$ That is, $\mathcal{H}^n(A_b^*)=0$, which, together with $\rho^+\leq\varphi$ on $A_b\setminus A_b^*,$ shows that $$\begin{aligned} \label{rho-min-phi} \rho^+\leq\varphi, \text{ for } x\in A_b.\end{aligned}$$ If we set $A_b^-:=\{x\in B_{x_0,R}:u<-b\}$ and $\rho^-\in W^{1,2}(A_b^-)$ is a minimizer of $J_{A_b^-}$ in the set of $W^{1,2}(A_b^-)$ maps that have the same trace as $u$ on $\partial A_b^-$, the argument above can be applied to $\rho^-$ to obtain $$\begin{aligned} \label{rho-max-phi} \rho^-\geq -\varphi, \text{ for } x\in A_b^-.\end{aligned}$$ Let $v\in W^{1,2}(\Omega)$ be the map defined by setting $$\begin{aligned} \label{v-definition} v=\left\{\begin{array}{l} u,\text{ for } x\in\Omega\setminus (A_b\cup A_b^-),\\ \min\{u,\rho^+\}, \text{ for } x\in A_b,\\ \max\{u,\rho^-\}, \text{ for } x\in A_b^-. \end{array}\right.\end{aligned}$$ This definition implies (\[v-less-phi\]).
Moreover we have $$\begin{aligned} \label{diff-energy} J_\Omega(u)- J_\Omega(v)&=& J_{A_b\cup A_b^-}(u)- J_{A_b\cup A_b^-}(v)\\\nonumber &=& J_{A_b\cap\{\rho^+<u\}}(u)- J_{A_b\cap\{\rho^+<u\}}(\rho^+)\\\nonumber &&+J_{A_b^-\cap\{\rho^->u\}}(u)- J_{A_b^-\cap\{\rho^->u\}}(\rho^-). \end{aligned}$$ From (\[rho-variation0\]) with $\eta=(u-\rho^+)^+$ it follows $$\begin{aligned} \int_{A_b\cap\{\rho^+<u\}}\langle\nabla\rho^+,\nabla(u-\rho^+)\rangle &= &-\int_{A_b\cap\{\rho^+<u\}}W^\prime(\rho^+)(u-\rho^+)dx .\end{aligned}$$ This and the identity $$\begin{aligned} \frac{1}{2}(\vert\nabla u\vert^2-\vert\nabla \rho^+\vert^2)&=&\frac{1}{2}\vert\nabla u-\nabla \rho^+\vert^2 +\langle\nabla\rho^+,\nabla( u-\rho^+)\rangle,\end{aligned}$$ imply $$\begin{aligned} \label{j-difference+} J_{A_b\cap\{\rho^+<u\}}(u)- J_{A_b\cap\{\rho^+<u\}}(\rho^+)\hskip6.5cm\\\nonumber\\\nonumber = \int_{A_b\cap\{\rho^+<u\}}(\frac{1}{2}\vert\nabla u-\nabla\rho^+\vert^2+\langle\nabla\rho^+,\nabla( u-\rho^+)\rangle +W(u)-W(\rho^+))dx\\\nonumber \\\nonumber \geq\int_{A_b\cap\{\rho^+<u\}} (W(u)-W(\rho^+)-W^\prime(\rho^+)(u-\rho^+))dx\\\nonumber \\\nonumber \geq\int_{A_b\cap\{\varphi<u\}} (W(u)-W(\varphi)- W^\prime(\varphi)(u-\varphi))dx,\end{aligned}$$ where we have used $A_b\cap\{\varphi<u\}\subset A_b\cap\{\rho^+<u\}$ and the fact that the function $\tilde W(\cdot, u)$ defined in (\[seconda\]) is increasing on $(0, u).$ In the same way one proves $$\begin{aligned} \label{j-difference-} J_{A_b^-\cap\{\rho^->u\}}(u)- J_{A_b^-\cap\{\rho^->u\}}(\rho^-)\hskip6.5cm\\\nonumber\\\nonumber \geq\int_{A_b^-\cap\{-\varphi>u\}} (W(u)-W(-\varphi)-W^\prime(-\varphi)(u+\varphi))dx.\end{aligned}$$ This inequality and (\[j-difference+\]) imply (\[j-estimate\]). Given $\bar{q}\in(0,q^*)$ define $$\label{rbar} \overline{R}=\frac{q^*-\bar{q}}{M_0}$$ where $M_0$ is the constant in (\[bounds-udu\]). 
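The comparison function $\varphi$ of Lemma \[basic\] becomes explicit in one space dimension, where $\varphi^{\prime\prime}=c^2\varphi$ on $(-R,R)$ with $\varphi(\pm R)=\bar{q}$ gives $\varphi(r)=\bar{q}\cosh(cr)/\cosh(cR)$. The sketch below (a one-dimensional stand-in for the radial problem, with hypothetical values of $c$, $\bar{q}$ and $R$) checks the properties used repeatedly in what follows: $0<\varphi\leq\bar{q}$, monotonicity in $\vert r\vert$, and the exponential interior smallness $\varphi(0)\leq 2\bar{q}e^{-cR}$ that underlies the final decay estimate:

```python
import math

C, QBAR, R = 1.0, 0.25, 6.0   # hypothetical c, q-bar and ball radius

def phi(r):
    """1-D barrier: solves phi'' = c^2 phi on (-R, R) with phi(+-R) = q-bar."""
    return QBAR * math.cosh(C * r) / math.cosh(C * R)

rs = [i * R / 200 for i in range(201)]

# phi solves the equation (finite-difference check of phi'' = c^2 phi) ...
h = 1e-4
for r in rs[1:-1]:
    dd = (phi(r - h) - 2 * phi(r) + phi(r + h)) / h**2
    assert abs(dd - C**2 * phi(r)) < 1e-5
# ... is positive, bounded by q-bar, and increasing in |r| ...
assert all(0 < phi(r) <= QBAR for r in rs)
assert all(phi(a) <= phi(b) for a, b in zip(rs, rs[1:]))
# ... and is exponentially small in the interior: phi(0) <= 2 q-bar e^{-cR}.
assert phi(0.0) <= 2 * QBAR * math.exp(-C * R)
print(f"phi(0) = {phi(0.0):.2e} <= {2 * QBAR * math.exp(-C * R):.2e}")
```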
For later reference we quote \[energy-corollary\] Let $\lambda>0$ be fixed, assume that $R>\overline{R}$ is such that $B_{x_0,R+\lambda/2}\subset \Omega$ and let $u\in W^{1,2}(\Omega)$ be a continuous map that satisfies the condition $$\begin{aligned} \vert u\vert\leq\bar{q}, \text{ for } x\in \partial B_{x_0,R+\lambda/2}.\end{aligned}$$ Then, there exist a constant $k>0$ independent of $R>\overline{R}$ and a map $v\in W^{1,2}(\Omega)$ such that $$\begin{aligned} \label{diff-energy-corollary} && v=u,\;\text{ on }\;\Omega\setminus B_{x_0,R+\lambda/2},\\\nonumber\\\nonumber && J_{\Omega} (u)-J_{\Omega} (v)=J_{B_{x_0,R+\lambda/2}}(u)- J_{B_{x_0,R+\lambda/2}}(v) \geq k\mathcal{H}^n(A_{\bar{q}}\cap B_{x_0,R}),\end{aligned}$$ where $A_{\bar{q}}:=\{x\in\Omega:\vert u\vert>\bar{q}\}$. Let $\hat{u}\in W^{1,2}(\Omega)$ be defined by $$\begin{aligned} \hat{u}=\left\{\begin{array}{l} \bar{q},\;\;\ \text{ on } B_{x_0,R+\lambda/2}\cap \{u>\bar{q}\},\\ -\bar{q},\;\;\text{ on } B_{x_0,R+\lambda/2}\cap \{u<-\bar{q}\}, \\ u,\;\;\text{ otherwise }. \end{array}\right.\end{aligned}$$ Then, using also (\[vbar\]), we have $$\begin{aligned} \label{diff-energy5} &&J_{\Omega}(u)- J_{\Omega}(\hat{u}) = \int_{ B_{x_0,R+\lambda/2}\cap \{u>\bar{q}\}}(\frac{1}{2}\vert\nabla u\vert^2+W(u)-W(\bar{q}))dx \\\nonumber &&\hskip2.5cm +\int_{B_{x_0,R+\lambda/2}\cap \{u<-\bar{q}\}}(\frac{1}{2}\vert\nabla u\vert^2+W(u)-W(-\bar{q}))dx\geq 0.\end{aligned}$$ The map $\hat{u}$ satisfies the assumptions of Lemma \[basic\].
Therefore if we let $v$ be the map associated to $\hat{u}$ by Lemma \[basic\] (for $R+\lambda/2$), from (\[diff-energy5\]) and (\[j-estimate\]) we obtain $$\label{j-estimate1} \begin{split} J_{\Omega}(u)- J_{\Omega}(v)&= J_{B_{x_0,R+\lambda/2}}(u)-J_{B_{x_0,R+\lambda/2}}(v)\\&\ge J_{B_{x_0,R+\lambda/2}}(\hat u)-J_{B_{x_0,R+\lambda/2}}(v) \\ &\geq\int_{A_{\bar{q}}\cap B_{x_0,R+\lambda/2}} (W(\hat u)-W(\varphi^{\hat u})-W^\prime(\varphi^{\hat u})(\hat u-\varphi^{\hat u}))dx, \end{split}$$ where we have also used $A_{\bar{q}}\cap B_{x_0,R+\lambda/2}\subset B_{x_0,R+\lambda/2}\cap\{\vert \hat{u}\vert>\varphi\}$. We have $\varphi(x)=\phi(\vert x-x_0\vert,R+\lambda/2)$ with $\phi(\cdot,R+\lambda/2):[0,R+\lambda/2]\rightarrow{\mathbb{R}}$ a positive function which is strictly increasing in $(0,R+\lambda/2]$. Moreover we have $\phi(R+\lambda/2,R+\lambda/2)=\bar{q}$ and $$\begin{aligned} \label{phi-l} R_1<R_2\;\;\Rightarrow\;\;\phi(R_1-\lambda,R_1)>\phi(R_2-\lambda,R_2).\end{aligned}$$ Note that $x\in B_{x_0,R}$ implies $\varphi(x)\leq \phi(R,R+\lambda/2)$. Therefore for $x$ in the subset of $A_{\bar{q}}\cap B_{x_0,R}$ where $u>\varphi$ we have $$\begin{aligned} \label{diff-potential} W(\bar{q})-W(\varphi)-W^\prime(\varphi)(\bar{q}-\varphi) =\int_{\varphi}^{\bar{q}}(W^\prime(q)-W^\prime(\varphi))dq\hspace{1 cm}\\\nonumber \geq c^2\int_{\varphi}^{\bar{q}}(q-\varphi)dq=\frac{1}{2}c^2(\bar{q}-\varphi)^2\geq \frac{1}{2}c^2(\phi(R+\lambda/2,R+\lambda/2)-\phi(R,R+\lambda/2))^2,\end{aligned}$$ where we have also used $(\ref{prima})$. 
In a similar way we derive the estimate $$\begin{aligned} \label{diff-potential-} W(-\bar{q})-W(-\varphi)-W^\prime(-\varphi)(-\bar{q}+\varphi) =\int_{-\varphi}^{-\bar{q}}(W^\prime(q)-W^\prime(-\varphi))dq\\\nonumber \geq -c^2\int_{-\bar{q}}^{-\varphi}(q+\varphi)dq=\frac{1}{2}c^2(\bar{q}-\varphi)^2 \geq \frac{1}{2}c^2(\phi(R+\lambda/2,R+\lambda/2)-\phi(R,R+\lambda/2))^2,\end{aligned}$$ valid in the subset of $A_{\bar{q}}\cap B_{x_0,R}$ where $u<-\varphi$. The corollary follows from this and (\[diff-potential\]), from (\[j-estimate1\]) and from the fact that, by (\[phi-l\]), the last expression in (\[diff-potential\]) and (\[diff-potential-\]) is increasing with $R$. Therefore we can assume $$\begin{aligned} k=\frac{1}{2}c^2(\phi(\overline{R}+\lambda/2,\overline{R}+\lambda/2)-\phi(\overline{R},\overline{R}+\lambda/2))^2.\end{aligned}$$ \[cutting-lemma\] Let $u\in W^{1,2}(\Omega)$ be a local minimizer as in Theorem \[teo2\]. Let $\lambda>0$ be fixed and assume that $B_{x_0,R+\lambda}\subset\Omega$ for some $R>\overline{R}$. Assume $$\label{serve} A_{\bar{q}}\cap B_{x_0,R}\neq\varnothing\;,$$ and let $S=A_{\bar{q}}\cap (B_{x_0,R+\lambda}\setminus{\overline{B_{x_0,R}}})$. 
Then, there exist a constant $K>0$ independent of $R>\overline{R}$ and a continuous map $v\in W^{1,2}(\Omega) $ that satisfies $$\label{v-def-1} \left\{\begin{array}{l} v = u, \text{ for } x\in\Omega\setminus S,\\ \text{\rm sign}(u)v > \bar{q}, \text{ for } x\in A_{\bar{q}}\cap B_{x_0,R+\frac{\lambda}{2}},\\ \text{\rm sign}(u)v = \bar{q}, \text{ for } x\in \partial(A_{\bar{q}}\cap B_{x_0,R+\frac{\lambda}{2}}), \end{array}\right.$$ and $$\label{secondaLemma2.4} J_{\Omega}(v)-J_{\Omega}(u)=J_S(v)-J_S(u)\leq K\mathcal{H}^n(S).$$ From Corollary \[energy-corollary\] and the minimality of $u$ we necessarily have $A_{\bar{q}}\cap \partial B_{x_0,R+\frac{\lambda}{2}}\neq\emptyset .$ Indeed, if on the contrary $A_{\bar{q}}\cap \partial B_{x_0,R+\frac{\lambda}{2}}=\emptyset,$ then $\vert u\vert\le \bar{q}$ on $\partial B_{x_0,R+\frac{\lambda}{2}}.$ Therefore, applying Corollary \[energy-corollary\] to $u$ on $B_{x_0,R+\frac{\lambda}{2}},$ we could find $v$ satisfying $$J_{\Omega} (u)-J_{\Omega} (v)=J_{B_{x_0,R+\frac{\lambda}2}}(u)- J_{B_{x_0,R+\frac{\lambda}2}}(v) \geq k\mathcal{H}^n(A_{\bar{q}}\cap B_{x_0,R}).$$ From (\[serve\]), this is in contradiction with the minimality of $u.$ Let $v\in W^{1,2}(\Omega)$ be defined by $v=u$ for $x\not\in S$ and by $$\begin{aligned} \label{v-def-2} v=(1-\vert 1-2\frac{r-R}{\lambda}\vert)\text{\rm sign}(u)\bar{q}+\vert 1-2\frac{r-R}{\lambda}\vert u, \text{ for } x\in S,\end{aligned}$$ where $r=\vert x-x_0\vert$. 
From this definition and (\[bounds-udu\]) it follows $$\begin{aligned} \label{q-bound-ins} \bar{q}<\text{\rm sign}(u)v\leq \vert u\vert\leq M_0, \text{ for }&& x\in S\setminus\partial B_{x_0,R+\frac{\lambda}{2}},\\\nonumber v=\text{\rm sign}(u)\bar{q},\quad\quad \text{ for }&& x\in S\cap\partial B_{x_0,R+\frac{\lambda}{2}}.\end{aligned}$$ Moreover, it is easy to verify that $v=u$ on $\partial S.$ Then, $v$ is continuous and satisfies $(\ref{v-def-1}).$ From (\[v-def-2\]) we also obtain $$\begin{aligned} \label{nabla-qv} \nabla v= \Big \vert 1-2\frac{r-R}{\lambda}\Big\vert \nabla u+\frac{2}{\lambda}(u-\text{\rm sign}(u)\bar{q})\nu, \text{ for } x\in S,\end{aligned}$$ where $\nu=-{\rm sign}(1-2\frac{r-R}{\lambda})\frac{x-x_0}{r}$. From (\[nabla-qv\]), (\[q-bound-ins\]) and (\[bounds-udu\]) it follows $$\begin{aligned} \label{nabla-qv-1} \frac{1}{2}(\vert\nabla v\vert^2-\vert\nabla u\vert^2)+W(v)-W(u)\hskip2.5cm\\\nonumber \leq \frac{1}{2}(\vert \nabla u\vert+\frac{2}{\lambda}\vert u-\text{\rm sign}(u)\bar{q}\vert )^2+\overline{W}-W(\text{\rm sign}(u)\bar{q}) \hskip1cm\\\nonumber \leq \frac{1}{2}( M_0+\frac{2}{\lambda}(M_0-\bar{q}))^2+\overline{W}, \text{ for } x\in S,\end{aligned}$$ where $\overline{W}$ is the constant in Lemma \[v-property\]. The estimate (\[nabla-qv-1\]) concludes the proof with $K$ given by the last expression in (\[nabla-qv-1\]). \[r0-existence\] Let $\bar{q}\in(0,q^*)$, $\lambda>0$ and ${\overline R}=\frac {q^*-\bar{q}}{M_0}$ as before. 
There exists $j_m\in\mathbb{N}$ such that, if $R_0=\overline {R}+(j_m+1)\lambda$, then a local minimizer $u$ satisfies $$\begin{aligned} \label{below} x\in\Omega,\;\; d(x,\partial\Omega)\;\geq R_0\quad\Rightarrow\quad \vert u\vert\;<\; q^*.\end{aligned}$$ Moreover, the number $j_m$ depends only on $\bar{q},\;\lambda\;$ and the constants $k,\;K$ in Corollary $\ref{energy-corollary}$ and Lemma $\ref{cutting-lemma}.$ Suppose that $\vert u(x_0)\vert\geq q^*$ for some $x_0\in\Omega.$ Then, from (\[bounds-udu\]), $$\vert u(x)\vert >\bar{q}, \quad \forall x\in B_{x_0,{\overline R}}.$$ Therefore, if $d(x_0, \partial\Omega )\ge {\overline R},$ (\[bounds-udu\]) implies $$\begin{aligned} \mathcal{H}^n(A_{\bar{q}}\cap B_{x_0,\overline {R}})=\mathcal{H}^n( B_{x_0,\overline {R}}):=\sigma_0.\end{aligned}$$ Now, set $$\begin{aligned} \sigma_j:=\mathcal{H}^n(A_{\bar{q}}\cap B_{x_0,\overline {R}+j\lambda}),\end{aligned}$$ for each $j\in\mathbb{N}$ such that $d(x_0, \partial\Omega)\ge \overline {R}+(j+1)\lambda$. Let $v_j^1, v_j^2\in W^{1,2}(\Omega)$ be the maps defined as follows:

- $v_j^1$ is the map $v$ defined in Lemma \[cutting-lemma\] for $B_{x_0,R+\lambda}$ with $R=\overline {R}+j\lambda$.

- $v_j^2$ is the map $v$ given by Corollary \[energy-corollary\] when $u=v_j^1$ and $R=\overline {R}+j\lambda$.
From these definitions, Corollary \[energy-corollary\] and Lemma \[cutting-lemma\], we deduce $$\begin{aligned} J_\Omega(u)-J_\Omega(v_j^1)&\geq& -K (\sigma_{j+1}-\sigma_j),\\\nonumber J_\Omega(v_j^1)-J_\Omega(v_j^2)&\geq& k\mathcal{H}^n(A_{\bar{q}}\cap B_{x_0, \overline{R}+j\lambda})= k \sigma_j.\end{aligned}$$ By adding these inequalities and using the minimality of $u$ we obtain $$\begin{aligned} \label{first-sigma-rel} 0\geq J_\Omega(u)-J_\Omega(v_j^2)\geq k \sigma_{j} -K (\sigma_{j+1}-\sigma_j)\end{aligned}$$ and therefore $$\begin{aligned} \label{second-sigma-rel} \Big (1+\frac{k}{K}\Big )\sigma_{j-1}\leq\sigma_j&\leq&\frac{K}{k}(\sigma_{j+1}-\sigma_j) ,\;\; j\in\mathbb{N},\\\nonumber \Rightarrow \Big (1+\frac{k}{K}\Big )^{j-1}\sigma_0\leq\sigma_j&\leq&\omega\frac{K}{k}\Big((\overline{R}+(j+1)\lambda)^n-(\overline{R}+j\lambda)^n\Big) ,\;\; j\in\mathbb{N},\end{aligned}$$ where $\omega$ is the measure of the unit ball in ${\mathbb{R}}^n$. For $j$ sufficiently large the last inequality is violated, and this contradicts the minimality of $u.$ We denote by $j_m$ the minimum value of $j$ such that (\[second-sigma-rel\]) is violated. Then, (\[below\]) follows with $R_0=\overline{R}+(j_m+1)\lambda$. The existence of the map $(0,q^*]\ni q\rightarrow R(q)$ follows from the fact that all the above arguments can be repeated with a generic $q\in(0,q^*)$ in place of $q^*$. We can obviously assume that $R(q)$ is decreasing and, by modifying it if necessary, we can also assume that it is strictly decreasing and continuous. To complete the proof of Theorem \[teo2\] it remains to prove the estimate (\[exp-bound\]). Proposition \[r0-existence\] and in particular (\[below\]) imply that we can apply Lemma \[basic\] to $u$ and the ball $B_{x,R}$ for each $x\in \Omega$ such that $d(x,\partial\Omega)= R_0+R$ with $R\geq R_0$.
Therefore we obtain $$\begin{aligned} \label{1} \vert u(x)\vert\leq\phi(0,R).\end{aligned}$$ We also have (see [@flp]) that $$\begin{aligned} \label{2} \phi(0,R)\leq q^* e^{-k_0 R}=q^* e^{k_0 R_0}e^{-k_0d(x,\partial\Omega)},\end{aligned}$$ for some $k_0>0$ independent of $R\in[\overline{R},+\infty)$. From (\[1\]), (\[2\]) we obtain $$\begin{aligned} \label{3} \vert u(x)\vert\leq q(R)\leq K_0e^{-k_0 d(x,\partial\Omega)},\;\text{ for }\;d(x,\partial\Omega)\geq 2R_0.\end{aligned}$$ This concludes the proof of Theorem \[teo2\]. [99]{} N. D. Alikakos and G. Fusco. Asymptotic and rigidity results for symmetric solutions of the elliptic system $\Delta u=W_u(u)$. arXiv:1402.5085. M. Efendiev and F. Hamel. Asymptotic behavior of solutions of semilinear elliptic equations in unbounded domains: two approaches. (2011), pp. 1237–1261. G. Fusco. Equivariant entire solutions to the elliptic system $\Delta u=W_u(u)$ for general $G-$invariant potentials. No. 3-4 (2014), pp. 963–985. G. Fusco. On some elementary properties of vector minimizers of the Allen-Cahn energy. No. 3 (2014), pp. 1045–1060. G. Fusco, F. Leonetti and C. Pignotti. A uniform estimate for positive solutions of semilinear elliptic equations. , Vol. 363 (2011), pp. 4285–4307. D. Henry. , Springer-Verlag, Berlin, 1981. P. Smyrnelis. [^1]: Università degli Studi di L’Aquila, Via Vetoio, 67010 L’Aquila, Italy; e-mail:[`fusco@univaq.it`]{} [^2]: Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica, Università degli Studi di L’Aquila, Via Vetoio, 67010 L’Aquila, Italy; e-mail:[`leonetti@univaq.it`]{} [^3]: Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica, Università degli Studi di L’Aquila, Via Vetoio, 67010 L’Aquila, Italy; e-mail:[`pignotti@univaq.it`]{}
--- author: - | G. Scalari$^{1,\ast}$, C. Maissen$^1$, D. Turčinková$^1$, D. Hagenmüller$^2$,\ S. De Liberato$^2$, C. Ciuti$^2$, C. Reichl$^3$, D. Schuh$^4$,\ W. Wegscheider$^3$, M. Beck$^1$, J. Faist$^1$ title: 'Ultrastrong coupling of the cyclotron transition of a two-dimensional electron gas to a THz metamaterial' --- Artificial cavity photon resonators with ultrastrong light-matter interactions are attracting interest in both semiconductor and superconducting systems, due to the possibility of manipulating the cavity quantum electrodynamic ground state with controllable physical properties. We report here experiments showing ultrastrong light-matter coupling in a terahertz metamaterial where the cyclotron transition of a high-mobility two-dimensional electron gas is coupled to the photonic modes of an array of electronic split-ring resonators. We observe a normalized coupling ratio $\frac{\Omega}{\omega_c}=0.58$ between the vacuum Rabi frequency $\Omega$ and the cyclotron frequency $\omega_c$. Our system appears to be scalable in frequency and could be brought to the microwave spectral range with the potential of strongly controlling the magnetotransport properties of a high-mobility 2DEG. Enhancement and tunability of light-matter interaction is crucial for fundamental studies of cavity quantum electrodynamics (QED) and for applications in classical and quantum devices [@Haroche:RMP:01; @Walraff:Nat:04; @Christopoulos:prl:2007; @Hennessy:Nat:07]. The coupling between one cavity photon and one elementary electronic excitation is quantified by the vacuum Rabi frequency $\Omega$. The non-perturbative strong light-matter coupling regime is achieved when $\Omega$ is larger than the loss rates of the photons and electronic excitations.
Recently, growing interest has been generated by the ultrastrong coupling regime[@Ciuti:PRB:05:115303-1; @Ciuti:pra:2006; @DeliberatoPRL2007; @Devoret2007; @Bourassa:pra:2009; @muraev:prb:2011; @Schwartz:prl:2011; @natafprl2010], which is obtained when the vacuum Rabi frequency becomes an appreciable fraction of the unperturbed frequency of the system $\omega$. In such a regime, it is possible to modify the ground and excited state properties obtaining non-adiabatic cavity QED effects [@Ciuti:PRB:05:115303-1]. Experimental progress has been achieved in two different solid-state systems : (i) microcavities embedding doped quantum wells[@dini:PRL:2003; @anapparaPRB; @Guenter:Nature:2009; @todorov:PRL:2010; @geiser:APL:2010], where the active electronic transition is between quantized subbands in the well; (ii) superconducting quantum circuits in transmission line resonators[@Niemczyk-NATPHYS-2010; @fornprl2010], where the photon field is coupled to artificial two-level atoms obtained with Josephson junctions. We present experimental results on a new system, namely a high-mobility two-dimensional electron gas (2DEG) coupled to terahertz (THz) metamaterial resonators. The photon mode is coupled to the magnetic cyclotron transition of the 2DEG, obtained by applying a magnetic field perpendicular to the plane of the quantum wells (Fig.1(A)). The cyclotron frequency is expressed by $\omega_c= \frac{e B}{m^*}$ where $B$ is the applied magnetic field, *e* is the elementary charge and $m^*$ represents the electron effective mass. This highly controllable system is ideal for the study of strong coupling because the material excitation can be continuously tuned by changing the value of the applied magnetic field. The key physical aspect to highlight is the dependence of the optical dipole moment $d$ for a cyclotron transition on the cyclotron orbit length. 
The dipole $d$ scales as $d \sim e l_0 \sqrt{\nu}$ [@Hagenmuller:2010p1619], where $l_0=\sqrt{\hbar/eB}$ is the magnetic length and $\nu=\rho_{\rm 2DEG} 2\pi l^{2}_{0}$ is the filling factor of the 2DEG, where $\rho_{\rm 2DEG}$ is the electron areal density. This proportionality of the dipole with respect to $l_0$ allows for gigantic dipole moments as soon as the cyclotron transition can be resolved. According to theoretical calculations valid for integer filling factors and for an optimized resonator geometry, the coupling ratio is expected to scale as $\frac{\Omega}{\omega_c} \sim \sqrt{\alpha n_{\rm QW} \nu}$, where $\alpha$ is the fine structure constant and $n_{\rm QW}$ is the number of 2DEGs [@Hagenmuller:2010p1619]. For high filling factors, this coupling ratio is predicted to assume values even larger than unity (corresponding to transitions in the microwave range). We use high-mobility 2DEGs based on the GaAs material system [@Umansky:JCG:09:1658] and we perform our experiments in the THz region of the electromagnetic spectrum. These frequencies, for our material system, correspond to magnetic fields of the order of a few tesla, and optical experiments are conducted using broadband THz pulses generated with ultrafast lasers [@SMITH:JQE:88:1255]. A THz-TDS system (bandwidth $0.1-3 \,{\rm THz}$) [@GRISCHKOWSKY:JOSAB:90] is coupled to a split-coil superconducting magnet to probe sample transmission [@supportingmaterial]. Our THz metamaterial integrates the 2DEG with a metasurface of electronic split-ring resonators (see Fig.1(A)). These resonators [@SchurigSci:06:977; @Chen:Nat:2006; @padilla:prl:2006] exhibit electric field enhancement over strongly subwavelength volumes [@walther:science:2010], making them ideal candidates to reach extreme couplings in the mid-IR and THz range, where long-wavelength radiation has to interact with quantum well systems typically extending over lengths of a few micrometers [@Shelton:nanolett:2011].
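To make the quoted scales concrete, the sketch below evaluates the cyclotron frequency, the magnetic length and the filling factor in SI units; the GaAs effective mass $m^*=0.067\,m_e$ is a standard textbook value (not stated explicitly above), and the density used is the single-2DEG value $\rho_{\rm 2DEG}=3.2\times 10^{11}\,{\rm cm^{-2}}$ reported below:

```python
import math

# SI constants; m* = 0.067 m_e is the standard GaAs effective mass
# (a textbook value, not quoted in the text).
E, HBAR, M_E = 1.602176634e-19, 1.054571817e-34, 9.1093837015e-31
M_STAR = 0.067 * M_E

def cyclotron_freq_THz(B):
    """f_c = omega_c / (2 pi), with omega_c = e B / m*."""
    return E * B / M_STAR / (2 * math.pi) / 1e12

def magnetic_length_nm(B):
    """l_0 = sqrt(hbar / (e B))."""
    return math.sqrt(HBAR / (E * B)) * 1e9

def filling_factor(rho_cm2, B):
    """nu = rho_2DEG * 2 pi l_0^2 (rho in cm^-2)."""
    return (rho_cm2 * 1e4) * 2 * math.pi * HBAR / (E * B)

# A field of ~2 T tunes the cyclotron line onto the ~0.9 THz LC mode:
print(f"f_c(2 T)  = {cyclotron_freq_THz(2.0):.2f} THz")   # ~0.84 THz
print(f"l_0(2 T)  = {magnetic_length_nm(2.0):.1f} nm")    # ~18 nm
# Filling factors for the single-well density 3.2e11 cm^-2:
print(f"nu(2 T)   = {filling_factor(3.2e11, 2.0):.1f}")   # ~6.5 in the text
print(f"nu(5.5 T) = {filling_factor(3.2e11, 5.5):.1f}")   # ~2.4 in the text
```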
Moreover, the enhanced in-plane (x-y) electric field couples efficiently to the cyclotron transition when the magnetic field is applied perpendicularly to the plane of the layers and parallel to the wavevector of the incident THz pulse (see Fig.1(A)). Resonators were deposited on top of the 2DEG by conventional photolithography, metallization with Ti/Au ($5/250 \,{\rm nm}$) and a lift-off technique. At zero magnetic field we observe two resonances m$_1$, m$_2$ whose origins are qualitatively different: the lowest frequency mode ($f_1\approx 0.9\,{\rm THz}$) is attributed to the LC resonance, where counterpropagating currents circulate in the inductive part and the electric field is enhanced mainly in the capacitor gap [@Chen:Nat:2006]. The second mode ($f_2 \approx2.3 \,{\rm THz}$) is attributed to the “cut wire” behavior, where a $\lambda/2$ kind of resonance is excited along the sides of the metaparticle [@padilla:prl:2006]. These values correspond well to simulations with 3D FE modeling (see S1 in [@supportingmaterial]). The two modes of the split-ring resonator also have different transverse wavevectors, as is evident from the different field distributions. The presence of conductive layers underneath alters the frequency and the quality factor of these resonances. We observe a value of $Q_{1\rm THz}^{\rm (ins)} \simeq 4.3$ for an insulating substrate and $Q_{1 \rm THz}^{\rm (2DEG)} \simeq 3.1$ when the resonator is deposited on top of the single 2DEG sample. The second resonance is less affected, yielding $Q_{2.3 \rm THz}^{\rm (ins)} \simeq 5.3$ and $Q_{2.3 \rm THz}^{\rm (2DEG)} \simeq 5.3$ (a more detailed analysis can be found in S1 of [@supportingmaterial]). It is important to highlight that, in contrast to atomic systems, we can realize strong light-matter coupling physics with resonators displaying extremely low quality factors.
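This point can be quantified by comparing the normalized photon loss rate $\kappa/\omega=1/Q$ with the normalized polariton splitting $2\Omega/\omega$. In the sketch below the $Q$ values are the measured ones quoted above, the coupling ratios anticipate the fits discussed later in the text, and the $Q\simeq 3$ assumed for the 500 GHz resonator is our own assumption:

```python
# Normalized photon loss rate kappa/omega = 1/Q versus normalized polariton
# splitting 2*Omega/omega.  Q values are those measured above; the coupling
# ratios are the fitted values discussed below, and Q ~ 3 for the 500 GHz
# resonator is our own assumption.
cases = {
    "LC mode, single 2DEG":       (3.1, 0.17),
    "500 GHz resonator, 4 wells": (3.0, 0.58),
}
for label, (Q, g) in cases.items():
    loss, splitting = 1.0 / Q, 2.0 * g
    regime = "strong" if splitting > loss else "weak"
    print(f"{label}: 2*Omega/omega = {splitting:.2f} vs 1/Q = {loss:.2f} "
          f"-> {regime} coupling")
```

Even the marginal single-2DEG case clears the criterion, and the four-well sample does so by a wide margin, which is the sense in which low-$Q$ resonators suffice here.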
The giant value of the coupling constant $\Omega \sim \sqrt{n_{\rm QW} \rho_{\rm 2DEG}}$ typical of intersubband systems [@Ciuti:PRB:05:115303-1] together with the high electric field enhancement of sub-wavelength metallic resonators (V$_{cav}\simeq 8 \times 10^{-17}$ m$^3$ in our case) allow the observation of cavity polaritons in a system where both components are in principle highly dissipative. In the data reported in Fig.2(A) we observe the evolution of the sample transmission $\vert t \vert=\vert \frac{E_{\rm Meta} (B)}{E_{\rm 2DEG}(0)}\vert$ as a function of the applied magnetic field (normalized to the electric field $E_{\rm 2DEG}(0)$ of the reference 2DEG wafer at $B=0 \,{\rm T}$). One 2DEG ($n_{\rm QW}=1$, doping density $\rho^{(1)}_{\rm 2DEG}=3.2 \times 10^{11} \,{\rm cm^{-2}}$) is used as an active medium and placed $100 {\rm \,nm}$ below the surface: its cyclotron resonance can couple to the resonator modes. As the magnetic field is swept, a profound modification of the sample transmission is observed. The possibility of tuning the material excitation continuously allows us to follow the evolution of polaritonic states as the system is driven from the uncoupled regime to the strongly coupled one [@tignon:prl:1995]. We observe two successive anticrossings when the cyclotron energy matches the first and the second resonator modes. In Fig. 2(B) we extract the positions of the minima of sample transmission and plot the dispersion curves for the polariton eigenvalues as a function of the magnetic field. The curves are calculated using a full quantum mechanical treatment of the system, obtained by generalizing the theory described in Ref.[@Hagenmuller:2010p1619] to the case of a zero-dimensional resonator exhibiting two modes with different transverse wavevectors [@supportingmaterial]. Following a bosonization procedure, we have derived the different contributions to the total Hamiltonian and have diagonalized it using the Hopfield-Bogoliubov method.
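For orientation only, a rotating-wave two-mode sketch, which drops the anti-resonant and $A^2$ terms retained in the full Hopfield-Bogoliubov treatment and is therefore merely illustrative in the ultrastrong regime, already reproduces the anticrossing of a single cavity mode with the cyclotron line via $\omega_\pm=\frac{1}{2}(\omega_{\rm cav}+\omega_c)\pm\sqrt{\Omega^2+(\omega_{\rm cav}-\omega_c)^2/4}$; the parameter values below are representative, not fitted:

```python
import math

E, M_E = 1.602176634e-19, 9.1093837015e-31  # SI constants

def polariton_branches_THz(B, f_cav=0.9, g=0.17, m_ratio=0.067):
    """Rotating-wave two-mode sketch: one cavity mode (f_cav, in THz) coupled
    to the cyclotron line, with normalized coupling g = Omega/omega_cav.
    The anti-resonant and A^2 terms of the full Hopfield-Bogoliubov treatment
    are dropped, so this is only illustrative once Omega/omega is not small."""
    f_c = E * B / (m_ratio * M_E) / (2 * math.pi) / 1e12   # cyclotron freq (THz)
    rabi = g * f_cav
    mean, half_det = 0.5 * (f_cav + f_c), 0.5 * (f_cav - f_c)
    s = math.sqrt(rabi**2 + half_det**2)
    return mean - s, mean + s                              # lower, upper (THz)

# At resonance (f_c = f_cav) the two branches are split by exactly 2*Omega:
B_res = 0.9e12 * 2 * math.pi * 0.067 * M_E / E
lo, up = polariton_branches_THz(B_res)
print(f"splitting at resonance: {up - lo:.3f} THz")        # 2*0.17*0.9 = 0.306
```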
It is worth mentioning that the ideal resonator we have considered in the analytical treatment is different from the real split-ring one. However, we emphasize that this would introduce only a form factor in the matrix element calculation. In order to fit the experimental data, we need to know the resonator mode frequencies ($f'_1$ and $f'_2$) as well as the strength of their couplings ($\Omega_1$ and $\Omega_2$). For each cavity mode we assumed the asymptotic value of the corresponding lower polariton branch to coincide with the frequency of the unloaded resonator ($f'_1=0.83$ THz and $f'_2=2.26$ THz) [@Hagenmuller:2010p1619; @supportingmaterial]. The coupling strength $\frac{\Omega}{\omega}$ for the two modes cannot be measured directly; we thus applied a best-fit procedure following the least-squares method, analogously to what was done in Ref. [@anapparaPRB] (more details in S3 of [@supportingmaterial]). The minimal error [@supportingmaterial] is obtained for $\frac{\Omega_1}{\omega_1}=0.17$ and $\frac{\Omega_2}{\omega_2}=0.075$ (where $\omega_1=2\pi f'_1$ and $\omega_2=2\pi f'_2$). As expected, the coupling strength scales monotonically with $\nu$: for the measured density $\rho^{(1)}_{\rm 2DEG}$, we have $\nu(2\,{\rm T}) \simeq 6.5$ for the first mode and $\nu(5.5\,{\rm T})\simeq 2.4$ for the second one. To increase the coupling strength, we kept the resonator geometry and hence the frequency constant, and increased the effective number of carriers in the system. A new sample was prepared with $n_{\rm QW}=4$ wells and a doping density per well $\rho^{(4)}_{\rm 2DEG}=4.45 \times 10^{11} \,{\rm cm^{-2}}$ (see scheme in Fig. 1(B) and materials and methods in [@supportingmaterial]). Sample transmission as a function of the applied magnetic field (Fig.3(A)) shows that the system is driven deeply into the ultrastrong coupling regime. The polaritonic line widths display narrowing as the low-quality cavity mode is mixed with the cyclotron resonance (Fig.3(B)).
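The $\Omega\propto\sqrt{n_{\rm QW}\rho_{\rm 2DEG}}$ scaling recalled above fixes the enhancement expected between the two samples at the same resonator frequency; the fitted value $0.36$ entering the measured ratio is the one reported just below:

```python
import math

# Sheet densities (cm^-2) and number of wells for the two samples.
rho_1, n_1 = 3.2e11, 1     # single-2DEG sample
rho_4, n_4 = 4.45e11, 4    # four-well sample

# Omega scales as sqrt(n_QW * rho_2DEG), so at fixed resonator frequency:
expected = math.sqrt(n_4 * rho_4 / (n_1 * rho_1))
# Ratio of the fitted normalized couplings (0.36 is reported just below):
measured = 0.36 / 0.17

print(f"expected enhancement: {expected:.2f}")   # 2.36; the text quotes 2.35
print(f"measured enhancement: {measured:.2f}")   # 2.12; the text quotes 2.11
```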
Following the fitting procedure described above, we observe a value of $\frac{\Omega_1}{\omega_1}=0.36$ for the first resonance. Indeed, the effects of the anti-resonant terms of the light-matter Hamiltonian start becoming relevant when the dimensionless ratio $\frac{\Omega}{\omega}$ is of the order of $0.1$ [@anapparaPRB]. Due to the increased doping, the filling factor in the region of the anticrossing is $\nu \simeq 9$. As expected, the coupling ratio scales with $\sqrt{\rho_{\rm 2DEG}n_{\rm QW}}$ and for the two samples at the same resonant frequency we obtain experimentally $\frac{\left(\frac{\Omega_1}{\omega_1}\right)_{n_{\rm QW}=4}}{\left(\frac{\Omega_1}{\omega_1}\right)_{n_{\rm QW}=1}}=\frac{0.36}{0.17}=2.11$ and theoretically $\sqrt{\frac{4 \rho^{(4)}_{\rm 2DEG}}{\rho^{(1)}_{\rm 2DEG}}}=2.35$. The small discrepancy can be attributed to the different coupling of the quantum wells, which do not all experience the same resonator electric field: the generalized expression of the coupling ratio is calculated assuming that all the wells couple identically to the resonator field [@Hagenmuller:2010p1619]. By further scaling the resonator frequency down to $f\simeq 500$ GHz, employing a slightly modified geometry (see inset Fig.3(D), section S2 and Fig. S5 of [@supportingmaterial]) and employing the sample with n$_{QW}=4$ quantum wells, we could probe the regime where the polariton splitting at the anticrossing $2\hbar \Omega$ is larger than the bare cavity photon energy. In Fig.3(D) we report the positions of the minima of the sample transmission for the $f\simeq 500$ GHz resonator together with the fitted dispersion curves: we measure a normalized ratio $\frac{\Omega_1}{\omega_1}=0.58$ for a filling factor of $\nu(1.2\,{\rm T})\simeq 15.2$, which corresponds to $2\hbar \Omega \simeq 1.2\, \hbar\omega_c$ (see also Fig.S6 in [@supportingmaterial]). The generalization of the theory developed in Ref.
[@Hagenmuller:2010p1619] accounts for the depolarization shift in the presence of a magnetic field (the magnetoplasmon), originating from the long-wavelength part of the Coulomb interaction. We found that, in our experimental parameter regime, the renormalization of the cyclotron transition frequency is too small to allow the experimental resolution of the magnetoplasmon branch (see Figs. 2(A) and 3(A)); the small-wavevector condition $q l_{0} \ll 1$ is always satisfied since we deal with optical wavevectors. In conclusion, we have observed ultrastrong light–matter coupling in a composite THz metamaterial, measuring a normalized coupling ratio $\frac{\Omega}{\omega_c}=0.58$. Our results should also be considered in the perspective of a change in the DC transport properties of the 2DEG, in analogy with what has already been observed under direct irradiation at lower frequencies [@Mani:Nat:02:646]. A natural next step is to scale the resonator frequency to lower values and to increase the effective density in order to further enhance the coupling strength.
![(A): Schematic of the composite metamaterial used in our experiment, together with the experimental arrangement showing the polarizing static magnetic field, the wavevector and polarization of the incident broadband THz pulse, the Landau-level arrangement and the semiclassical representation of a cyclotron orbit of magnetic length $l_0$. A metasurface composed of LC metaparticles with a design similar to Ref. [@Ohara2007] is deposited on top of the semiconductor. SEM picture of one metaparticle: the split gap of the capacitance elements is $2.6 \,{\rm \mu m}$. (B): The band structure of the multi-2DEG system is schematized together with the quantum well positions (not to scale). (C): x-y spatial distribution of the in-plane electric field (E$_{plane}$=$\sqrt{\vert E_x\vert^2+\vert E_y \vert^2}$), calculated with a commercial finite-element software, for the two observed modes m$_1$ and m$_2$ (z=100 nm below the semiconductor surface). (D): Intensity of E$_{plane}$ in the yz plane for the low-frequency mode m$_1$ (cut along the white dashed line in (C), m$_1$).](Scalari1new.pdf){width="100mm"} ![(A): Transmission $\vert t \vert$ of the sample ($n_{\rm QW}=1$) as a function of the applied magnetic field $B$. The reference is a plain 2DEG sample without resonators on top and the measurement is performed at T=$2.2\,{\rm K}$. (B): Best fit with the extracted transmission minima positions for the two different transverse modes of the electronic split-ring resonator; the fitting parameter is $\frac{\Omega}{\omega}$.](Scalari2.pdf){width="130mm"} ![(A): Transmission $\vert t \vert$ of the sample ($n_{\rm QW}=4$) as a function of the applied magnetic field. The reference is a plain 2DEG sample without resonators on top and the measurement is performed at T=$10 \,{\rm K}$. The black dotted line highlights the cyclotron signal coming from the uncoupled material left in between the resonators.
(B): Sections in the two anticrossing regions of the sample transmission. (C): Best fit with the extracted transmission minima positions for the two orthogonal modes of the electronic split-ring resonator; the fitting parameter is $\frac{\Omega}{\omega}$. (D): Best fit with the extracted transmission minima positions for the f=500 GHz resonator and $n_{\rm QW}=4$, measured at T=$10 \,{\rm K}$; the fitting parameter is $\frac{\Omega}{\omega}$. Inset: scheme of the 500 GHz resonator. ](Scalari3.pdf){width="150mm"}
--- abstract: 'Inspired by topological Wiener-Wintner theorems we study the mean ergodicity of amenable semigroups of Markov operators on $C(K)$ and show the connection to the convergence of strong and weak ergodic nets. The results are then used to characterize mean ergodicity of Koopman semigroups corresponding to skew product actions on compact group extensions.' address: | Institute of Mathematics\ University of Tübingen\ Auf der Morgenstelle 10\ 72076 Tübingen\ Germany author: - Marco Schreiber title: 'Topological Wiener-Wintner theorems for amenable operator semigroups' --- Robinson’s topological Wiener-Wintner theorem [@robinson94 Theorem 1.1] is concerned with the uniform convergence of the weighted Cesàro averages $$\label{eq:1} \frac{1}{n}\sum_{j=1}^n \lambda^j S^j f \tag{$\star$}$$ for a continuous function $f\in C(K)$ on a compact space $K$, the Koopman operator $S: f\mapsto f\circ \ph$ of a continuous transformation $\ph: K\to K$ and $\lambda$ in the unit circle $\Torus$. Subsequently, Robinson’s result has been generalized in various ways by Walters [@walters96], Santos and Walkden [@santos07] and Lenz [@lenz09a; @lenz09]. It has turned out that the uniform convergence of Wiener-Wintner averages plays an important role in the mathematical description of diffraction on quasicrystals. In [@lenz09] Lenz showed how the intensity of Bragg peaks can be calculated via certain limits of Wiener-Wintner averages, giving a partial answer to a conjecture of Bombieri and Taylor [@bombieri86; @bombieri87]. So far, all these authors have focused on the convergence of a particular sequence of Cesàro means similar to (\[eq:1\]). In this paper we take a more general view and study semigroups of operators that are mean ergodic on $C(K)$ or on some closed invariant subspace.
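For intuition, the averages (\[eq:1\]) can be evaluated directly in the simplest classical setting. The sketch below (an illustration, not an example from the paper) takes the irrational rotation $\ph(x)=x+\alpha$ on the torus and $f(x)=e^{2\pi i x}$, for which the weighted average at a point is a geometric sum: it tends to $0$ for every $\lambda$ except the resonant value $\lambda=e^{-2\pi i\alpha}$, where it stays at $1$ — and the rate of decay degrades as $\lambda$ approaches the resonant value, which is the source of the uniformity question.

```python
import numpy as np

alpha = np.sqrt(2) - 1  # irrational rotation number (illustrative choice)

def weighted_average(lam, N, x=0.0):
    """(1/N) * sum_{j=1}^N lam^j * (S^j f)(x) for f(x) = exp(2*pi*i*x),
    where S is the Koopman operator of the rotation x -> x + alpha."""
    j = np.arange(1, N + 1)
    return np.mean(lam**j * np.exp(2j * np.pi * (x + j * alpha)))

# Resonant weight lam = exp(-2*pi*i*alpha): every term equals 1.
# Any other lam (e.g. lam = 1): a geometric sum of modulus O(1/N).
resonant = weighted_average(np.exp(-2j * np.pi * alpha), 1000)
generic = weighted_average(1.0, 1000)
```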
Based on the theory of mean ergodic semigroups (see [@krengel85 Chapter 2]) this allows us to unify and extend the known Wiener-Wintner theorems to amenable semigroups of Markov (instead of Koopman) operators on $C(K)$. The problem of when averages of the form (\[eq:1\]) converge even uniformly in $\lambda\in\Torus$ has been studied independently by Assani [@assani03] and Robinson [@robinson94], with their results subsequently generalized by Walters [@walters96], Santos and Walkden [@santos07] and Lenz [@lenz09]. In [@schreiber12] we have developed the concept of a uniform family of ergodic nets, which allows us to treat this question also in our more general setting. In the first part of this paper we study mean ergodicity of semigroups of Markov operators on $C(K)$. For an amenable representation $\{S_g: g\in G\}$ of a semitopological semigroup $G$ as Markov operators and for a continuous multiplicative map $\chi:G\to \Torus$, we then characterize mean ergodicity of the semigroup $\{\chi(g)S_g: g\in G\}$. In the second part we restrict our attention to Koopman operators on the space $C(K,\C^N)$ of continuous $\C^N$-valued functions and show similar results, replacing $\chi:G\to\Torus$ by a continuous cocycle $\gamma: G\times K\to U(N)$ into the group of unitary operators on $\C^N$. In the third part we consider skew product actions on compact group extensions. We use the previous results in order to characterize mean ergodicity of the corresponding Koopman representation. Finally, we obtain a new proof and a generalization of a theorem of Furstenberg, showing that an ergodic skew product action corresponding to a uniquely ergodic action is uniquely ergodic. Semigroups of Markov operators {#sec:ww} ============================== We consider the space $C(K)$ of complex-valued continuous functions on a compact set $K$, a semitopological semigroup $G$ (see Berglund et al.
[@berglund89 Chapter 1.3]) and assume that $\S=\{S_g:g\in G\}$ is a bounded representation of $G$ on $C(K)$, i.e., (i) $S_g\in \L(C(K))$ for all $g\in G$ and $\sup_{g\in G}\|S_g\|<\infty$, (ii) $S_{g_1} S_{g_2}=S_{g_2 g_1}$ for all $g_1, g_2\in G$, (iii) $g\mapsto S_g f$ is continuous for all $f\in C(K)$. Such a bounded representation $\S$ and its convex hull $\co\S$ are topological semigroups with respect to the strong operator topology. On the dual space $C(K)'$, identified with the set $M(K)$ of regular Borel measures on $K$, we consider the adjoint semigroup $\S':=\{S_g': g\in G\}$. A *mean* on the space $C_b(G)$ of bounded continuous functions on $G$ is a linear functional $m\in C_b(G)'$ satisfying ${\left\langle m,{\mathbbm{1}}\right\rangle}=\|m\|=1$. A mean $m\in C_b(G)'$ is called *right (left) invariant* if $${\left\langle m,R_g f \right\rangle}={\left\langle m,f \right\rangle} \quad \left({\left\langle m,L_g f \right\rangle}={\left\langle m,f \right\rangle}\right) \quad\forall g\in G, f\in C_b(G),$$ where $R_g f(h)=f(hg)$ and $L_gf(h)=f(gh)$ for $h\in G$. A mean $m\in C_b(G)'$ is called *invariant* if it is both right and left invariant. The semigroup $G$ is called *right (left) amenable* if there exists a right (left) invariant mean on $C_b(G)$. It is called *amenable* if there exists an invariant mean on $C_b(G)$ (see Berglund et al. [@berglund89 Chapter 2.3] or the survey article of Day [@day69]). Notice that if $\S:=\{S_g:g\in G\}$ is a bounded representation of a right (left) amenable semigroup $G$ on $C(K)$, then $\S$ endowed with the strong operator topology is also right (left) amenable. In the following, the space $\L(C(K))$ will be endowed with the strong operator topology unless stated otherwise. A net $(A_\a^\S )_{\a\in \A}$ of operators in $\L(C(K))$ is called a *strong right (left) $\S$-ergodic net* if the following conditions hold. 1. $A_\a^\S \in \ol{\co}\S$ for all $\a\in \A$. 2.
$(A_\a^\S)$ is *strongly right (left) asymptotically $\S$-invariant*, i.e., $\lim_\a A_\a^\S f-A_\a^\S S_g f=0{\enspace}\left(\lim_\a A_\a^\S f- S_g A_\a^\S f=0\right)$ for all $f\in C(K)$ and $g\in G$. The net $(A_\a^\S )$ is called a *strong $\S$-ergodic net* if it is a strong right and left $\S$-ergodic net. Clearly, the Cesàro means $\frac{1}{n}\sum_{j=1}^{n}S^j$ of a contraction $S\in \L(C(K))$ form a strong $\{S^j:j\in\N\}$-ergodic net and we refer to [@eberlein49; @sato78; @schreiber12] for many more examples. The semigroup $\S$ is called *mean ergodic* if $\ol{\co}\S$ contains a zero element $P$ (see [@berglund89 Chapter 1.1]), which is called the *mean ergodic projection of $\S$*. (See e.g. Krengel [@krengel85 Chapter 2] for an introduction to this concept.) Denote by $\Fix\S=\{f\in C(K): S_g f=f\;\forall g\in G\}$ and $\Fix\S'=\{\nu\in C(K)': S_g' \nu=\nu \;\forall g\in G\}$ the fixed spaces of $\S$ and $\S'$, respectively, and by $\lin\rg(I-\S)$ the linear span of the set $\rg(I-\S)=\{f-S_g f: f\in C(K), g\in G\}$. We recall some characterizations of mean ergodicity from Theorem 1.7 and Corollary 1.8 in [@schreiber12]. \[prop:mean-ergodic\] Let $G$ be represented on $C(K)$ by a bounded (right) amenable semigroup $\S=\{S_g:g\in G\}$. Then the following assertions are equivalent. 1. $\S$ is mean ergodic with mean ergodic projection $P$. 2. $\Fix\S$ separates $\Fix\S'$. 3. $C(K)= \Fix\S\oplus\ol{\lin}\rg(I-\S)$. 4. $A_\a^\S f$ converges weakly (to a fixed point of $\S$) for some/every strong (right) $\S$-ergodic net $(A_\a^\S)$ and all $f\in C(K)$. 5. $A_\a^\S f$ converges strongly (to a fixed point of $\S$) for some/every strong (right) $\S$-ergodic net $(A_\a^\S)$ and all $f\in C(K)$. The limit $P$ of the nets $(A_\a^\S)$ in the weak (strong, resp.) operator topology is the mean ergodic projection of $\S$ mapping $C(K)$ onto $\Fix \S$ along $\ol{\lin}\rg(I-\S)$. 
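For a finite space $K$ the proposition can be seen concretely: $C(K)\cong\C^k$, a Markov operator is a row-stochastic matrix, and the Cesàro means converge to the mean ergodic projection. The sketch below uses an arbitrarily chosen irreducible chain (an illustration, not an example from the text) and verifies that the limit $P$ is a projection onto the constants, with $PS_g=S_gP=P$ and rows given by the unique invariant probability vector.

```python
import numpy as np

# A Markov operator on C(K) for K = {0, 1, 2}: a row-stochastic matrix S
# acting on functions f in C^3 by (S f)(x) = sum_y S[x, y] * f(y).
# The chain is irreducible, so Fix S = constants.
S = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

def cesaro_mean(S, N):
    """Strong ergodic net A_N = (1/N) * sum_{j=1}^N S^j."""
    A, power = np.zeros_like(S), np.eye(len(S))
    for _ in range(N):
        power = power @ S
        A += power
    return A / N

P = cesaro_mean(S, 4000)
# P is (up to the truncation error) the mean ergodic projection: every row
# equals the invariant probability vector pi (here pi = (0.2, 0.4, 0.4)),
# P @ S = S @ P = P, and P is idempotent.
```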
Let now $G$ be represented on $C(K)$ by a semigroup $\S=\{S_g: g\in G\}$ of *Markov operators*, i.e., of positive operators satisfying $S_g{\mathbbm{1}}={\mathbbm{1}}$ for all $g\in G$. Then $\S$ consists of contractions and hence $\S$ is bounded. Assume that the semigroup $\S$ is *uniquely ergodic*, i.e., $\Fix\S'=\C\cdot \mu$ for some probability measure $\mu\in C(K)'$. We denote by $S_{g,2}$ the continuous extension of the operator $S_g\in\S$ to the space $L^2(K,\mu)$. The corresponding extended semigroup is $\S_2:=\{S_{g,2}: g\in G\}$ with $\S_2^*:=\{S_{g,2}^*:g\in G\}$ the semigroup of Hilbert space adjoints. The semigroup $\S$ is called *ergodic with respect to $\mu$* if $\Fix\S_2=\C\cdot{\mathbbm{1}}$. Since in $L^2(K,\mu)$ all contraction semigroups are mean ergodic (see, e.g., [@schreiber12 Corollary 1.9]), $\S_2$ is mean ergodic. In fact, the above assumptions even imply mean ergodicity on $C(K)$ (cf. Eisner, Farkas, Haase and Nagel [@efhn Theorem 10.6] and Krengel [@krengel85 Chapter 5, Section 5.1] for representations of $\N$). \[prop:uniquely-ergodic-m-erg\] Let $G$ be represented on $C(K)$ by a right amenable semigroup $\S=\{S_g:g\in G\}$ of Markov operators. Then (1) implies (2) in the following statements. 1. $\S$ is uniquely ergodic. 2. $\S$ is mean ergodic and $\Fix\S=\C\cdot{\mathbbm{1}}$. If there exists $0<\mu\in \Fix\S'$, then (2) implies (1). (1)${\Rightarrow}$(2): Since $\Fix\S$ contains the constant functions, it separates $\Fix\S'$ and hence $\S$ is mean ergodic by Proposition \[prop:mean-ergodic\]. To show $\Fix\S=\C\cdot{\mathbbm{1}}$ it suffices to prove that $\Fix\S'$ separates $\Fix\S$. To see this, take $0\neq x\in \Fix\S$ and let $P$ be the mean ergodic projection of $\S$. Choose $x'\in C(K)'$ with ${\left\langle x,x' \right\rangle}\neq 0$.
Since $Px\in\ol{\co}\S x=\{x\}$ this implies ${\left\langle x,P'x' \right\rangle}={\left\langle Px,x' \right\rangle}={\left\langle x,x' \right\rangle}\neq 0$ and $P'x'\in\Fix\S'$ follows by taking adjoints in the equality $PS_g=P$ for all $g\in G$. Assume now that there exists $0<\mu\in \Fix\S'$.\ (2)${\Rightarrow}$(1): If $\Fix\S=\C\cdot{\mathbbm{1}}$ separates $\Fix\S'$, then $\Fix\S'$ can be at most one dimensional. But by hypothesis $\Fix\S'$ is at least one dimensional and hence $\Fix\S'=\C\cdot\mu$. Notice that if in the situation of Proposition \[prop:uniquely-ergodic-m-erg\] $\S$ is also left amenable, then Day’s fixed point theorem [@day73 Chapter V, Section 2, Theorem 5] ensures the existence of a probability measure $\mu\in \Fix\S'$. This leads to the following corollary. \[cor:uniquely-ergodic-m-erg\] Let $G$ be represented on $C(K)$ by an amenable semigroup $\S=\{S_g:g\in G\}$ of Markov operators. Then the following assertions are equivalent. 1. $\S$ is uniquely ergodic. 2. $\S$ is mean ergodic and $\Fix\S=\C\cdot{\mathbbm{1}}$. In the following we will always assume that $G$ is an amenable semigroup. Let $\widehat{G}$ be the set of all *characters of $G$*, i.e., the set of all continuous multiplicative maps $\chi:G\to \Torus$ (see [@williamson67]), and take $\chi\in\widehat{G}$. Then we consider the semigroup $\chi \S:=\{\chi(g)S_g: g\in G\}$ and denote by $(\chi\S)':=\{(\chi(g)S_g)': g\in G\}$ the adjoint semigroup on $C(K)'$. Notice that $\chi\S$ is amenable as a bounded representation of the amenable semigroup $G$. Again, $\chi\S$ extends to $L^2(K,\mu)$ and the extended semigroup $\chi\S_2$ is contractive, hence mean ergodic on $L^2(K,\mu)$. But unlike $\S$, the semigroup $\chi\S$ is not always mean ergodic on $C(K)$. In [@robinson94 Proposition 3.1] Robinson gave an elaborate example of such a situation. Here is a much simpler one due to Roland Derndinger (oral communication).
\[ex:roland\] Consider the set $\{-1,1\}^\N$ endowed with the product topology and for $i\in\N$ define the sequence $x^{(i)}=(x^{(i)}_n)_{n\in\N}\in \{-1,1\}^\N$ by $$x^{(i)}_n:=\left\{ \begin{array}{ll} (-1)^n,& n<i\\ (-1)^{n+1},& n\ge i. \end{array} \right .$$ If $\ph$ denotes the left shift on $\{-1,1\}^\N$, i.e., $\ph((x_n))=(x_{n+1})$, then the set $K:=\{\pm x^{(i)}: i\in\N\}\subset \{-1,1\}^\N$ is a closed $\ph$-invariant subset of $\{-1,1\}^\N$. Let $S$ be the corresponding Koopman operator on $C(K)$, i.e., $Sf=f\circ\ph$ for $f\in C(K)$. We claim that the semigroup $\S:=\{S^n: n\in \N\}$ is uniquely ergodic, but if $\chi\in\widehat{\N}$ is given by $\chi(n)=(-1)^n$, then $\chi\S=\{(-S)^n: n\in\N\}$ is not mean ergodic. First, notice that $\Fix \S=\C\cdot{\mathbbm{1}}$. Indeed, if $f\in \Fix\S$, then for all $i\in\N$ there exists $n>i$ such that $f(\pm x^{(i)})=S^n f(\pm x^{(i)})=f(x^{(1)}),$ and thus $f$ is constant. To show that $\S$ is uniquely ergodic it thus suffices by Corollary \[cor:uniquely-ergodic-m-erg\] to show that $\S$ is mean ergodic. Let $f\in C(K)$. Then $\frac{1}{N}\sum_{n=0}^{N-1}S^nf$ converges pointwise to the continuous function $\pm x^{(i)}\mapsto \frac{1}{2}(f(x^{(1)})+f(-x^{(1)}))$ since $f(\ph^n(\pm x^{(i)}))=f((-1)^{n+i}(\pm x^{(1)}))$ for all $n>i$. Since weak and pointwise convergence coincide for bounded sequences, it follows from Proposition \[prop:mean-ergodic\] that $\S$ is mean ergodic. We now show that $\chi\S= \{(-S)^n: n\in\N\}$ is not mean ergodic. Let $f_1\in C(\{-1,1\}^\N)$ be defined by $f_1((x_n))= x_1$ and denote its restriction to $K$ again by $f_1$. Then $\frac{1}{N}\sum_{n=0}^{N-1}(-S)^nf_1$ converges pointwise to the function $h$ defined by $h(\pm x^{(i)})=\pm 1$ for all $i\in\N$. But $h\notin C(K)$ since $x^{(i)}\rightarrow -x^{(1)}$ and $h(x^{(i)})=1\nrightarrow -1=h(-x^{(1)})$. Hence the sequence $\left(\frac{1}{N}\sum_{n=0}^{N-1}(-S)^nf_1\right)_N$ does not converge in $C(K)$ and thus $\chi\S$ is not mean ergodic.
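The divergence in this example is easy to check numerically. The sketch below (an illustration of the computation above, with the sequences truncated to finitely many coordinates) confirms that the Cesàro averages of $(-S)^n f_1$ tend to $+1$ at every $x^{(i)}$ but equal $-1$ at the limit point $-x^{(1)}$, so the pointwise limit $h$ cannot be continuous.

```python
import numpy as np

def x_i(i, length):
    """Truncation of x^{(i)}: entry n is (-1)^n for n < i, (-1)^(n+1) for n >= i."""
    return np.array([(-1) ** n if n < i else (-1) ** (n + 1)
                     for n in range(1, length + 1)])

def cesaro(point, N):
    """(1/N) * sum_{n=0}^{N-1} (-1)^n * f_1(phi^n(point)), where f_1 picks the
    first coordinate and phi is the left shift, so f_1(phi^n(point)) = point[n]."""
    return sum((-1) ** n * point[n] for n in range(N)) / N

# At x^{(i)} the averages tend to +1, at -x^{(1)} they equal -1, yet
# x^{(i)} -> -x^{(1)} coordinatewise: the limit function is discontinuous.
```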
Motivated by this example and various papers in mathematical physics on diffraction theory of quasicrystals and on the Bombieri-Taylor conjecture (see e.g. [@oliveira98; @hof95; @lenz09a; @lenz09]), we now characterize the mean ergodicity of the semigroup $\chi\S$. Let us first recall some facts about the lattice structure of $C(K)'$ (see [@efhn Appendix D.2] for details). For a bounded linear functional $\nu\in C(K)'$ one defines a mapping $|\nu|$ by $${\left\langle |\nu|,f \right\rangle}:=\sup\{|{\left\langle \nu, h \right\rangle}|: h\in C(K), |h|\le f\}$$ for $0\le f\in C(K)$ and extends it uniquely to a bounded linear functional $|\nu|\in C(K)'$. With this structure the space $C(K)'$ becomes a Banach lattice. On the other hand, the space $M(K)$ of regular Borel measures on $K$ is a Banach lattice with the total variation $|\nu|$ of a measure $\nu\in M(K)$ defined by $$|\nu|(E):=\sup\left\{\sum_{j=1}^\infty|\nu(E_j)|: (E_j)_{j\in\N} \text{ a partition of } E\right\},\quad (E\subset K\text{ measurable}),$$ and the norm $\|\nu\|:=|\nu|(K)$. The notation $|\nu|$ for a functional $\nu\in C(K)'$ and a measure $\nu\in M(K)$ is justified since the mapping $$\begin{aligned} d:M(K)&\to C(K)'\\ \nu&\mapsto d\nu\end{aligned}$$ in the Riesz Representation Theorem is a lattice isomorphism. For a function $h\in L^2(K,\mu)$ we denote by $\ol{h} d\mu\in C(K)'$ the functional defined by $${\left\langle \ol{h}d\mu,f \right\rangle}:={\left\langle f,h \right\rangle}_{L^2(K,\mu)}=\int_K f(x)\ol{h(x)}\,d\mu(x) \quad (f\in C(K)).$$ \[lemma:Fixraum\_dualer\_Fixraum\] Let $G$ be represented on $C(K)$ by a semigroup $\S=\{S_g:g\in G\}$ of Markov operators. If $\S$ is uniquely ergodic with invariant measure $\mu$, then for each $\chi\in\widehat{G}$ the map $$\begin{aligned} L^2(K,\mu)\supset\Fix(\chi\S_2)^*&\to\Fix (\chi\S)'\subset C(K)'\\ h\quad&\mapsto\quad \ol{h}d\mu\end{aligned}$$ is antilinear and bijective. To see that the map is well-defined, let $h\in\Fix(\chi\S_2)^*$. 
For all $f\in C(K)$ and $g\in G$ we have $$\begin{aligned} {\left\langle (\chi(g)S_g)'(\ol{h} d\mu),f \right\rangle}&={\left\langle \ol{h} d\mu,\chi(g)S_g f \right\rangle}\\ &={\left\langle \chi(g)S_{g,2} f,h \right\rangle}_{L^2(K,\mu)}\\ &={\left\langle f,(\chi(g)S_{g,2})^*h \right\rangle}_{L^2(K,\mu)}\\ &={\left\langle f,h \right\rangle}_{L^2(K,\mu)} ={\left\langle \ol{h} d\mu,f \right\rangle}, \end{aligned}$$ yielding $\ol{h}d\mu\in\Fix(\chi\S)'$. Since antilinearity and injectivity are clear, it remains to show surjectivity. Let $\nu\in\Fix(\chi\S)'$. We claim that $|\nu|\le S_g'|\nu|$ for all $g\in G$. Indeed, if $0\le f\in C(K)$ and $g\in G$, then $$\begin{aligned} {\left\langle |\nu|,f \right\rangle}&=\sup_{|\tilde{f}|\le f}|\langle\nu,\tilde{f}\rangle| =\sup_{|\tilde{f}|\le f}|\langle (\chi(g)S_g)'\nu, \tilde{f}\rangle|\\ &=\sup_{|\tilde{f}|\le f}|\langle\nu,\chi(g)S_g \tilde{f}\rangle|\\ &\le \sup_{|\tilde{f}|\le f}\langle|\nu|,|S_g \tilde{f}|\rangle\\ &\le \sup_{|\tilde{f}|\le f}\langle|\nu|,S_g|\tilde{f}|\rangle\\ &={\left\langle |\nu|,S_g f \right\rangle}={\left\langle S_g'|\nu|,f \right\rangle}.\end{aligned}$$ If $0\le f\in C(K)$ and $g\in G$, then $$0\le{\left\langle S_g'|\nu|-|\nu|,f \right\rangle}\le {\left\langle S_g'|\nu|-|\nu|,\|f\|_\infty{\mathbbm{1}}\right\rangle}=\|f\|_\infty{\left\langle |\nu|,S_g{\mathbbm{1}}-{\mathbbm{1}}\right\rangle}=0.$$ Hence $|\nu|\in \Fix\S'=\C\cdot \mu$ by unique ergodicity and thus $\nu$ is absolutely continuous with respect to $\mu$. The Radon-Nikodým Theorem then implies the existence of a function $h\in L^\infty(K,\mu)$ such that $\nu=\ol{h} d\mu$. The same calculation as above shows that $h\in \Fix(\chi \S_2)^*$. Note that for a contraction $T$ on a Hilbert space $H$ the fixed spaces of $T$ and its adjoint $T^*$ coincide. 
Indeed, for each $x\in \Fix T$ we have $$\begin{aligned} \|T^*x-x\|^2&=\|T^*x\|^2-{\left\langle T^*x,x \right\rangle}-{\left\langle x,T^*x \right\rangle}+\|x\|^2\\ &\le \|x\|^2-{\left\langle x,Tx \right\rangle}-{\left\langle Tx,x \right\rangle}+\|x\|^2=0,\end{aligned}$$ which yields $\Fix T=\Fix T^*$ by symmetry. Now, if $G$ is represented on $C(K)$ by a semigroup $\S$ of Markov operators, then $\S_2$ consists of contractions on $L^2(K,\mu)$ and thus the fixed spaces of $\S_2$ and $\S_2^*$ coincide. Hence, it follows from Lemma \[lemma:Fixraum\_dualer\_Fixraum\] applied to the constant character ${\mathbbm{1}}\in\widehat{G}$, that unique ergodicity of $\S$ with respect to $\mu$ implies ergodicity of $\S$ with respect to $\mu$. \[lemma:dimension-fixraum\] Let $G$ be represented on $C(K)$ by a semigroup $\S=\{S_g:g\in G\}$ of Markov operators. If $\S$ is ergodic with respect to some invariant measure $\mu$ and $\chi\in\widehat{G}$, then $\dim\Fix \chi\S_2\le 1$. The semigroup $\S_2$ consists of contractions on $L^2(K,\mu)$ and thus the closure $\TT$ of $\S_2$ with respect to the weak operator topology contains a unique minimal idempotent $Q$ (cf. [@efhn Theorem 16.11]). By [@efhn Theorem 16.22] the minimal ideal $\G=\TT Q$ of $\TT$ is a compact group (even for the strong operator topology) and the map $T\mapsto T|_{\ran Q}$ from $\G$ to $\{T|_{\ran Q}: T\in\TT\}$ is a topological isomorphism of compact groups. The projection $Q$ is positive since each operator in $\S_2$ is positive. Moreover, $Q$ is an orthogonal projection onto its range with $Q{\mathbbm{1}}={\mathbbm{1}}$. Since ${\left\langle Qf,{\mathbbm{1}}\right\rangle}_{L^2}={\left\langle f,{\mathbbm{1}}\right\rangle}_{L^2}>0$ for each $0<f\in L^2(K,\mu)$, $Q$ is strictly positive on $L^2(K,\mu)$. Hence $\ran Q$ is a sublattice of $L^2(K,\mu)$ by [@schaefer74 Proposition 11.5]. 
If $T_g$ denotes the restriction of $S_{g,2}$ to $\ran Q$, then $T_g$ is invertible with positive inverse, hence $T_g$ is a lattice homomorphism with $T_g{\mathbbm{1}}={\mathbbm{1}}$ for each $g\in G$. By [@efhn Theorem 7.18] each $T_g$ is then an algebra homomorphism on the subalgebra $\ran Q\cap L^\infty(K,\mu)$. Now, take $\chi\in\widehat{G}$. If $f \in \Fix\chi\S_2$, then $f$ generates a one-dimensional $\S_2$-invariant subspace of $L^2(K,\mu)$ and hence by [@efhn Theorem 16.29] is contained in $\ran Q$. Since $S_{g,2}$ is a lattice homomorphism on $\ran Q$, we have $|f|=|\ol{\chi(g)}S_{g,2}f|=S_{g,2}|f|$ for each $g\in G$, hence by ergodicity, $|f|\in\C\cdot{\mathbbm{1}}$. So, if $f,h\in \Fix\chi\S_2\setminus\{0\}$ we have $f,h\in\ran Q\cap L^\infty(K,\mu)$ and we may assume $|f|=|h|={\mathbbm{1}}$. We then obtain $$S_{g,2}(f\cdot \ol{h})=T_g(f\cdot\ol{h})=T_gf\cdot T_g\ol{h}=\ol{\chi(g)}f\cdot \chi(g)\ol{h}=f\cdot \ol{h}$$ for each $g\in G$. Hence $f\cdot\ol{h}\in\Fix\S_2$ and therefore $f\cdot \ol{h}=c\cdot{\mathbbm{1}}$ for some $c\in \C$, which yields $f=c\cdot h$. The following theorem is our first main result. \[thm:mean-ergodic\] Let $\S=\{S_g:g\in G\}$ be a representation of a (right) amenable semigroup $G$ as Markov operators on $C(K)$ and assume that $\S$ is uniquely ergodic with invariant measure $\mu$. For $\chi\in\widehat{G}$ the following assertions are equivalent. (1) $\Fix\chi\S_2\subseteq \Fix\chi\S$. (2) $\chi\S$ is mean ergodic with mean ergodic projection $P_\chi$. (3) $\Fix\chi\S$ separates $\Fix(\chi\S)'$. (4) $C(K)= \Fix\chi\S\oplus\ol{\lin}\rg(I-\chi\S)$. (5) $A_\a^{\chi\S} f$ converges weakly (to a fixed point of $\chi\S$) for some/every strong (right) $\chi\S$-ergodic net $(A_\a^{\chi\S})$ and all $f\in C(K)$. (6) $A_\a^{\chi\S} f$ converges strongly (to a fixed point of $\chi\S$) for some/every strong (right) $\chi\S$-ergodic net $(A_\a^{\chi\S})$ and all $f\in C(K)$.
The limit $P_\chi$ of the nets $(A_\a^{\chi\S})$ in the strong (weak, resp.) operator topology is the mean ergodic projection of $\chi\S$ mapping $C(K)$ onto $\Fix \chi\S$ along $\ol{\lin}\rg(I-\chi\S)$. The equivalence of the statements (2) to (6) follows directly from Proposition \[prop:mean-ergodic\]. (1)${\Rightarrow}$(3): If $0\neq\nu\in \Fix(\chi\S)'$, then $\nu=\ol{h}d\mu$ by Lemma \[lemma:Fixraum\_dualer\_Fixraum\] for some $0\neq h\in \Fix(\chi\S_2)^*$. Since $\chi\S_2$ consists of contractions on $L^2(K,\mu)$, we have $\Fix(\chi\S_2)^*=\Fix\chi\S_2$. Since $\Fix\chi\S_2\subseteq \Fix\chi\S$ by (1), this yields $h\in \Fix\chi\S$ and $${\left\langle \nu,h \right\rangle}=\|h\|^2_{L^2(K,\mu)}>0.$$ (3)${\Rightarrow}$(1): Suppose $f\in \Fix\chi\S_2\setminus \Fix\chi\S$. Then $\dim\Fix\chi\S_2=1$ and $\dim\Fix\chi\S=0$ by Lemma \[lemma:dimension-fixraum\], while $\dim\Fix(\chi\S)'=1$ by Lemma \[lemma:Fixraum\_dualer\_Fixraum\]. Hence $\Fix\chi\S$ does not separate $\Fix(\chi\S)'$. As Example \[ex:roland\] shows, mean ergodicity of $\chi\S$ does not hold on $C(K)$ in general. The following theorem characterizes mean ergodicity of $\chi\S$ on the closed invariant subspace $Y_f:=\ol{\lin}\chi\S f$ for some given $f\in C(K)$. This extends results of Robinson [@robinson94 Theorem 1.1] and Lenz [@lenz09 Theorem 1]. For a closed subspace $H\subset L^2(K,\mu)$ we denote by $P_H$ the orthogonal projection onto $H$. \[thm:mean-ergodic-in-f\] Let $\S=\{S_g:g\in G\}$ be a representation of a (right) amenable semigroup $G$ as Markov operators on $C(K)$ and assume that $\S$ is uniquely ergodic with invariant measure $\mu$. For $\chi\in\widehat{G}$ and $f\in C(K)$ the following assertions are equivalent. (1) $P_{\Fix\chi\S_2}f\in \Fix\chi\S$. (2) $\chi\S$ is mean ergodic on $Y_f$ with mean ergodic projection $P_\chi$. (3) $\Fix\chi\S|_{Y_f}$ separates $\Fix(\chi\S)|_{Y_f}'$. (4) $f\in \Fix\chi\S\oplus\ol{\lin}\rg(I-\chi\S)$. 
(5) $A_\a^{\chi\S} f$ converges weakly (to a fixed point of $\chi\S$) for some/every strong (right) $\chi\S$-ergodic net $(A_\a^{\chi\S})$. (6) $A_\a^{\chi\S} f$ converges strongly (to a fixed point of $\chi\S$) for some/every strong (right) $\chi\S$-ergodic net $(A_\a^{\chi\S})$. The limit $P_\chi$ of $A_\a^{\chi\S}$ in the strong (weak, resp.) operator topology on $Y_f$ is the mean ergodic projection of $\chi\S|_{Y_f}$ mapping $Y_f$ onto $\Fix\chi\S|_{Y_f}$ along $\ol{\lin}\rg(I-\chi\S|_{Y_f})$. The equivalence of the statements (2) to (6) follows directly from Proposition 1.11 in [@schreiber12]. (6)${\Rightarrow}$(1): By von Neumann’s Ergodic Theorem $P_{\Fix\chi\S_2}f$ is the limit of $A_\a^{\chi\S} f$ in $L^2(K,\mu)$. If $A_\a^{\chi\S} f$ converges strongly in $C(K)$, then the limits coincide almost everywhere and hence $P_{\Fix\chi\S_2}f$ has a continuous representative in $\Fix\chi\S$. (1)${\Rightarrow}$(4): Let $\nu\in C(K)'$ vanish on $\Fix\chi\S\oplus\ol{\lin}\rg(I-\chi\S)$. Then, in particular, ${\left\langle \nu,h \right\rangle}={\left\langle \nu,\chi(g)S_g h \right\rangle}={\left\langle (\chi(g)S_g)' \nu, h \right\rangle}$ for all $h\in C(K)$ and $g\in G$ and thus $\nu\in \Fix(\chi\S)'$. Hence by Lemma \[lemma:Fixraum\_dualer\_Fixraum\] there exists $h\in \Fix(\chi\S_2)^*$ such that $\nu=\ol{h} d\mu$. Let $(A_\a^{\chi\S_2})_{\a\in\mathcal{A}}$ be a strong $\chi\S_2$-ergodic net on $L^2(K,\mu)$. Then $(A_\a^{\chi\S_2})^* h=h$ for all $\a\in\mathcal{A}$ and von Neumann’s Ergodic Theorem implies $${\left\langle \nu,f \right\rangle}={\left\langle f,h \right\rangle}_{L^2}={\left\langle A_\a^{\chi\S_2} f,h \right\rangle}_{L^2}\to \langle \underbrace{P_{\Fix\chi\S_2}f}_{\in C(K)},h\rangle_{L^2}= \langle\nu,\underbrace{P_{\Fix\chi\S_2}f}_{\in \Fix\chi\S}\rangle=0.$$ Hence the Hahn-Banach Theorem yields $f\in \Fix\chi\S\oplus\ol{\lin}\rg(I-\chi\S)$ since $\Fix\chi\S\oplus\ol{\lin}\rg(I-\chi\S)$ is closed by Theorem 1.9 in Krengel [@krengel85 Chap. 2]. 
In [@lenz09 Theorem 1] Lenz showed that for Koopman representations of locally compact, $\sigma$-compact abelian groups the assertion (1) of Theorem \[thm:mean-ergodic-in-f\] is equivalent to the convergence of $\chi\S$-ergodic nets associated to so-called *van Hove sequences* (see Schlottmann [@schlottmann00 p. 145]), a special class of *Følner sequences* (see Paterson [@paterson88 Chapter 4]). \[rem:p\_chi\] Notice that if $P_{\Fix\chi\S_2}f=0$ in the situation of Theorem \[thm:mean-ergodic-in-f\], then $\chi\S$ is mean ergodic on $Y_f$ with mean ergodic projection $P_\chi=0$. To see this, let $\nu\in C(K)'$ vanish on $\ol{\lin}\rg(I-\chi\S)$. Then the same argument as in the proof of the implication (1)${\Rightarrow}$(4) in Theorem \[thm:mean-ergodic-in-f\] shows that $\nu$ vanishes at $f$, which yields the claim. We now recall the concept of a uniform family of ergodic nets from [@schreiber12] and apply it to operators on the Banach space $C(K)$. \[definition:uniformly-family\] Suppose that the semigroup $G$ is represented on $C(K)$ by bounded semigroups $\S_i=\{S_{i,g}:g\in G\}$ for each $i$ in some index set $I$ such that the $\S_i$ are *uniformly bounded*, i.e., $\sup_{i\in I}\sup_{g\in G}\|S_{i,g}\|<\infty$. Let $\A$ be a directed set and let $(A_\a^{\S_i})_{\a\in\A}\subset \L(C(K))$ be a net of operators for each $i\in I$. Then $\{(A_\a^{\S_i})_{\a\in\A}: i\in I\}$ is a *uniform family of right (left) ergodic nets* if 1. $\forall\a\in\A, \forall \e>0, \forall f_1,\dots, f_m\in C(K), \exists g_1,\dots, g_n\in G$ such that for each $i\in I$ there exists a convex combination $\sum_{j=1}^{n}c_{i,j} S_{i,g_j}\in \co\S_i$ satisfying $$\sup_{i\in I}\|A_\a^{\S_i}f_k-\textstyle{\sum_{j=1}^{n}}c_{i,j} S_{i,g_j}f_k\|_\infty<\e\quad \forall k\in\{1,\dots, m\};$$ 2.
$\displaystyle \lim_\a \sup_{i\in I}\|A_\a^{\S_i}f-A_\a^{\S_i}S_{i,g}f\|_\infty=0{\enspace}\left( \lim_\a \sup_{i\in I}\|A_\a^{\S_i}f-S_{i,g}A_\a^{\S_i}f\|_\infty= 0\right) {\enspace}\forall g\in G, f\in~C(K).$ The set $\{(A_\a^{\S_i})_{\a\in\A}: i\in I\}$ is called a *uniform family of ergodic nets* if it is a uniform family of left and right ergodic nets. Notice that if $\{(A_\a^{\S_i})_{\a\in\A}: i\in I\}$ is a uniform family of (right) ergodic nets, then each $(A_\a^{\S_i})_{\a\in\A}$ is a strong (right) $\S_i$-ergodic net. The simplest non-trivial example of a uniform family of ergodic nets is the family of weighted Cesàro means $$\left\{\left(\frac{1}{n}\sum_{j=1}^{n}(\lambda S)^j\right)_{n\in\N}: \lambda\in\Torus\right\}$$ for a contraction $S\in \L(C(K))$. See [@schreiber12 Proposition 2.2] for more examples. We now choose a subset $\Lambda$ of characters in $\widehat{G}$ and consider the semigroups $\chi\S$ for each $\chi\in\Lambda$. If $\{(A_\a^{\chi\S})_{\a\in\mathcal{A}}: \chi\in\Lambda\}$ is a uniform family of right ergodic nets, $f\in C(K)$ and $\chi\S$ is right amenable and mean ergodic on $\ol{\lin}\chi\S f$ with mean ergodic projection $P_\chi$ for each $\chi\in \Lambda$, then $A_\a^{\chi\S} f$ converges (in the supremum norm) to $P_\chi f$ for each $\chi\in\Lambda$ by Theorem \[thm:mean-ergodic-in-f\]. The next corollary gives a sufficient condition for this convergence to be uniform in $\chi\in\Lambda$. It generalizes Theorem 2 of Lenz [@lenz09] to right amenable semigroups of Markov operators. \[cor:uniform-convergence\_C\_K\] Let $G$ be represented on $C(K)$ by a right amenable semigroup $\S=\{S_g:g\in G\}$ of Markov operators and let $\S$ be uniquely ergodic with invariant measure $\mu$. Consider the semigroups $\chi\S$ for each $\chi$ in a compact set $\Lambda\subset\widehat{G}$. If $\{(A_\a^{\chi\S})_{\a\in \A}: \chi\in \Lambda\}$ is a uniform family of right ergodic nets and if $f\in C(K)$ satisfies 1. 
$P_{\Fix\chi\S_2}f\in C(K)$ for each $\chi\in \Lambda$, 2. the map $\Lambda \to \R_+, \chi\mapsto\|A_\a^{\chi\S}f- P_\chi f\|_\infty$ is continuous for each $\a\in\A$, then $$\lim_\a\sup_{\chi\in \Lambda}\|A_\a^{\chi\S}f-P_\chi f\|_\infty = 0.$$ By Theorem \[thm:mean-ergodic-in-f\] and our hypotheses, the semigroup $\chi\S$ is mean ergodic on $\ol{\lin}\chi\S f$ for all $\chi\in\Lambda$. The result then follows directly from Theorem 2.4 in [@schreiber12]. The following corollary is a direct consequence. It generalizes Theorem 2.10 of Assani [@assani03], who considered Koopman representations of the semigroup $(\N,+)$ and the Følner sequence given by $F_n=\{0,1,\dots,n-1\}$. \[cor:assani\] Let $H$ be a locally compact group with left Haar measure $|\cdot|$ and suppose that $G\subset H$ is a subsemigroup such that there exists a Følner net $(F_\a)_{\a\in\A}$ in $G$. Let $G$ be represented on $C(K)$ by a semigroup $\S=\{S_g: g\in G\}$ of Markov operators and assume that $\S$ is uniquely ergodic with invariant measure $\mu$. If $f\in C(K)$ satisfies $P_{\Fix\chi\S_2}f=0$ for all $\chi$ in a compact set $\Lambda\subset\widehat{G}$, then $$\lim_{\a}\sup_{\chi\in\Lambda}\left\|\frac{1}{|F_\a|}\int_{F_\a}\chi(g)S_g f \;dg\right\|_\infty=0.$$ If $(F_\a)$ is a Følner net in $G$, then $G$, and consequently $\S$, is right amenable. Since $\Lambda\subset \widehat{G}$ is compact, it follows from [@schreiber12 Proposition 2.2 (f)] that $$\left\{\left(\frac{1}{|F_\a|}\int_{F_\a}\chi(g)S_g \;dg\right)_{\a\in\A}: \chi\in\Lambda\right\}$$ is a uniform family of right ergodic nets. If $P_{\Fix\chi\S_2}f=0$ for all $\chi\in\Lambda$, then by Remark \[rem:p\_chi\] the conditions (1) and (2) of Corollary \[cor:uniform-convergence\_C\_K\] are satisfied since the map $\chi\mapsto \frac{1}{|F_\a|}\int_{F_\a}\chi(g)S_g f \;dg$ is continuous.
Hence $$\lim_{\a}\sup_{\chi\in\Lambda}\left\|\frac{1}{|F_\a|}\int_{F_\a}\chi(g)S_g f \;dg\right\|_\infty=0.$$ Semigroups of Koopman operators {#sec:koopman} =============================== In this section we consider semigroups of Koopman operators on the space $C(K,\C^N)$ of continuous $\C^N$-valued functions on a compact space $K$ for some $N\in\N$. The space $\C^N$ will be endowed with the Euclidean norm $x\mapsto\|x\|_2=\sqrt{{\left\langle x,x \right\rangle}_2}$ and the space $C(K,\C^N)$ with the norm $f\mapsto\|f\|=\sup_{x\in K}\|f(x)\|_2$. We identify $C(K,\C^N)$ with $C(K)^N$ and write $f=(f_1,\dots,f_N)\in C(K,\C^N)$ with coordinate functions $f_i\in C(K)$. As before, $G$ is a semitopological semigroup. A *semigroup action of $G$ on $K$* is a continuous map $$G\times K\to K,{\enspace}(g,x)\mapsto gx$$ satisfying $$(g_1 g_2)x=g_1(g_2x)\text{ for all } g_1,g_2\in G \text{ and } x\in K.$$ In this case we say that *$G$ acts on $K$*. Let $G$ be a semitopological semigroup acting on $K$ and let $\S:=\{S_g: g\in G\}$ be the corresponding *Koopman representation* on $C(K,\C^N)$, i.e., $S_g f(x)=f(gx)$ for $f\in C(K,\C^N)$, $g\in G$ and $x\in K$. To emphasize the dependence on $N$ we sometimes write $\S^{(N)}$ for the semigroup $\S$ on $C(K,\C^N)$. We say that a measure $\mu$ on $K$ is *$G$-invariant* if $\mu(A)=\mu(g^{-1}A)$ for all Borel sets $A\subseteq K$, where $g^{-1}A=\{x\in K: gx\in A\}$. Notice that this is equivalent to $\mu\in{\Fix\S^{(1)}}'$. If $\mu$ is a $G$-invariant measure, we denote by $\S_2:=\S_2^{(N)}:=\{S_{g,2}: g\in G\}$ the extension of the semigroup $\S^{(N)}$ to $L^2(K,\C^N,\mu)$. The action of $G$ on $K$ is called *ergodic with respect to $\mu$* if there is no non-trivial measurable $G$-invariant set, or equivalently if $\Fix\S_2^{(1)}=\C\cdot {\mathbbm{1}}\subseteq L^2(K,\mu)$. The action of $G$ on $K$ is called *uniquely ergodic* if there exists a unique $G$-invariant probability measure $\mu$ on $K$. 
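The notions just introduced can be made concrete in the simplest classical setting. The sketch below is our own illustration (all identifiers are ours, not from the text): the irrational rotation $x\mapsto x+\alpha \pmod 1$ generates an action of $G=(\N,+)$ on the circle $K=\R/\Z$ whose Koopman operators are $S_jf(x)=f(x+j\alpha)$, and whose unique invariant probability measure is Lebesgue measure.

```python
import numpy as np

# Hedged numerical sketch: the irrational rotation x -> x + alpha (mod 1)
# gives a uniquely ergodic action of the semigroup (N, +) on K = R/Z; its
# unique invariant probability measure mu is Lebesgue measure.  The Koopman
# operators act by S_j f(x) = f(x + j*alpha).
ALPHA = np.sqrt(2.0) - 1.0  # an irrational rotation number

def cesaro_mean(f, x, n):
    """Cesaro average (1/n) sum_{j=0}^{n-1} S_j f(x) along the Koopman orbit."""
    j = np.arange(n)
    return np.mean(f((x + j * ALPHA) % 1.0))

f = lambda x: np.cos(2.0 * np.pi * x) + 0.5  # integral over [0, 1) is 0.5

# Unique ergodicity forces convergence of the averages to int f dmu = 0.5
# for EVERY starting point x (here we only sample a few).
for x0 in (0.0, 0.3, 0.77):
    print(x0, cesaro_mean(f, x0, 100000))
```

This rotation is the model case of unique ergodicity used throughout this section.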
Notice that this is equivalent to $\Fix{\S^{(1)}}'=\C\cdot\mu$ for some probability measure $\mu\in C(K)'$. Let $\Omega$ be a topological group. A continuous map $\gamma:G\times K\to\Omega$ is called a *continuous cocycle* if it satisfies the *cocycle equation* $$\gamma(g_1g_2,x)=\gamma(g_2,x)\gamma(g_1,g_2x)\quad\forall g_1,g_2\in G, x\in K.$$ The set of continuous cocycles is denoted by $\Gamma(G\times K, \Omega)$. If $\Omega$ is a compact metric group with metric $d$, then we endow $\Gamma(G\times K,\Omega)$ with the metric $$\tilde{d}(\gamma_1,\gamma_2):=\sup_{(g,x)\in G\times K}d\left(\gamma_1(g,x),\gamma_2(g,x)\right)\quad (\gamma_1,\gamma_2\in \Gamma(G\times K,\Omega)).$$ Denote by $U(N)$ the group of unitary operators on $\C^N$ and take a continuous cocycle $\gamma\in\Gamma(G\times K,U(N))$. Motivated by papers of Walters [@walters96] and Santos and Walkden [@santos07] we study the mean ergodicity of the semigroup $\gamma\S:=\{\gamma(g,\cdot)S_g: g\in G\}$ on $C(K,\C^N)$, where $\gamma(g,\cdot)S_g f(x)=\gamma(g,x)S_gf(x)$ for $f\in C(K,\C^N)$ and $x\in K$. In order to proceed as in the previous section we need some facts about vector valued measures (see Diestel and Uhl [@diestel77]). Denote by $M(K,\C^N)$ the set of $\sigma$-additive functions $\nu: \Sigma\to \C^N$ defined on the Borel $\sigma$-algebra $\Sigma$ of $K$. We define the *total variation* $|\nu|_2:\Sigma\to [0,\infty]$ of a measure $\nu\in M(K,\C^N)$ by $$|\nu|_2(E):=\sup\left\{\sum_{j=1}^\infty \|\nu(E_j)\|_2 : E=\bigsqcup_{j\in\N}E_j\right\},\quad (E\in\Sigma),$$ where $E=\bigsqcup_{j\in\N} E_j$ means that the family $(E_j)_{j\in\N}\subset \Sigma$ is a partition of $E$. The main property of the total variation of a measure $\nu\in M(K,\C^N)$ is the fact that $|\nu|_2:\Sigma\to\R_+$ is a finite positive measure on $K$, which can be deduced from Theorem 6.2 and Theorem 6.4 in Rudin [@rudin87]. Identifying $M(K,\C^N)$ with $M(K)^N$, we take $\nu=(\nu_1,\dots,\nu_N)\in M(K,\C^N)$. 
If $f=(f_1,\dots,f_N)\in C(K,\C^N)$, then the map $$\tilde{d}\nu: f \mapsto \sum_{i=1}^N \int_K f_i d\nu_i$$ defines a linear functional on $C(K,\C^N)$. For $f\in C(K,\C^N)$ we have $$\begin{aligned} \left | {\left\langle \tilde{d}\nu,f \right\rangle}\right|&=\left|\sum_{i=1}^N \int_K f_i d\nu_i\right|\le \sum_{i=1}^N \int_K |f_i| d|\nu_i|\\ &\le \sum_{i=1}^N \sup_{x\in K}|f_i(x)||\nu_i|(K)\le \max_{i=1,\dots,N}\sup_{x\in K}|f_i(x)|\sum_{i=1}^N |\nu_i|(K)\\ &\le\sup_{x\in K}\left(\sum_{i=1}^N|f_i(x)|^2\right)^{\frac{1}{2}}\sum_{i=1}^N |\nu_i|(K)=\|f\| \sum_{i=1}^N |\nu_i|(K)\end{aligned}$$ and hence $\tilde{d}\nu$ is bounded with $\|\tilde{d}\nu\|\le \sum_{i=1}^N |\nu_i|(K)$. The next result follows from the Riesz Representation Theorem and is in fact equivalent to it (cf. [@dunford-schwartz58 Chapter VI, Section 7, Theorem 3]). \[thm:riesz\] The map $$\begin{aligned} \tilde{d}: M(K,\C^N)&\to C(K,\C^N)'\\ \nu&\mapsto \tilde{d}\nu \end{aligned}$$ is linear and bijective. The only non-trivial statement is the surjectivity. So take $\xi\in C(K,\C^N)'$, $\{e_1,\dots,e_N\}$ the canonical basis of $\C^N$ and define $\xi_i\in C(K)'$ for each $i\in \{1,\dots,N\}$ by $\xi_i(f):=\xi(f\otimes e_i)$, where $f\otimes e_i\in C(K,\C^N)$ is the function $x\mapsto f(x)e_i$. By the Riesz Representation Theorem for each $i\in\{1,\dots,N\}$ there exists $\nu_i\in M(K)$ with $\xi_i=d\nu_i$. If we define $\nu:=(\nu_1,\dots,\nu_N)\in M(K,\C^N)$, then for each $f=(f_1,\dots,f_N)\in C(K,\C^N)$ we obtain $${\left\langle \tilde{d}\nu,f \right\rangle}= \sum_{i=1}^N \int_K f_i d\nu_i=\sum_{i=1}^N {\left\langle \xi_i,f_i \right\rangle} ={\left\langle \xi,\sum_{i=1}^N f_i\otimes e_i \right\rangle}={\left\langle \xi,f \right\rangle}$$ and hence $\tilde{d}\nu=\xi$. 
For a bounded linear functional $\nu\in C(K,\C^N)'$ we define the functional $|\nu|_2$ by $${\left\langle |\nu|_2,f \right\rangle}:=\sup\left\{|{\left\langle \nu,h \right\rangle}|: h\in C(K,\C^N), \|h(\cdot)\|_2\le f\right\}$$ for $0\le f\in C(K)$. It is clear that $\|\nu\|={\left\langle |\nu|_2,{\mathbbm{1}}\right\rangle}$. There exists a unique bounded and linear extension of $|\nu|_2$ to $C(K)$ with $\||\nu|_2\|=\|\nu\|$. The positive homogeneity of $|\nu|_2$ is clear from the definition. To see additivity, take $0\le f_1,f_2\in C(K)$ and $\|h_1(\cdot)\|_2\le f_1$ and $\|h_2(\cdot)\|_2\le f_2$. Then we have for certain $c_1,c_2\in\Torus$ $$\begin{aligned} |{\left\langle \nu,h_1 \right\rangle}|+|{\left\langle \nu,h_2 \right\rangle}|&=|c_1{\left\langle \nu,h_1 \right\rangle}+c_2{\left\langle \nu,h_2 \right\rangle}| \le {\left\langle |\nu|_2,\|c_1 h_1(\cdot)+c_2 h_2(\cdot)\|_2 \right\rangle} \\ &\le {\left\langle |\nu|_2,\|h_1(\cdot)\|_2+\|h_2(\cdot)\|_2 \right\rangle} \le {\left\langle |\nu|_2,f_1+f_2 \right\rangle}\end{aligned}$$ and thus ${\left\langle |\nu|_2,f_1 \right\rangle}+{\left\langle |\nu|_2,f_2 \right\rangle}\le {\left\langle |\nu|_2,f_1+f_2 \right\rangle}$. For the converse inequality take $\|h(\cdot)\|_2\le f_1+f_2$ and $\e>0$. The open sets $$U_1:=\{x\in K: \|h(x)\|_2>0\}{\enspace}\text{ and }{\enspace}U_2:=\{x\in K: \|h(x)\|_2<\e\}$$ cover $K$. Hence by Theorem D.6 in [@efhn] there exists a function $\psi\in C(K)$ with $0\le\psi\le{\mathbbm{1}}$ and $\supp(\psi)\subset U_1$ and $\supp({\mathbbm{1}}-\psi)\subset U_2$. Define for $j=1,2$ $$h_j(x):=\left\{ \begin{array}{ll} \psi(x)\frac{f_j(x)}{f_1(x)+f_2(x)}h(x),& f_1(x)+f_2(x)\neq 0\\ 0,& \text{else}. \end{array} \right.$$ Then we have $h_j\in C(K,\C^N)$, $h_1+h_2=\psi h$ and $\|h_j(\cdot)\|_2\le f_j$ for each $j=1,2$.
Moreover, we obtain $$\|h(\cdot)-(h_1+h_2)(\cdot)\|_2=\|({\mathbbm{1}}-\psi)h(\cdot)\|_2=|{\mathbbm{1}}-\psi|\|h(\cdot)\|_2< \e {\mathbbm{1}}$$ and thus $$\begin{aligned} |{\left\langle \nu,h \right\rangle}|&\le |{\left\langle \nu,h_1+h_2 \right\rangle}|+\e\|\nu\|\le {\left\langle |\nu|_2,f_1 \right\rangle}+{\left\langle |\nu|_2,f_2 \right\rangle}+\e\|\nu\|. \end{aligned}$$ Hence ${\left\langle |\nu|_2,f_1+f_2 \right\rangle}\le {\left\langle |\nu|_2,f_1 \right\rangle}+{\left\langle |\nu|_2,f_2 \right\rangle}+\e\|\nu\|$ and thus $${\left\langle |\nu|_2,f_1+f_2 \right\rangle}\le {\left\langle |\nu|_2,f_1 \right\rangle}+{\left\langle |\nu|_2,f_2 \right\rangle}$$ by letting $\e\downarrow 0$. Finally, we extend $|\nu|_2$ first to $C(K,\R)$ by $${\left\langle |\nu|_2,f \right\rangle}:= {\left\langle |\nu|_2,f^+ \right\rangle}-{\left\langle |\nu|_2,f^- \right\rangle}\quad (f\in C(K,\R)),$$ where $f^+:=\sup\{f,0\}$ and $f^-:=\sup\{-f,0\}$, and then to $C(K)$ by $${\left\langle |\nu|_2,f \right\rangle}:={\left\langle |\nu|_2,\Re f \right\rangle}+i {\left\langle |\nu|_2,\Im f \right\rangle}\quad (f\in C(K)).$$ It is straightforward to check that in this way $|\nu|_2$ becomes linear on $C(K)$. The boundedness of $|\nu|_2$ follows from $${\left\langle |\nu|_2,f \right\rangle}\le {\left\langle |\nu|_2,\|f\|_\infty {\mathbbm{1}}\right\rangle}=\|f\|_\infty \|\nu\| \quad (0\le f\in C(K)).$$ This implies $\||\nu|_2\|\le\|\nu\|$, and equality follows from ${\left\langle |\nu|_2,{\mathbbm{1}}\right\rangle}=\|\nu\|$. The notation $|\nu|_2$ for a functional $\nu\in C(K,\C^N)'$ and a measure $\nu\in M(K,\C^N)$ is justified by the following theorem. \[thm:commutative-diagram\] The following diagram commutes. $$\begin{CD} M(K,\C^N) @>\tilde{d}>>C(K,\C^N)' \\ @VV|\cdot|_2V @VV|\cdot|_2V\\ M(K) @>d>> C(K)' \end{CD}$$ Let $\nu=(\nu_1,\dots,\nu_N)\in M(K,\C^N)$ and $0\le f\in C(K)$. 
We consider $C(K,\C^N)$ as a dense subspace of $L^1(K,\nu)$ and obtain $$\begin{aligned} {\left\langle |\tilde{d}\nu|_2,f \right\rangle}&=\sup\left\{\left|{\left\langle \tilde{d}\nu,h \right\rangle}\right|: h\in C(K,\C^N), \|h(\cdot)\|_2\le f\right\} \\ & =\sup \left\{\left|{\left\langle \tilde{d}\nu,h \right\rangle}\right|: 0\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \bigsqcup_j E_j=K, \|h(\cdot)\|_2\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\right\}\\ &=\sup \left\{\left|\sum_{i=1}^N\sum_{j,l\in\N}\alpha_{j,l}^{(i)}\nu_i(E_{j,l})\right| : {0\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \bigsqcup_j E_j=K, \atop \sum_{j,l\in\N}\|\alpha_{j,l}\|_2{\mathbbm{1}}_{E_{j,l}}\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}, \bigsqcup_l E_{j,l}=E_j}\right\}\\ &=\sup \left\{\left|\sum_{j,l\in\N}{\left\langle \alpha_{j,l},\nu(E_{j,l}) \right\rangle}_2\right| : {0\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \bigsqcup_j E_j=K, \atop \sum_{j,l\in\N}\|\alpha_{j,l}\|_2{\mathbbm{1}}_{E_{j,l}}\le\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}, \bigsqcup_l E_{j,l}=E_j}\right\}\end{aligned}$$ Under the condition $\|\alpha_{j,l}\|_2\le\beta_j$ for all $j,l\in\N$, the expression $\left|\sum_{j,l\in\N}{\left\langle \alpha_{j,l},\nu(E_{j,l}) \right\rangle}_2\right|$ becomes maximal if $\alpha_{j,l}=\beta_j\frac{\nu(E_{j,l})}{\|\nu(E_{j,l})\|_2}$ for all $j,l\in\N$. Hence $$\begin{aligned} {\left\langle |\tilde{d}\nu|_2,f \right\rangle} &=\sup \left\{\sum_{j,l\in\N}\beta_j\|\nu(E_{j,l})\|_2 : {\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \bigsqcup_j E_j=K,\, \bigsqcup_l E_{j,l}=E_j}\right\}\\ &=\sup \left\{\sum_{j\in\N}\beta_j|\nu|_2(E_{j}) : {\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \,\bigsqcup_j E_j=K}\right\}\\ &=\sup \left\{\int_K\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}d|\nu|_2 : {\sum_{j\in\N}\beta_j{\mathbbm{1}}_{E_j}\le f, \,\bigsqcup_j E_j=K}\right\}\\ &=\int_K f\, d|\nu|_2={\left\langle d|\nu|_2,f \right\rangle}\end{aligned}$$ and thus $|\tilde{d}\nu|_2=d|\nu|_2$. 
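On a finite space the interplay between the two definitions of $|\nu|_2$ can be checked by hand, since a measure $\nu\in M(K,\C^N)$ is then just a list of atoms. The sketch below is our own illustration (all identifiers are ours): for a random discrete $\C^2$-valued measure it verifies that the supremum in the functional definition of $|\nu|_2$ is attained at $h(x)=f(x)\,\overline{\nu(\{x\})}/\|\nu(\{x\})\|_2$ and equals $\int_K f\,d|\nu|_2$.

```python
import numpy as np

# Hedged finite sketch: on a finite set K, a measure nu in M(K, C^2) is
# determined by its atoms nu({x}) in C^2, and |nu|_2(E) is the sum of
# ||nu({x})||_2 over x in E.  We check that the functional definition
# <|nu|_2, f> = sup{ |<nu, h>| : ||h(x)||_2 <= f(x) } gives the same value.
rng = np.random.default_rng(1)
atoms = rng.normal(size=(6, 2)) + 1j * rng.normal(size=(6, 2))  # nu({x}) in C^2
f = rng.uniform(0.5, 2.0, size=6)                               # 0 <= f in C(K)

norms = np.linalg.norm(atoms, axis=1)          # ||nu({x})||_2
integral = float(np.sum(f * norms))            # int_K f d|nu|_2

# <nu, h> = sum_x sum_i h_i(x) nu_i({x}) under the duality of the Riesz map.
h_opt = (f / norms)[:, None] * np.conj(atoms)  # extremal h with ||h_opt(x)||_2 = f(x)
pairing = abs(np.sum(h_opt * atoms))

assert np.allclose(np.linalg.norm(h_opt, axis=1), f)  # h_opt is admissible
assert np.isclose(pairing, integral)                  # the supremum is attained

# Any other admissible h gives a smaller value (Cauchy-Schwarz pointwise).
h = rng.normal(size=(6, 2)) + 1j * rng.normal(size=(6, 2))
h *= (f / np.linalg.norm(h, axis=1))[:, None]         # rescale so ||h(x)||_2 = f(x)
assert abs(np.sum(h * atoms)) <= integral + 1e-12
```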
By virtue of Theorem \[thm:riesz\] and Theorem \[thm:commutative-diagram\] we shall identify $M(K,\C^N)$ with $C(K,\C^N)'$ and we will use the same notation $|\nu|_2$ for a measure $\nu\in M(K,\C^N)$ and a functional $\nu\in C(K,\C^N)'$ without explicitly distinguishing these two objects. We now return to the situation of the beginning of this section and characterize the mean ergodicity of $\gamma\S$ for a continuous cocycle $\gamma\in \Gamma(G\times K, U(N))$. For a function $h\in L^2(K,\C^N,\mu)$ we denote by $\ol{h} d\mu\in C(K,\C^N)'$ the functional defined by $${\left\langle \ol{h} d\mu,f \right\rangle}:={\left\langle f,h \right\rangle}_{L^2(K,\C^N,\mu)}=\sum_{i=1}^N\int_K f_i(x)\ol{h_i(x)}d\mu(x)\quad (f\in C(K,\C^N)).$$ \[lemma:Fixraum\_dualer\_Fixraum\_cocycles\] Let the action of $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and let $\S$ and $\S_2$ be the corresponding Koopman representations on $C(K,\C^N)$ and $L^2(K,\C^N,\mu)$, respectively. If $\gamma:G\times K\to U(N)$ is a continuous cocycle, then the map $$\begin{aligned} L^2(K,\C^N,\mu)\supseteq\Fix( \gamma\S_2)^*&\to\Fix (\gamma\S)'\subseteq C(K,\C^N)'\\ h\quad&\mapsto\quad \ol{h} d\mu\end{aligned}$$ is antilinear and bijective. To see that the map is well defined, take $h\in\Fix(\gamma\S_2)^*$. Then for all $f\in C(K,\C^N)$ and $g\in G$ we have $$\begin{aligned} {\left\langle (\gamma(g,\cdot)S_g)'(\ol{h} d\mu),f \right\rangle}&={\left\langle \ol{h} d\mu,\gamma(g,\cdot)S_g f \right\rangle} ={\left\langle \gamma(g,\cdot)S_{g,2} f,h \right\rangle}_{L^2(K,\C^N,\mu)}\\ &={\left\langle f,(\gamma(g,\cdot)S_{g,2})^*h \right\rangle}_{L^2(K,\C^N,\mu)}\\ &={\left\langle f,h \right\rangle}_{L^2(K,\C^N,\mu)} ={\left\langle \ol{h} d\mu,f \right\rangle}, \end{aligned}$$ yielding $\ol{h} d\mu\in\Fix (\gamma\S)'$. As antilinearity and injectivity are clear, it remains to show surjectivity. Let $\nu=(\nu_1,\dots,\nu_N)\in\Fix (\gamma\S)'$. We claim that $|\nu|_2\le S_g'|\nu|_2$ for all $g\in G$. 
Indeed, if $0\le f\in C(K)$ and $g\in G$, then $$\begin{aligned} {\left\langle |\nu|_2,f \right\rangle}&=\sup_{\|h(\cdot)\|_2\le f}|{\left\langle \nu,h \right\rangle}|\\ &=\sup_{\|h(\cdot)\|_2\le f}|{\left\langle \nu,\gamma(g,\cdot)S_g h \right\rangle}|\\ &\le \sup_{\|h(\cdot)\|_2\le f}{\left\langle |\nu|_2,\|\gamma(g,\cdot) S_g h(\cdot)\|_2 \right\rangle}\\ &= \sup_{\|h(\cdot)\|_2\le f}{\left\langle |\nu|_2,S_g\|h(\cdot)\|_2 \right\rangle}\\ &={\left\langle |\nu|_2,S_g f \right\rangle}={\left\langle S_g'|\nu|_2,f \right\rangle},\end{aligned}$$ since $\gamma(g,x)$ is unitary for all $x\in K$. If $0\le f\in C(K)$ and $g\in G$, then $$0\le{\left\langle S_g'|\nu|_2-|\nu|_2,f \right\rangle}\le {\left\langle S_g'|\nu|_2-|\nu|_2,\|f\|_\infty{\mathbbm{1}}\right\rangle}=\|f\|_\infty{\left\langle |\nu|_2,S_g{\mathbbm{1}}-{\mathbbm{1}}\right\rangle}=0$$ and thus $|\nu|_2\in \Fix\S'=\C\cdot \mu$ by unique ergodicity. As a consequence of Theorem \[thm:commutative-diagram\] and since $|\nu_i|\le |\nu|_2$ the measures $\nu_i$ are thus absolutely continuous with respect to $\mu$ for each $i=1,\dots,N$. The Radon-Nikodým Theorem then implies the existence of functions $h_i\in L^\infty(K,\mu)$ such that $\nu_i=\ol{h_i} d\mu$ for all $i=1,\dots,N$. Defining $h:=(h_1,\dots,h_N)\in L^\infty(K,\C^N,\mu)$ we obtain $\nu=\ol{h} d\mu$ and the same calculation as above shows that $h\in \Fix(\gamma \S_2)^*$. \[lemma:dimension-fixraum2\] Let the action of a right amenable semigroup $G$ on $K$ be ergodic with respect to some invariant measure $\mu$ and let $\S_2$ be the corresponding Koopman representation on $L^2(K,\C^N,\mu)$. If $\gamma: G\times K\to U(N)$ is a continuous cocycle, then $\dim \Fix\gamma\S_2\le N$. Suppose $\dim \Fix\gamma\S_2> N$ and take $N+1$ linearly independent functions $f_1,\dots, f_N,h\in \Fix\gamma\S_2$. 
We may assume that $\|f_i(\cdot)\|_2={\mathbbm{1}}$ for each $i\in\{1,\dots,N\}$ since if $f_i\in \Fix\gamma\S_2$ then $\|f_i(\cdot)\|_2\in\Fix\S_2$ and thus $\|f_i(\cdot)\|_2$ is constant by ergodicity. Moreover, by a pointwise application of the Gram-Schmidt process, we may assume that ${\left\langle f_i(x),f_j(x) \right\rangle}_2=\delta_{ij}$ for $\mu$-a.e. $x\in K$ and each $i,j\in\{1,\dots,N\}$. Hence $h$ can be written as $$h(x)=\sum_{i=1}^N {\left\langle h(x),f_i(x) \right\rangle}_2f_i(x)\qquad \mu\text{-a.e. }x\in K.$$ For each $i\in\{1,\dots, N\}$ we define the function $h\bullet f_i$ by $h\bullet f_i(x):={\left\langle h(x),f_i(x) \right\rangle}_2$ for $\mu$-a.e. $x\in K$ and claim that $h\bullet f_i$ is constant. Indeed, for each $i\in\{1,\dots,N\}$, $g\in G$ and $\mu$-a.e. $x\in K$ we have $$\begin{aligned} S_{g,2}(h\bullet f_i)(x)&={\left\langle h(gx),f_i(gx) \right\rangle}_2={\left\langle \gamma(g,x)^{-1}h(x),\gamma(g,x)^{-1}f_i(x) \right\rangle}_2\\ &={\left\langle h(x),f_i(x) \right\rangle}_2=h\bullet f_i(x)\end{aligned}$$ since $\gamma(g,x)$ is unitary. Hence $h\bullet f_i\in\Fix\S_2^{(1)}$ and thus $h\bullet f_i\in\C\cdot{\mathbbm{1}}$ by ergodicity. Hence $h$ is a linear combination of $f_1,\dots, f_N$ contradicting the linear independence. The following theorem is the analogue of Theorem \[thm:mean-ergodic\] for cocycles and generalizes Theorem 4 of Walters [@walters96] to amenable semigroups. \[thm:mean-ergodic-koopman\] Let the action of a (right) amenable semigroup $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and let $\S$ and $\S_2$ be the corresponding Koopman representations on $C(K,\C^N)$ and $L^2(K,\C^N,\mu)$, respectively. If $\gamma: G\times K\to U(N)$ is a continuous cocycle, then the following assertions are equivalent. (1) $\Fix\gamma\S_2\subseteq \Fix\gamma\S$. (2) $\gamma\S$ is mean ergodic on $C(K,\C^N)$ with mean ergodic projection $P_\gamma$. (3) $\Fix\gamma\S$ separates $\Fix(\gamma\S)'$. 
(4) $C(K,\C^N)= \Fix\gamma\S\oplus\ol{\lin}\rg(I-\gamma\S)$. (5) $A_\a^{\gamma\S} f$ converges weakly (to a fixed point of $\gamma\S$) for some/every strong (right) $\gamma\S$-ergodic net $(A_\a^{\gamma\S})$ and all $f\in C(K,\C^N)$. (6) $A_\a^{\gamma\S} f$ converges strongly (to a fixed point of $\gamma\S$) for some/every strong (right) $\gamma\S$-ergodic net $(A_\a^{\gamma\S})$ and all $f\in C(K,\C^N)$. The limit $P_\gamma$ of the nets $(A_\a^{\gamma\S})$ in the strong (weak, resp.) operator topology is the mean ergodic projection of $\gamma\S$ mapping $C(K,\C^N)$ onto $\Fix \gamma\S$ along $\ol{\lin}\rg(I-\gamma\S)$. The equivalence of the statements (2) to (6) follows directly from Theorem 1.7 and Corollary 1.8 in [@schreiber12]. Notice that $\Fix(\gamma\S_2)^*=\Fix\gamma\S_2$ since $\gamma\S_2$ consists of contractions on $L^2(K,\C^N,\mu)$. (1)${\Rightarrow}$(3): If $0\neq\nu\in \Fix(\gamma\S)'$, then $\nu=\ol{h}d\mu$ by Lemma \[lemma:Fixraum\_dualer\_Fixraum\_cocycles\] for some $0\neq h \in \Fix(\gamma\S_2)^*=\Fix\gamma\S_2$. Since $\Fix\gamma\S_2\subseteq \Fix\gamma\S$ by (1), this yields $h\in \Fix\gamma\S$ and $${\left\langle \nu,h \right\rangle}=\|h\|^2_{L^2(K,\C^N,\mu)}>0.$$ (3)${\Rightarrow}$(1): Suppose $f\in \Fix\gamma\S_2\setminus \Fix\gamma\S$. By Lemma \[lemma:dimension-fixraum2\] the space $\Fix\gamma\S_2$ is finite dimensional and by Lemma \[lemma:Fixraum\_dualer\_Fixraum\_cocycles\] we thus have $$\dim\Fix\gamma\S< \dim\Fix\gamma\S_2=\dim\Fix(\gamma\S)'.$$ Hence $\Fix\gamma\S$ does not separate $\Fix(\gamma\S)'$. The following theorem characterizes mean ergodicity of $\gamma\S$ on the closed invariant subspace $Y_f:=\ol{\lin}\gamma\S f$ for some $f\in C(K,\C^N)$ and $\gamma\in\Gamma(G\times K, U(N))$. It generalizes Theorem 8.1 of Lenz [@lenz09a] to amenable semigroups. 
\[thm:mean-ergodic-koopman-in-f\] Let the action of a (right) amenable semigroup $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and let $\S$ and $\S_2$ be the corresponding Koopman representations on $C(K,\C^N)$ and $L^2(K,\C^N,\mu)$, respectively. If $\gamma: G\times K\to U(N)$ is a continuous cocycle and $f\in C(K,\C^N)$ is given, then the following assertions are equivalent. (1) $P_{\Fix\gamma\S_2}f\in \Fix\gamma\S$. (2) $\gamma\S$ is mean ergodic on $Y_f$ with mean ergodic projection $P_\gamma$. (3) $\Fix\gamma\S|_{Y_f}$ separates $\Fix(\gamma\S)|_{Y_f}'$. (4) $f\in \Fix\gamma\S\oplus\ol{\lin}\rg(I-\gamma\S)$. (5) $A_\a^{\gamma\S} f$ converges weakly (to a fixed point of $\gamma\S$) for some/every strong (right) $\gamma\S$-ergodic net $(A_\a^{\gamma\S})$. (6) $A_\a^{\gamma\S} f$ converges strongly (to a fixed point of $\gamma\S$) for some/every strong (right) $\gamma\S$-ergodic net $(A_\a^{\gamma\S})$. The limit $P_\gamma$ of the nets $A_\a^{\gamma\S}$ in the strong (weak, resp.) operator topology on $Y_f$ is the mean ergodic projection of $\gamma\S|_{Y_f}$ mapping $Y_f$ onto $\Fix\gamma\S|_{Y_f}$ along $\ol{\lin}\rg(I-\gamma\S|_{Y_f})$. The equivalence of the statements (2) to (6) follows directly from [@schreiber12 Proposition 1.11]. (6)${\Rightarrow}$(1): By von Neumann’s Ergodic Theorem $P_{\Fix\gamma\S_2}f$ is the limit of $A_\a^{\gamma\S} f$ in $L^2(K,\C^N,\mu)$. If $A_\a^{\gamma\S} f$ converges strongly in $C(K,\C^N)$ then the limits coincide almost everywhere and hence $P_{\Fix\gamma\S_2}f$ has a continuous representative in $\Fix\gamma\S$. (1)${\Rightarrow}$(4): Let $\nu\in C(K,\C^N)'$ vanish on $\Fix\gamma\S\oplus\ol{\lin}\rg(I-\gamma\S)$. Then in particular ${\left\langle \nu,h \right\rangle}={\left\langle \nu,\gamma(g,\cdot)S_g h \right\rangle}={\left\langle (\gamma(g,\cdot)S_g)' \nu, h \right\rangle}$ for all $h\in C(K,\C^N)$ and $g\in G$ and thus $\nu\in \Fix(\gamma\S)'$. 
Hence by Lemma \[lemma:Fixraum\_dualer\_Fixraum\_cocycles\] there exists $h\in \Fix(\gamma\S_2)^*$ such that $\nu=\ol{h} d\mu$. Let $(A_\a^{\gamma\S_2})_{\a\in\mathcal{A}}$ be a strong $\gamma\S_2$-ergodic net on $L^2(K,\C^N,\mu)$. Then $(A_\a^{\gamma\S_2})^* h=h$ for all $\a\in\mathcal{A}$ and von Neumann’s Ergodic Theorem implies $${\left\langle \nu,f \right\rangle}={\left\langle f,h \right\rangle}_{L^2}={\left\langle A_\a^{\gamma\S_2} f,h \right\rangle}_{L^2}\to \langle \underbrace{P_{\Fix\gamma\S_2}f}_{\in C(K,\C^N)},h\rangle_{L^2}= \langle\nu,\underbrace{P_{\Fix\gamma\S_2}f}_{\in \Fix\gamma\S}\rangle=0.$$ Hence the Hahn-Banach Theorem yields $f\in \Fix\gamma\S\oplus\ol{\lin}\rg(I-\gamma\S)$, since $\Fix\gamma\S\oplus\ol{\lin}\rg(I-\gamma\S)$ is closed by Theorem 1.9 in Krengel [@krengel85 Chap. 2]. \[rem:fixraum\] Notice that if $P_{\Fix\gamma\S_2}f=0$ in the situation of Theorem \[thm:mean-ergodic-koopman-in-f\] then $\gamma\S$ is mean ergodic on $Y_f$ with mean ergodic projection $P_\gamma=0$. This observation then directly implies the notable fact that if $\Fix\gamma\S_2=\{0\}$, then $\Fix\gamma\S=\{0\}$. Analogously to Corollary \[cor:uniform-convergence\_C\_K\] we consider the semigroups $\gamma\S$ for $\gamma\in\Lambda\subseteq \Gamma(G\times K, Z)$, where $Z$ is a compact subgroup of $U(N)$, and ask when $A_\a^{\gamma\S}f$ converges uniformly in $\gamma\in\Lambda$ for a uniform family of right ergodic nets $\{(A_\a^{\gamma\S})_{\a\in \mathcal{A}}: \gamma\in \Lambda\}$ on $C(K,\C^N)$ and a given $f\in C(K,\C^N)$. If $\gamma\S$ is a mean ergodic semigroup on $\ol{\lin}\gamma\S f$, we denote by $P_\gamma$ its mean ergodic projection. The following corollary is a cocycle version of Theorem 2 in [@lenz09] for amenable semigroups.
\[cor:uniform-convergence\_Walters\] Let the action of a right amenable semigroup $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and let $\S$ and $\S_2$ be the corresponding Koopman representations on $C(K,\C^N)$ and $L^2(K,\C^N,\mu)$, respectively. Assume that $\Lambda\subseteq \Gamma(G\times K, Z)$ is compact and consider the semigroups $\gamma\S$ on $C(K,\C^N)$ for each $\gamma\in\Lambda$. If $\{(A_\a^{\gamma\S})_{\a\in \mathcal{A}}: \gamma\in \Lambda\}$ is a uniform family of right ergodic nets on $C(K,\C^N)$ and if $f\in C(K,\C^N)$ satisfies (1) $P_{\Fix\gamma\S_2}f\in \Fix\gamma\S$ for each $\gamma\in \Lambda$ and (2) $\Lambda \to \R_+, \gamma\mapsto\|A_\a^{\gamma\S}f- P_\gamma f\|$ is continuous for each $\a\in\mathcal{A}$, then $$\lim_\a\sup_{\gamma\in \Lambda}\|A_\a^{\gamma\S}f-P_\gamma f\|= 0.$$ By Theorem \[thm:mean-ergodic-koopman-in-f\] and our hypotheses the semigroup $\gamma\S$ is mean ergodic on $\ol{\lin}\gamma\S f$ for each $\gamma\in\Lambda$. The result then follows directly from Theorem 2.4 in [@schreiber12]. In order to show the analogue of Corollary \[cor:assani\] for cocycles we need a lemma. \[lemma:uniform-family\] Let $H$ be a locally compact group with left Haar measure $|\cdot|$ and suppose that $G\subset H$ is a subsemigroup acting on $K$ such that there exists a Følner net $(F_\a)_{\a\in\A}$ in $G$. Consider the semigroups $\gamma\S$ on $C(K,\C^N)$ for each $\gamma$ in a compact set $\Lambda\subset \Gamma(G\times K, Z)$. Then $\{(A_\a^{\gamma\S})_{\a\in\A}: \gamma\in\Lambda\}$ defined by $$A_\a^{\gamma\S}f:=\frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g f\,dg\quad (f\in C(K,\C^N))$$ is a uniform family of right ergodic nets. (1): Let $\a\in \A$, $\e>0$ and $f_1,\dots, f_m\in C(K,\C^N)$. Since $\Lambda\subset \Gamma(G\times K, Z)$ is compact, the family $\{g\mapsto \gamma(g,\cdot)S_gf_k: \gamma\in\Lambda\}$ is uniformly equicontinuous on the compact set $F_\a$ for each $k\in\{1,\dots, m\}$.
Hence for each $k\in\{1,\dots, m\}$ we can choose an open neighbourhood $U_k$ of the unity of $H$ satisfying $$g,h\in G, {\enspace}h^{-1}g\in U_k {\enspace}{\Rightarrow}{\enspace}\sup_{\gamma\in \Lambda}\|\gamma(g,\cdot)S_g f_k-\gamma(h,\cdot)S_h f_k\|<\e.$$ Then $U:=\bigcap_{k=1}^m U_k$ is still an open neighbourhood of unity. Since $F_\a$ is compact there exist $g_1,\dots, g_n\in F_\a$ such that $F_\a\subset\bigcup_{j=1}^n g_j U$. Defining $V_1:=g_1 U\cap F_\a$ and $V_j:=(g_j U\cap F_\a) \setminus (V_1\cup\dots\cup V_{j-1})$ for $j=2,\dots, n$ we obtain a disjoint union $F_\a=\bigcup_{j=1}^n V_j.$ Hence for all $\gamma\in \Lambda$ and $k\in \{1,\dots, m\}$ we have $$\begin{aligned} \left\| \frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g f_k\, dg-\right . &\left .\sum_{j=1}^n \frac{|V_j|}{|F_\a|} \gamma(g_j,\cdot) S_{g_j}f_k \right\|\\ &\le \frac{1}{|F_\a|}\sum_{j=1}^n \int_{V_j}\underbrace{\|\gamma(g,\cdot)S_g f_k-\gamma(g_j,\cdot)S_{g_j} f_k\|}_{<\e}dg\\ &< \frac{1}{|F_\a|}\sum_{j=1}^n |V_j|\e=\e.\end{aligned}$$ (2): Let $h\in G$ and $f\in C(K,\C^N)$. Then for each $g\in G$ $$\|\gamma(g,\cdot)S_g f\|=\sup_{x\in K}\|\gamma(g,x)f(gx)\|_2 =\sup_{x\in K}\|f(gx)\|_2\le \|f\|,$$ since $\gamma(g,x)$ is unitary for all $x\in K$. Hence $$\begin{aligned} \sup_{\gamma\in\Lambda}\left\|\frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_gf-\gamma(hg,\cdot)S_{hg}f\,dg\right\|& \le \sup_{\gamma\in\Lambda} \frac{1}{|F_\a|}\int_{F_\a\triangle h F_\a} \|\gamma(g,\cdot) S_g f\|\,dg\\ &\le \frac{|F_\a\triangle h F_\a|}{|F_\a|} \|f\|\to 0.\end{aligned}$$ For actions of the semigroup $(\N,+)$ and the Følner sequence $F_n=\{0,1,\dots, n-1\}$ the following corollary has been proved by Santos and Walkden [@santos07 Corollary 4.4] generalizing a previous result of Walters [@walters96 Theorem 5], who considered the case $N=1$. \[cor:santos-walkden\] Let $H$ be a locally compact group with left Haar measure $|\cdot|$ and suppose that $G\subset H$ is a subsemigroup such that there exists a Følner net $(F_\a)_{\a\in\A}$ in $G$.
If the action of $G$ on $K$ is uniquely ergodic with invariant measure $\mu$ and if $f\in C(K,\C^N)$ satisfies $P_{\Fix\gamma\S_2}f=0$ for all $\gamma$ in a compact set $\Lambda\subset\Gamma(G\times K, Z)$, then $$\lim_{\a}\sup_{\gamma\in\Lambda}\left\|\frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g f \;dg\right\|=0.$$ If $(F_\a)$ is a Følner net in $G$, then $G$ is right amenable. By Lemma \[lemma:uniform-family\] the set $\{(A_\a^{\gamma\S})_{\a\in\A}: \gamma\in\Lambda\}$ defined by $$A_\a^{\gamma\S}\psi:=\frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g \psi\,dg\quad (\psi\in C(K,\C^N))$$ is a uniform family of right ergodic nets on $C(K,\C^N)$. If $P_{\Fix\gamma\S_2}f=0$ for all $\gamma\in\Lambda$, then the conditions (1) and (2) of Corollary \[cor:uniform-convergence\_Walters\] are satisfied since the map $\gamma\mapsto \frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g f dg$ is continuous. Hence $$\lim_{\a}\sup_{\gamma\in\Lambda}\left\|\frac{1}{|F_\a|}\int_{F_\a}\gamma(g,\cdot)S_g f \;dg\right\|=0.$$ Mean ergodicity on group extensions =================================== In this section we characterize mean ergodicity of semigroups of Koopman operators associated to skew product actions on compact group extensions. Let $G$ be an amenable semigroup acting on a compact space $K$ and assume that $\mu$ is a $G$-invariant probability measure on $K$. Suppose that $\Omega$ is a compact group with Haar measure $\eta$ and $\gamma:G\times K\to\Omega$ is a continuous cocycle. We define the *skew product action* of $G$ on $K\times \Omega$ by $$g(x,\omega):=(gx, \omega \gamma(g,x)),\quad (x,\omega)\in K\times\Omega, g\in G.$$ The cocycle equation implies that this is indeed a semigroup action and one checks that the product measure $\mu\times\eta$ on $K\times\Omega$ is $G$-invariant. We denote by $\T=\{T_g: g\in G\}$ the Koopman representation of this action on $C(K\times\Omega)$ and by $\T_2=\{T_{g,2}: g\in G\}$ its extension to $L^2(K\times\Omega,\mu\times \eta)$. 
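A concrete instance of this construction, and of Corollary \[cor:santos-walkden\], is the classical quadratic Weyl sum. The sketch below is our own illustration (all identifiers are ours): over $G=(\N,+)$ acting on $K=\R/\Z$ by $x\mapsto x+\alpha$, the map $\gamma(n,x)=e^{2\pi i(nx+\binom{n}{2}\alpha)}\in U(1)$ is a continuous cocycle, one checks that $\Fix\gamma\S_2=\{0\}$, and the averages $\frac1n\sum_{j<n}\gamma(j,x)S_j{\mathbbm{1}}$ are quadratic Weyl sums, so the corollary predicts decay uniformly in $x$.

```python
import numpy as np

ALPHA = np.sqrt(2.0)  # irrational; K = R/Z carries the rotation x -> x + ALPHA

def gamma(n, x):
    """Cocycle gamma(n, x) = exp(2*pi*i*(n*x + C(n,2)*ALPHA)) with values in U(1)."""
    return np.exp(2j * np.pi * (n * x + n * (n - 1) / 2.0 * ALPHA))

# Cocycle equation gamma(n+m, x) = gamma(m, x) * gamma(n, x + m*ALPHA)
# (g1 = n, g2 = m, g2 x = x + m*ALPHA), checked on random data.
rng = np.random.default_rng(2)
for _ in range(100):
    n, m = rng.integers(1, 50, size=2)
    x = rng.uniform()
    assert np.isclose(gamma(n + m, x), gamma(m, x) * gamma(n, x + m * ALPHA))

def averaged_norm(n, grid=512):
    """sup_x |(1/n) sum_{j<n} gamma(j, x)|, approximated on a grid of x-values."""
    x = np.linspace(0.0, 1.0, grid, endpoint=False)
    j = np.arange(n)[:, None]
    return np.abs(np.mean(gamma(j, x[None, :]), axis=0)).max()

# Since P_{Fix gamma S_2} 1 = 0 here, the averaged norms should decay.
print(averaged_norm(64), averaged_norm(4096))
```

The skew product of the text sends $(x,\omega)$ to $(x+\alpha,\omega\gamma(1,x))$ on $K\times\Torus$, and the printed quantities are the sup-norms $\|A_n^{\gamma\S}{\mathbbm{1}}\|$ for the Cesàro nets.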
In order to study the mean ergodicity of $\T$, we need some harmonic analysis. Denote by $\widehat{\Omega}$ the set of irreducible representations of $\Omega$ and by $\Rep(\Omega,N)$ the set of irreducible $N$-dimensional representations of $\Omega$ (see e.g. Folland [@folland95 Chap. 3.1]). Notice that $\widehat{\Omega}=\bigcup_{N\in\N}\Rep(\Omega,N)$ by [@deitmar09 Theorem 7.2.4]. If $\pi\in \Rep(\Omega,N)$, we can choose an inner product on $\C^N$ such that $\pi$ becomes unitary (see [@deitmar09 Lemma 7.1.1]). Hence, we may always assume that each finite dimensional representation is unitary. For $\pi\in\Rep(\Omega,N)$ and for a fixed orthonormal basis $\{e_1,\dots, e_N\}$ in $\C^N$ the maps $\pi_{i,j}\in C(\Omega)$ defined by $\pi_{i,j}(\omega)={\left\langle \pi(\omega)e_i, e_j \right\rangle}$ are called the *matrix elements* of $\pi$. On $C(K,\C^N)$ we consider the Koopman representation $\S^{(N)}=\{S_g: g\in G\}$ of $G$. If $\gamma:G\times K\to\Omega$ is a continuous cocycle and $\pi\in \Rep(\Omega, N)$ is an $N$-dimensional representation of $\Omega$, then $\pi\circ\gamma: G\times K\to U(N)$ is a continuous cocycle into the group of unitary operators on $\C^N$. Hence, in accordance with the notation from Section \[sec:koopman\] for $g\in G$ the operator $(\pi\circ\gamma)(g,\cdot) S_g$ is defined by $$(\pi\circ\gamma)(g,\cdot) S_g f(x)=\pi(\gamma(g,x))f(gx),\quad f\in C(K,\C^N), x\in K.$$ We denote by $(\pi\circ\gamma)\S^{(N)}:=\{(\pi\circ\gamma)(g,\cdot)S_g: g\in G\}$ the corresponding semigroup and by $(\pi\circ\gamma)\S_2^{(N)}$ its extension to $L^2(K,\C^N,\mu)$. \[thm:mean-ergodic-group-extension\] Let $G$ be a right amenable semigroup acting on $K$ and suppose that $\Omega$ is a compact group and $\gamma:G\times K\to\Omega$ a continuous cocycle. For the Koopman representations $\S^{(N)}$ of $G$ on $C(K,\C^N)$ and the Koopman representation $\T$ of the skew product action on $C(K\times\Omega)$ the following assertions are equivalent.
(1) $\T$ is mean ergodic on $C(K\times\Omega)$. (2) $(\pi\circ \gamma)\S^{(N)}$ is mean ergodic on $C(K,\C^N)$ for each $\pi\in\Rep(\Omega,N)$ and each $N\in\N$. For a fixed orthonormal basis $\{e_1,\dots, e_N\}$ of $\C^N$ every $f\in C(K,\C^N)$ can be written as $f=\sum_{i=1}^N f_i e_i$ for functions $f_i\in C(K)$. Therefore, for each $\pi\in\Rep(\Omega,N), (x,\omega)\in K\times \Omega$ and $g\in G$ we have $$\begin{aligned} \sum_{i=1}^N\sum_{j=1}^N T_{g}(f_i\otimes \pi_{ij})(x,\omega)e_j&= \sum_{i=1}^N\sum_{j=1}^N f_i(gx) \langle\pi(\omega\gamma(g,x))e_i,e_j\rangle e_j\\ &= \sum_{i=1}^N \pi(\omega)\pi(\gamma(g,x)) f_i(gx)e_i\\ &=\pi(\omega)\pi(\gamma(g,x))S_gf (x).\end{aligned}$$ (1)${\Rightarrow}$(2): Take $P\in\ol{\co}\T$ such that $T_gP=PT_g=P$ for all $g\in G$ and $\pi\in\Rep(\Omega,N)$ for some $N\in\N$. Define the operator $Q$ on $C(K,\C^N)$ by $$Qf(x):= \sum_{i=1}^N\sum_{j=1}^N P(f_i\otimes \pi_{ij})(x,1_\Omega)e_j,\quad (f\in C(K,\C^N), x\in K)$$ where $1_\Omega$ is the unit element of $\Omega$. We claim that $Q$ is the mean ergodic projection of $(\pi\circ \gamma)\S^{(N)}$. Since $P\in\ol{\co}\T$, there exists a net $(\sum_{n=1}^{N_\a}\lambda_{n,\a}T_{g_n})_\a\subseteq\co\T$ with $PF=\lim_\a \sum_{n=1}^{N_\a}\lambda_{n,\a}T_{g_n}F$ for all $F\in C(K\times\Omega)$. For every $f\in C(K,\C^N)$ we thus obtain $$\begin{aligned} Qf(x)&=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}\sum_{i=1}^N\sum_{j=1}^N T_{g_n}(f_i\otimes \pi_{ij})(x,1_\Omega)e_j\\ &=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}\pi(1_\Omega)\pi(\gamma(g_n,x))S_{g_n}f(x) ,\end{aligned}$$ where the limit is uniform in $x\in K$. This yields $$Qf=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}(\pi\circ\gamma)(g_n,\cdot)S_{g_n}f$$ for all $f\in C(K,\C^N)$ and hence $Q\in\ol{\co}(\pi\circ\gamma)\S^{(N)}$. To see that $Q$ is a null element of $\ol{\co}(\pi\circ\gamma)\S^{(N)}$ let $g\in G$ and $f\in C(K,\C^N)$. 
We then have $$\begin{aligned} Q((\pi\circ\gamma)(g,\cdot) S_gf) &=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}((\pi\circ\gamma)(g_n,\cdot)S_{g_n}) (\pi\circ\gamma)(g,\cdot) S_gf\\ &=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}(\pi\circ\gamma)(gg_n,\cdot)S_{gg_n}f\\ &=\lim_{\a}\sum_{n=1}^{N_\a}\lambda_{n,\a}\sum_{i=1}^N\sum_{j=1}^NT_{gg_n}(f_i\otimes\pi_{ij})(\cdot,1_\Omega)e_j\\ &=\sum_{i=1}^N\sum_{j=1}^N \underbrace{PT_g}_{=P} (f_i\otimes\pi_{ij})(\cdot,1_\Omega)e_j\\ &=Qf.\end{aligned}$$ Analogously, one verifies $(\pi\circ\gamma)(g,\cdot) S_g Qf=Qf$ and hence $Q$ is a null element of $\ol{\co}(\pi\circ\gamma)\S^{(N)}$. (2)${\Rightarrow}$(1): Take a strong right $\T$-ergodic net $(A_\a)_{\a\in\mathcal{A}}$ on $C(K\times\Omega)$ with $A_\a\in \co\T$ for all $\a\in\mathcal{A}$ (cf. the proof of Proposition 1.3 and Theorem 1.4 in [@schreiber12]). So suppose that $A_\a=\sum_{n=1}^{N_\a}\lambda_{n,\a}T_{g_n}\in\co\T$ for all $\a\in\mathcal{A}$. For $\T$ to be mean ergodic we need to show that $(A_\a F)$ converges strongly to a fixed point of $\T$ for every $F\in C(K\times\Omega)$. By [@folland95 Theorem 5.11] the linear span of the matrix elements is dense in $C(\Omega)$. Since $C(K)\otimes C(\Omega)$ is dense in $C(K\times\Omega)$ and the $A_\a$ are linear and uniformly bounded, it thus suffices to show that $A_\a(f\otimes\pi_{ij})$ converges for every $f\in C(K)$, $\pi\in\Rep(\Omega,N)$ and $i,j\in\{1,\dots,N\}$ for each $N\in\N$. So take $f\in C(K)$, $\pi\in\Rep(\Omega,N)$ and fix an orthonormal basis $\{e_1,\dots,e_N\}$ of $\C^N$.
For each $(x,\omega)\in K\times\Omega$ and $i,j\in\{1,\dots, N\}$ we have $$\begin{aligned} A_\a(f\otimes\pi_{ij})(x,\omega)&=\sum_{n=1}^{N_\a}\lambda_{n,\a}(f\otimes\pi_{ij})(g_n x,\omega\gamma(g_n,x))\\ &=\sum_{n=1}^{N_\a}\lambda_{n,\a}f(g_n x){\left\langle \pi(\omega)\pi(\gamma(g_n,x))e_i,e_j \right\rangle}\\ &=\left\langle\pi(\omega) \underbrace{\sum_{n=1}^{N_\a}\lambda_{n,\a}(\pi(\gamma(g_n,\cdot))S_{g_n})}_{=:B_\a}f^{(i)}(x),e_j\right\rangle,\end{aligned}$$ where $f^{(i)}\in C(K,\C^N)$ is defined by $f^{(i)}(x)=f(x)e_i$. One verifies that the net $(B_\a)_{\a\in \mathcal{A}}$ forms a strong right $(\pi\circ\gamma)\S^{(N)}$-ergodic net. Since by hypothesis $(\pi\circ\gamma)\S^{(N)}$ is mean ergodic, it follows from Theorem \[thm:mean-ergodic-koopman\] that $B_\a f^{(i)}$ converges in $C(K,\C^N)$ to a fixed point $h^{(i)}$ of $(\pi\circ\gamma)\S^{(N)}$ for each $i=1,\dots,N$. Defining the function $h_{ij}\in C(K\times\Omega)$ by $h_{ij}(x,\omega):={\left\langle \pi(\omega) h^{(i)}(x),e_j \right\rangle}$, we thus obtain $$\begin{aligned} \|A_\a(f\otimes\pi_{ij})&-h_{ij}\|_{C(K\times\Omega)}\\ &=\sup_{(x,\omega)\in K\times \Omega} \left|{\left\langle \pi(\omega)\left(\sum_{n=1}^{N_\a} \lambda_{n,\a}(\pi(\gamma(g_n,\cdot))S_{g_n})f^{(i)}(x)-h^{(i)}(x)\right),e_j \right\rangle}\right|\\ &\le \left\|\sum_{n=1}^{N_\a}\lambda_{n,\a}(\pi(\gamma(g_n,\cdot))S_{g_n})f^{(i)}-h^{(i)}\right\|_{C(K,\C^N)} \to 0,\end{aligned}$$ since $\pi(\omega)$ is unitary for each $\omega\in\Omega$. Hence the net $A_\a(f\otimes\pi_{ij})$ converges in $C(K\times\Omega)$ to the fixed point $h_{ij}$ of $\T$. In Theorem \[thm:mean-ergodic-group-extension\] we have reduced the mean ergodicity of $\T$ to the mean ergodicity of the semigroups $(\pi\circ\gamma)\S^{(N)}$ on $C(K,\C^N)$. Theorem \[thm:mean-ergodic-koopman\] now provides a useful criterion for the mean ergodicity of the semigroup $(\pi\circ\gamma)\S^{(N)}$. Hence, combining these two results we obtain the following corollary. 
\[cor:1\] Let the action of a right amenable semigroup $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and suppose that $\Omega$ is a compact group and $\gamma:G\times K\to\Omega$ a continuous cocycle. For the Koopman representations $\S^{(N)}$ and $\S_2^{(N)}$ of $G$ on $C(K,\C^N)$ and $L^2(K,\C^N,\mu)$, respectively, and the Koopman representation $\T$ of the skew product action on $C(K\times\Omega)$ the following assertions are equivalent. (1) $\T$ is mean ergodic on $C(K\times\Omega)$. (2) $\Fix(\pi\circ\gamma)\S_2^{(N)}\subseteq \Fix(\pi\circ\gamma)\S^{(N)}$ for each $\pi\in\Rep(\Omega,N)$ and each $N\in\N$. For the proof of Theorem \[thm:furstenberg\] below we need a characterization of the ergodicity of the above skew product action. For this purpose we extend Theorem 2.1 of Keynes and Newton [@keynes-newton76] to semigroup actions. The trivial $1$-dimensional representation $\omega\mapsto 1$ on $\Omega$ is denoted by ${\mathbbm{1}}$. \[prop:ergodicity\] Let the action of a semitopological semigroup $G$ on $K$ be ergodic with respect to some invariant measure $\mu$ and suppose that $\Omega$ is a compact group with Haar measure $\eta$ and $\gamma:G\times K\to\Omega$ a continuous cocycle. For the Koopman representations $\S_2^{(N)}$ of $G$ on $L^2(K,\C^N,\mu)$ and the Koopman representation $\T_2$ of the skew product action on $L^2(K\times\Omega,\mu\times \eta)$ the following assertions are equivalent. (1) $\Fix\T_2=\C\cdot{\mathbbm{1}}$. (2) $\Fix (\pi\circ\gamma)\S_2^{(N)}=\{0\}$ for each $\pi\in \Rep(\Omega,N)\setminus \{{\mathbbm{1}}\}$ and each $N\in\N$. (1)${\Rightarrow}$(2): Assume (1) and suppose there exists $0\neq f\in\Fix(\pi\circ\gamma)\S_2^{(N)}$ for some $N$-dimensional representation $\pi\in \Rep(\Omega,N)\setminus \{{\mathbbm{1}}\}$. 
Fix an orthonormal basis $\{e_1,\dots, e_N\}$ of $\C^N$ and define $F_j\in L^2(K\times\Omega,\mu\times \eta)$ by $F_j(x,\omega)={\left\langle \pi(\omega)f(x),e_j \right\rangle}$ for each $j\in\{1,\dots,N\}$. Then for each $j\in\{1,\dots, N\}$, $g\in G$ and $\mu\times \eta$-a.e. $(x,\omega)\in K\times\Omega$ we have $$\begin{aligned} T_{g,2}F_j(x,\omega)&= F_j(gx,\omega\gamma(g,x)) ={\left\langle \pi(\omega)\pi(\gamma(g,x))f(gx),e_j \right\rangle}\\ &={\left\langle \pi(\omega)f(x), e_j \right\rangle}=F_j(x,\omega). \end{aligned}$$ Hence $F_j\in \Fix\T_2$ for all $j\in\{1,\dots,N\}$ and thus each $F_j$ is constant. Since $f\in L^2(K, \C^N,\mu)$, we can write $f$ as $f=\sum_{i=1}^N f_i e_i$ for $f_i\in L^2(K,\mu)$. By Fubini’s Theorem we then obtain $$\begin{aligned} \int_{K\times\Omega}F_j(x,\omega)\; d(\mu\times \eta)(x,\omega) &= \sum_{i=1}^N\int_{K\times\Omega}f_i(x){\left\langle \pi(\omega)e_i,e_j \right\rangle}\;d(\mu\times \eta)(x,\omega)\\ &=\sum_{i=1}^N\int_K f_i(x)\; d\mu(x)\underbrace{\int_\Omega \pi_{ij}(\omega)\; d\eta(\omega)}_{{\left\langle \pi_{ij},{\mathbbm{1}}\right\rangle}_{L^2(\Omega,\eta)}}=0\end{aligned}$$ for all $j\in\{1,\dots,N\}$, since $\pi\neq{\mathbbm{1}}$ (see [@deitmar09 Theorem 7.2.1]). Hence $F_j=0$ for all $j\in\{1,\dots,N\}$ and thus $f=0$, which contradicts the assumption. (2)${\Rightarrow}$(1): Let $F\in\Fix \T_2$, $N\in\N$, $\pi\in\Rep(\Omega,N)$, $i\in\{1,\dots,N\}$ and define $$h_{\pi,i}(x)=\int_\Omega F(x,\omega)\pi(\omega)^{-1}e_i\;d\eta(\omega).$$ Then for every $g\in G$ and $\mu$-a.e. $x\in K$ we have $$\begin{aligned} (\pi\circ\gamma)(g,\cdot)S_{g,2} h_{\pi,i}(x)&=\pi(\gamma(g,x))\int_\Omega F(gx,\omega)\pi(\omega)^{-1}e_i\;d\eta(\omega)\\ &=\pi(\gamma(g,x))\int_\Omega F(gx,\omega\gamma(g,x))\pi(\gamma(g,x))^{-1}\pi(\omega)^{-1}e_i\;d\eta(\omega)\\ &=\int_\Omega F(x,\omega)\pi(\omega)^{-1}e_i\;d\eta(\omega)=h_{\pi,i}(x)\end{aligned}$$ by the invariance of the Haar measure $\eta$. 
Hence $h_{\pi,i}\in\Fix(\pi\circ\gamma)\S_2^{(N)}=\{0\}$ and thus $$\begin{aligned} 0&=\int_\Omega {\left\langle F(x,\omega)\pi(\omega)^{-1}e_i,e_j \right\rangle}d\eta(\omega)\\ &=\int_\Omega F(x,\omega)\ol{\pi_{ji}(\omega)}d\eta(\omega)\\ &={\left\langle F(x,\cdot),\pi_{ji} \right\rangle}_{L^2(\Omega,\eta)}\end{aligned}$$ for each $\pi\in\Rep(\Omega,N)\setminus\{{\mathbbm{1}}\}$, each $i,j\in\{1,\dots,N\}$ and $\mu$-a.e. $x\in K$. Hence $F(x,\cdot)$ is constant for $\mu$-almost every $x\in K$ and thus there exists $f\in L^2(K,\mu)$ with $F=f\otimes {\mathbbm{1}}$. Since $F\in\Fix\T_2$, it follows that $f\in\Fix\S_2^{(1)}$, and by ergodicity $f$, and consequently $F$, is constant. The following result is due to Furstenberg [@furstenberg81 Proposition 3.10] in the case of an $\N$-action on a compact metric space $K$. Our proof uses neither the Pointwise Ergodic Theorem nor so-called generic points. \[thm:furstenberg\] Let the action of a right amenable semigroup $G$ on $K$ be uniquely ergodic with invariant measure $\mu$ and suppose that $\Omega$ is a compact group with Haar measure $\eta$ and $\gamma:G\times K\to\Omega$ a continuous cocycle. If the skew product action $(g,(x,\omega))\mapsto (gx,\omega\gamma(g,x))$ of $G$ on $K\times\Omega$ is ergodic with respect to the product measure $\mu\times \eta$, then it is uniquely ergodic. The ergodicity of the skew product action is equivalent to $\Fix\T_2=\C\cdot{\mathbbm{1}}$ and thus $\Fix(\pi\circ\gamma)\S_2^{(N)}=\{0\}$ for each $\pi\in\Rep(\Omega,N)\setminus \{{\mathbbm{1}}\}$ and $N\in\N$ by Proposition \[prop:ergodicity\]. Since $\Fix\S_2^{(1)}=\C\cdot {\mathbbm{1}}$ on $L^2(K,\mu)$ by ergodicity, this yields $\Fix(\pi\circ\gamma)\S_2^{(N)}\subseteq \Fix(\pi\circ\gamma)\S^{(N)}$ for each $\pi\in\Rep(\Omega,N)$ and $N\in\N$. Hence $\T$ is mean ergodic by Corollary \[cor:1\]. To show that $\T$ is uniquely ergodic by Proposition \[prop:uniquely-ergodic-m-erg\] it thus remains to verify that $\Fix\T=\C\cdot{\mathbbm{1}}$.
Take a strong right $\T$-ergodic net $(A_\a)_{\a\in\mathcal{A}}$ on $C(K\times\Omega)$ with $A_\a=\sum_{n=1}^{N_\a}\lambda_{n,\a}T_{g_n}\in\co\T$ for all $\a\in\mathcal{A}$. Since $\T$ is mean ergodic, the net $(A_\a)$ converges strongly to a projection $P$ with $\ran P=\Fix\T$. By density of the linear span of the matrix elements in $C(\Omega)$ it thus suffices to show that $P(f\otimes\pi_{ij})$ is constant for each $f\in C(K)$ and each matrix element $\pi_{ij}$. So take $f\in C(K)$, $\pi\in\Rep(\Omega,N)$ and fix an orthonormal basis $\{e_1,\dots,e_N\}$ of $\C^N$. For each $(x,\omega)\in K\times\Omega$ and $i,j\in\{1,\dots, N\}$ we have $$\begin{aligned} A_\a(f\otimes\pi_{ij})(x,\omega)&=\sum_{n=1}^{N_\a}\lambda_{n,\a}(f\otimes\pi_{ij})(g_nx,\omega\gamma(g_n,x))\\ &=\sum_{n=1}^{N_\a}\lambda_{n,\a}f(g_n x){\left\langle \pi(\omega)\pi(\gamma(g_n,x))e_i,e_j \right\rangle}\\ &=\left\langle\pi(\omega) \underbrace{\sum_{n=1}^{N_\a}\lambda_{n,\a}(\pi(\gamma(g_n,\cdot))S_{g_n})}_{=:B^{\pi}_\a}f^{(i)}(x),e_j\right\rangle,\end{aligned}$$ where $f^{(i)}\in C(K,\C^N)$ is defined by $f^{(i)}(x)=f(x)e_i$. One verifies that the net $(B^{\pi}_\a)_{\a\in \mathcal{A}}$ forms a strong right $(\pi\circ\gamma)\S^{(N)}$-ergodic net. From Remark \[rem:fixraum\] we deduce $\Fix(\pi\circ\gamma)\S^{(N)}=\{0\}$ for each $\pi\in\Rep(\Omega,N)\setminus\{{\mathbbm{1}}\}$ and $N\in\N$, and hence $B^{\pi}_\a f^{(i)}\to 0$ in $C(K,\C^N)$ for all such $\pi$. For $\pi={\mathbbm{1}}$ the net $B^{{\mathbbm{1}}}_\a f^{(1)}$ converges to a constant $c\cdot{\mathbbm{1}}$ since $\Fix\S^{(1)}=\C\cdot{\mathbbm{1}}$ by unique ergodicity.
Hence if $\pi\neq{\mathbbm{1}}$ we obtain $$\begin{aligned} \| A_\a(f\otimes\pi_{ij})\|_{C(K\times\Omega)}&=\sup_{(x,\omega)\in K\times \Omega} \left|{\left\langle \pi(\omega)B^{\pi}_\a f^{(i)}(x),e_j \right\rangle}\right|\\ &\le \left\|B^{\pi}_\a f^{(i)}\right\|_{C(K,\C^N)} \to 0,\end{aligned}$$ since $\pi(\omega)$ is unitary for each $\omega\in\Omega$. This yields $P(f\otimes\pi_{ij})=0$ if $\pi\neq{\mathbbm{1}}$ and the same calculation shows $P(f\otimes{\mathbbm{1}})=c\cdot{\mathbbm{1}}$ if $\pi={\mathbbm{1}}$. Hence $\Fix\T=\ran P=\C\cdot{\mathbbm{1}}$, which finishes the proof. **Acknowledgement.** The author is grateful to Rainer Nagel and Pavel Zorin-Kranich for valuable comments and interesting discussions. [10]{} , [*Wiener [W]{}intner Ergodic Theorems*]{}, World Scientific Publishing Co. Inc., River Edge, NJ, 2003. , [*Analysis on Semigroups*]{}, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons Inc., New York, 1989. , [*Which distributions of matter diffract? [A]{}n initial investigation*]{}, J. Physique, 47 (1986), pp. C3–19–C3–28. International Workshop on Aperiodic Crystals (Les Houches, 1986). , [*Quasicrystals, tilings, and algebraic number theory: some preliminary connections*]{}, in The Legacy of [S]{}onya [K]{}ovalevskaya ([C]{}ambridge, [M]{}ass., and [A]{}mherst, [M]{}ass., 1985), vol. 64 of Contemp. Math., Amer. Math. Soc., Providence, RI, 1987, pp. 241–264. , [*Semigroups and amenability*]{}, in Semigroups ([P]{}roc. [S]{}ympos., [W]{}ayne [S]{}tate [U]{}niv., [D]{}etroit, [M]{}ich., 1968), Academic Press, New York, 1969, pp. 5–53. ———, [*Normed Linear Spaces*]{}, Springer-Verlag, New York, third ed., 1973. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 21. , [*Vector Measures*]{}, American Mathematical Society, Providence, R.I., 1977. Mathematical Surveys, No. 15. , [*A proof of the dynamical version of the [B]{}ombieri-[T]{}aylor conjecture*]{}, J. Math. Phys.
39 (1998), pp. 4335–4342. , [*Principles of Harmonic Analysis*]{}, Universitext, Springer-Verlag, New York, 2009. , [*Linear [O]{}perators. [I]{}. [G]{}eneral [T]{}heory*]{}, With the assistance of W. G. Bade and R. G. Bartle. Pure and Applied Mathematics, Vol. 7, Interscience Publishers, Inc., New York, 1958. , [*Abstract ergodic theorems and weak almost periodic functions*]{}, Trans. Amer. Math. Soc. 67 (1949), pp. 217–240. , [*Operator Theoretic Aspects of Ergodic Theory*]{}, Graduate Texts in Math., Springer, to appear, 2013. , [*A Course in Abstract Harmonic Analysis*]{}, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1995. , [*Recurrence in Ergodic Theory and Combinatorial Number Theory*]{}, Princeton University Press, Princeton, N.J., 1981. M. B. Porter Lectures. , [*On diffraction by aperiodic structures*]{}, Comm. Math. Phys. 169 (1995), pp. 25–43. , [*Ergodic measures for non-abelian compact group extensions*]{}, Compositio Math. 32 (1976), pp. 53–70. , [*Ergodic Theorems*]{}, vol. 6 of de Gruyter Studies in Mathematics, Walter de Gruyter & Co., Berlin, 1985. , [*Aperiodic order via dynamical systems: diffraction for sets of finite local complexity*]{}, in Ergodic Theory, vol. 485 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2009, pp. 91–112. ———, [*Continuity of eigenfunctions of uniquely ergodic dynamical systems and intensity of [B]{}ragg peaks*]{}, Comm. Math. Phys. 287 (2009), pp. 225–258. , [*Amenability*]{}, vol. 29 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 1988. , [*On uniform convergence in the [W]{}iener-[W]{}intner theorem*]{}, J. London Math. Soc. 49 (1994), pp. 493–501. , [*Real and Complex Analysis*]{}, McGraw-Hill Book Co., New York, third ed., 1987. , [*Topological [W]{}iener-[W]{}intner ergodic theorems via non-abelian [L]{}ie group extensions*]{}, Ergodic Theory Dynam. Systems 27 (2007), pp. 1633–1650.
, [*On abstract mean ergodic theorems*]{}, Tôhoku Math. J. 30 (1978), pp. 575–581. , [*Banach Lattices and Positive Operators*]{}, Springer-Verlag, New York, 1974. Die Grundlehren der mathematischen Wissenschaften, Band 215. , [*Generalized model sets and dynamical systems*]{}, in [Directions in Mathematical Quasicrystals]{}, vol. 13 of CRM Monogr. Ser., Amer. Math. Soc., Providence, RI, 2000, pp. 143–159. , [*Uniform families of ergodic operator nets*]{}, Semigroup Forum, (2012). http://dx.doi.org/10.1007/s00233-012-9444-9. , [*Topological [W]{}iener-[W]{}intner ergodic theorems and a random [$L^2$]{} ergodic theorem*]{}, Ergodic Theory Dynam. Systems 16 (1996), pp. 179–206. , [*Harmonic analysis on semigroups*]{}, J. London Math. Soc., 42 (1967), pp. 1–41.
--- abstract: 'In this paper, we study the inverse signed total domination number in graphs and present new sharp lower and upper bounds on this parameter. For example, by making use of the classic theorem of Turán (1941), we present a sharp upper bound for $K_{r+1}$-free graphs with $r\geq2$. Also, we bound this parameter for a tree from below in terms of its order and the number of leaves and characterize all trees attaining this bound.' author: - | D.A. Mojdeh[^1]\ Department of Mathematics\ University of Mazandaran\ Babolsar, IRI\ [damojdeh@umz.ac.ir]{}\ Babak Samadi\ Department of Mathematics\ Arak University\ Arak, IRI\ [b-samadi@araku.ac.ir]{} title: On the inverse signed total domination number in graphs --- [**Keywords:**]{} inverse signed total dominating function, inverse signed total domination number, $k$-tuple total domination number.\ [**2010 Mathematics Subject Classification:**]{} 05C69. Introduction ============ Throughout this paper, let $G$ be a finite connected graph with vertex set $V=V(G)$, edge set $E=E(G)$, minimum degree $\delta=\delta(G)$ and maximum degree $\Delta=\Delta(G)$. For any vertex $v\in V$, $N(v)=\{u\in V\mid uv\in E\}$ denotes the [*open neighborhood*]{} of $v$ in $G$, and $N[v]=N(v)\cup \{v\}$ denotes its [*closed neighborhood*]{}. For all $A,B\subseteq V$, let $[A,B]$ be the set of edges having one end point in $A$ and the other in $B$. We use [@we] as a reference for terminology and notation which are not defined here.\ A set $D\subseteq V$ is a [*total dominating set*]{} in $G$ if each vertex in $V$ is adjacent to at least one vertex in $D$. The [*total domination number $\gamma_{t}(G)$*]{} is the minimum cardinality of a total dominating set in $G$.\ The [*$k$-tuple total dominating set*]{} (or [*$k$-total dominating set*]{}) was introduced by Kulli [@k] as a subset $D\subseteq V$ with $|N(v)\cap D|\geq k$ for all $v\in V$, where $1\leq k\leq \delta$.
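The defining condition $|N(v)\cap D|\geq k$ is easy to test exhaustively on small graphs. The following sketch (the function names and graph encodings are ours, not taken from the text) searches for a smallest $k$-tuple total dominating set by brute force:

```python
from itertools import combinations

def adjacency(n_vertices, edges):
    """Adjacency sets of a simple graph given by an edge list."""
    adj = {v: set() for v in range(n_vertices)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def is_ktuple_total_dominating(adj, D, k):
    """Kulli's condition: |N(v) ∩ D| >= k for every vertex v."""
    return all(len(adj[v] & D) >= k for v in adj)

def min_ktuple_total_dominating_set(n_vertices, edges, k=1):
    """Size of a smallest k-tuple total dominating set, by exhaustive search."""
    adj = adjacency(n_vertices, edges)
    for size in range(1, n_vertices + 1):
        for D in combinations(range(n_vertices), size):
            if is_ktuple_total_dominating(adj, set(D), k):
                return size
    return None  # no such set exists (happens when k > delta)

# Small examples: the complete graph K_4 and the cycle C_6.
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
C6 = [(i, (i + 1) % 6) for i in range(6)]
```

For instance, the search reproduces $\gamma_{t}(K_4)=2$ and $\gamma_{t}(C_6)=4$, matching the formulas for $K_n$ and $C_n$ recorded in the observation of the next section.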
The [*$k$-tuple total domination number*]{} (or [*$k$-total domination number*]{}), denoted $\gamma_{\times k,t}(G)$, is the smallest number of vertices in a $k$-tuple total dominating set. Clearly, $\gamma_{\times 1,t}(G)=\gamma_{t}(G)$.\ Let $B\subseteq V$. For a real-valued function $f:V\rightarrow R$ we define $f(B)=\sum_{v \in B}f(v)$. Also, $f(V)$ is the weight of $f$. A [*signed total dominating function*]{}, abbreviated STDF, of $G$ is defined in [@z] as a function $f:V\rightarrow \{-1,1\}$ such that $f(N(v))\geq1$ for every $v\in V$. The [*signed total domination number*]{} (STDN) of $G$, $\gamma_{st}(G)$, is the minimum weight of a STDF of $G$. If we replace “$\geq$” and “minimum” with “$\leq$” and “maximum”, respectively, in the definition of STDN, we obtain the [*signed total $2$-independence function*]{} (ST$2$IF) and the [*signed total $2$-independence number*]{} (ST$2$IN) of the graph $G$. This concept was introduced in [@ws] and studied in [@w; @w1] as the [*negative decision number*]{}.\ An [*inverse signed total dominating function*]{}, abbreviated ISTDF, of $G$ is defined in [@hfx] as a function $f:V\rightarrow \{-1,1\}$ such that $f(N(v))\leq0$ for every $v\in V$. The [*inverse signed total domination number*]{} (ISTDN), denoted by $\gamma^{0}_{st}(G)$, is the maximum weight of an ISTDF of $G$. For more information, the reader may consult the related literature.\ In this paper, we continue the study of the concept of inverse signed total domination in graphs. In Section $2$, we present a sharp upper bound on a general graph $G$ by considering the concept of $k$-tuple total domination in graphs. Moreover, as an application of the well-known theorem of Turán [@t] about $K_p$-free graphs we give an upper bound on $\gamma^{0}_{st}(G)$ for a $K_{r+1}$-free graph $G$ and show that the bound is the best possible by constructing an $r$-partite graph attaining the bound.
In Section $3$, we give sharp lower and upper bounds on the ISTDN of regular graphs. Finally, in the last section we show that this parameter is bounded neither from above nor from below in general, even for trees. Also, we give a lower bound on the ISTDN of a tree $T$ as $\gamma^{0}_{st}(T)\geq-n+2(\lfloor\frac{\ell_{1}}{2}\rfloor+...+\lfloor\frac{\ell_{s}}{2}\rfloor)$, where $\ell_{1},...,\ell_{s}$ are the numbers of leaves adjacent to its $s$ support vertices, and characterize all trees attaining this bound. Two upper bounds ================ Throughout this paper, if $f$ is an ISTDF of $G$, then we let $P$ and $M$ denote the sets of those vertices which are assigned $1$ and $-1$ under $f$, respectively. We apply the concept of tuple total domination to obtain a sharp upper bound on $\gamma^{0}_{st}(G)$. The following observation is straightforward to verify. Let $K_n$ and $C_n$ denote the complete graph and the cycle on $n$ vertices. Then\ *(i)* $\gamma^{0}_{st}(K_n) = \begin{cases} -2\ \ &\text{if}\ \ n\equiv0\ (mod\ 2)\\ -1\ \ &\text{if}\ \ n\equiv1\ (mod\ 2). \end{cases}$\ *(ii)* $\gamma^{0}_{st}(C_n) = \begin{cases} \ 0\ \ &\text{if}\ \ n\equiv0\ (mod\ 4) \\ -1\ \ &\text{if}\ \ n\equiv1\ or\ 3\ (mod\ 4)\\ -2\ \ &\text{if}\ \ n\equiv2\ (mod\ 4). \end{cases}$\ *(iii)*  $\gamma_{t}(K_{n})=2$ for $n\geq 2$.\ *(iv)*  $\gamma_{t}(C_n) = \begin{cases} \left\lceil \frac{n}{2}\right\rceil +1\ &\text{if}\ \ n\equiv2\ (mod\ 4)\\ \left\lceil \frac{n}{2}\right\rceil &\text{otherwise}. \end{cases}$\ We use these values to show that the following bound is sharp. If $G$ is a graph of order $n$ and minimum degree $\delta\geq1$, then $$\gamma^{0}_{st}(G)\leq n-2\lceil\frac{2\gamma_{t}(G)+\delta-2}{2}\rceil$$ and this bound is sharp. Let $f:V\rightarrow\{-1,1\}$ be a maximum ISTDF of $G$.
The condition $f(N(v))\leq0$ for all $v\in V$ implies that $|N(v)\cap M|\geq \lceil\frac{\deg(v)}{2}\rceil\geq \lceil\frac{\delta}{2}\rceil$. So, $M$ is a $\lceil\frac{\delta}{2}\rceil$-tuple total dominating set in $G$ and therefore $$(n-\gamma^{0}_{st}(G))/2=|M|\geq \gamma_{\times\lceil\frac{\delta}{2}\rceil,t}(G).$$ Now let $D$ be a minimum $\lceil\frac{\delta}{2}\rceil$-tuple total dominating set in $G$. Hence, $|N(v)\cap D|\geq \lceil\frac{\delta}{2}\rceil$ for all $v\in V$. Let $u\in D$. Then $|N(v)\cap (D\setminus\{u\})|\geq \lceil\frac{\delta}{2}\rceil-1$ for all $v\in V$. This shows that $D\setminus\{u\}$ is a ($\lceil\frac{\delta}{2}\rceil-1$)-tuple total dominating set in $G$. Therefore, $\gamma_{\times\lceil\frac{\delta}{2}\rceil,t}(G)\geq \gamma_{\times(\lceil\frac{\delta}{2}\rceil-1),t}(G)+1$. Iterating this process, we finally arrive at $$\gamma_{\times\lceil\frac{\delta}{2}\rceil,t}(G)\geq \gamma_{\times(\lceil\frac{\delta}{2}\rceil-1),t}(G)+1\geq...\geq \gamma_{\times1,t}(G)+\lceil\frac{\delta}{2}\rceil-1=\gamma_{t}(G)+\lceil\frac{\delta}{2}\rceil-1.$$ Thus, (1) yields $$(n-\gamma^{0}_{st}(G))/2\geq \gamma_{t}(G)+\lceil\frac{\delta}{2}\rceil-1,$$ as desired. Moreover, this bound is sharp: it suffices to consider the complete graph $K_{n}$ for $n\geq2$ and the cycle $C_{n}$ for $n\geq3$. Note that the difference between $\gamma^{0}_{st}(G)$ and $n-2\lceil\frac{2\gamma_{t}(G)+\delta-2}{2}\rceil$ may be large. It is easy to check that for the complete bipartite graph $K_{m,n}$,\ $\gamma^{0}_{st}(K_{m,n}) = \begin{cases} \ 0\ \ &\text{if}\ \ n,m\equiv0\ (mod\ 2) \\ -1\ \ &\text{if\ $m$\ and\ $n$\ have\ different\ parity} \\ -2\ \ &\text{if}\ \ n,m\equiv1\ (mod\ 2).
\end{cases}$\ Meanwhile, $n+m-2\lceil\frac{2\gamma_{t}(K_{m,n})+\delta-2}{2}\rceil\ge n-3$, where $n=\max\{m,n\}$. The upper bound in Theorem \[th7\] works better for bipartite graphs. We first recall that a graph is $K_p$-free if it does not contain the complete graph $K_p$ as an induced subgraph. For our next upper bound, we use the following well-known theorem of Turán [@t]. \[th6\] If $G$ is a $K_{r+1}$-free graph of order $n$, then $$|E(G)|\le \frac{r-1}{2r}\cdot n^2.$$ \[th7\] Let $r\geq2$ be an integer, and let $G$ be a $K_{r+1}$-free graph of order $n$. If $c=\lceil\frac{\delta}{2}\rceil$, then $$\gamma^{0}_{st}(G)\leq n-\frac{r}{r-1}\left(-c+\sqrt{c^2+4\frac{r-1}{r}cn}\right)$$ and this bound is sharp. Let $f$ be an ISTDF of $G$ and let $v\in P$. Since $f(N(v))\leq0$, we have $|N(v)\cap M|\geq \lceil\frac{\delta}{2}\rceil$. Therefore $$|[M,P]|\geq \lceil\frac{\delta}{2}\rceil|P|.$$ Furthermore, since $f(N(v))\leq0$ also holds for $v\in M$, Theorem \[th6\] applied to the induced subgraph $G[M]$ implies $$|[M,P]|=\sum_{v\in M}|N(v)\cap P|\le \sum_{v\in M}|N(v)\cap M|=2|E(G[M])|\le\frac{r-1}{r}|M|^2.$$ Combining this inequality chain and (2), we arrive at $$\frac{r-1}{r}|M|^2+c|M|-cn\geq0.$$ Solving the above inequality for $|M|$ we obtain $$|M|\geq \frac{r}{2(r-1)}\left(-c+\sqrt{c^2+4\frac{r-1}{r}cn}\right).$$ Since the weight of $f$ equals $n-2|M|$, we obtain the desired upper bound.\ That the bound is sharp can be seen by constructing an $r$-partite graph attaining this upper bound as follows. Let $H_{i}$ be a complete bipartite graph with vertex partite sets $X_{i}$ and $Y_{i}$, where $|X_{i}|=r-1$ and $|Y_{i}|=(r-1)^2$ for all $1\leq i\leq r$. Let the graph $H(r)$ be obtained from the disjoint union of $H_{1},...,H_{r}$ by joining each vertex of $X_{i}$ (in $H_{i}$) with all vertices in the union of the $X_{j}$, $j\neq i$.
Also, we add $(r-1)^3$ edges between $Y_{i}$ and the union of the $Y_{j}$, $j\neq i$, so that every vertex of $Y_{i}$ has exactly $r-1$ neighbors in this union. Now let $Z_{i}=X_{i}\cup Y_{i+1}$ for all $1\leq i\leq r$ (indices taken modulo $r$). Then $H(r)$ is an $r$-partite graph of order $n=r^2(r-1)$ with partite sets $Z_{1},...,Z_{r}$. Clearly, every vertex in $Y_{i}$ has degree $2r-2$, which is the minimum degree $\delta=2r-2$, and hence $c=r-1$. Now we define $f:V(H(r))\rightarrow\{-1,1\}$ by $$f(v)=\left \{ \begin{array}{lll} -1 & \mbox{if} & v\in X_{1}\cup...\cup X_{r} \\ 1 & \mbox{if} & v\in Y_{1}\cup...\cup Y_{r}. \end{array} \right.$$ It is easy to check that $f$ is an ISTDF of $H(r)$ with weight $$r(r-1)^2-r(r-1)=n-\frac{r}{r-1}\left(-c+\sqrt{c^2+4\frac{r-1}{r}cn}\right).$$ This completes the proof. Regular graphs ============== Our aim in this section is to give sharp lower and upper bounds on the ISTDN of a regular graph. Henning [@h] and Wang [@w] proved that for an $r$-regular graph $G$, $$\gamma_{st}(G)\leq\left \{ \begin{array}{lll} (\frac{r^2+r+2}{r^2+3r-2})n & \mbox{if} & r\equiv0\ (mod\ 2) \vspace{1.5mm}\\ (\frac{r^2+1}{r^2+2r-1})n & \mbox{if} & r\equiv1\ (mod\ 2) \end{array} \right.$$ (see [@h]), and $$\alpha_{st}^{2}(G)\geq\left \{ \begin{array}{lll} (\frac{1-r}{1+r})n & \mbox{if} & r\equiv0\ (mod\ 2) \vspace{1.5mm}\\ (\frac{1+2r-r^2}{1+r^2})n & \mbox{if} & r\equiv1\ (mod\ 2) \end{array} \right.$$ (see [@w]). Furthermore, these bounds are sharp. Also, the following sharp lower and upper bounds on the STDN and the ST$2$IN of an $r$-regular graph $G$ can be found in [@z] and [@w1; @ws], respectively: $$\gamma_{st}(G)\geq\left \{ \begin{array}{lll} 2n/r & \mbox{if} & r\equiv0\ (mod\ 2) \\ n/r & \mbox{if} & r\equiv1\ (mod\ 2) \end{array} \right.$$ and $$\alpha_{st}^{2}(G)\leq\left \{ \begin{array}{lll} 0& \mbox{if} & r\equiv0\ (mod\ 2) \\ n/r & \mbox{if} & r\equiv1\ (mod\ 2).
\end{array} \right.$$ Applying the concept of tuple total domination we can show that there are special relationships among $\gamma^{0}_{st}(G)$, $\gamma_{st}(G)$ and $\alpha_{st}^{2}(G)$ when we restrict our discussion to regular graphs $G$. \[th8\] Let $G$ be an $r$-regular graph. If $r$ is odd, then $\gamma^{0}_{st}(G)=-\gamma_{st}(G)$, and if $r$ is even, then $\gamma^{0}_{st}(G)=\alpha_{st}^{2}(G)$. By (1), we have $$\gamma^{0}_{st}(G)\leq n-2\gamma_{\times\lceil\frac{r}{2}\rceil,t}(G).$$ Now let $D$ be a minimum $\lceil\frac{r}{2}\rceil$-tuple total dominating set in $G$. We define $f:V\rightarrow\{-1,1\}$ by $$f(v)=\left \{ \begin{array}{lll} -1 & \mbox{if} & v\in D \\ \ 1 & \mbox{if} & v\in V\setminus D. \end{array} \right.$$ Taking into account that $D$ is a $\lceil\frac{r}{2}\rceil$-tuple total dominating set, we have $f(N(v))=|N(v)\cap(V\setminus D)|-|N(v)\cap D|=\deg(v)-2|N(v)\cap D|\leq r-2\lceil\frac{r}{2}\rceil\leq0$. Thus, $f$ is an ISTDF of $G$ with weight $n-2\gamma_{\times\lceil\frac{r}{2}\rceil,t}(G)$ and therefore $\gamma^{0}_{st}(G)\geq n-2\gamma_{\times\lceil\frac{r}{2}\rceil,t}(G)$. Now the inequality (7) implies $$\gamma^{0}_{st}(G)=n-2\gamma_{\times\lceil\frac{r}{2}\rceil,t}(G).$$ The following equalities for the STDN and the ST$2$IN of $G$ can be proved similarly. $$\gamma_{st}(G)=2\gamma_{\times\lceil\frac{r+1}{2}\rceil,t}(G)-n$$ and $$\alpha_{st}^{2}(G)=n-2\gamma_{\times\lfloor\frac{r}{2}\rfloor,t}(G).$$ From (8), (9) and (10), the desired results follow. Using Theorem \[th8\] and inequalities (3),...,(6) we can bound the ISTDN of a regular graph $G$ from both above and below as follows. Let $G$ be an $r$-regular graph of order $n$.
Then $$(\frac{1-r}{1+r})n\leq \gamma^{0}_{st}(G)\leq0,\ \ \ r\equiv0\ (mod\ 2)$$ and $$-(\frac{r^2+1}{r^2+2r-1})n\leq \gamma^{0}_{st}(G)\leq-n/r,\ \ \ r\equiv1\ (mod\ 2).$$ Furthermore, these bounds are sharp. As an immediate result of Theorem \[th8\] we have $\gamma^{0}_{st}(G)=-\gamma_{st}(G)$ for every cubic graph $G$. Hosseini Moghaddam et al. [@hmsv] showed that $\gamma_{st}(G)\leq 2n/3$ is a sharp upper bound for every connected cubic graph $G$ different from the Heawood graph $G_{14}$. Therefore, if $G$ is a connected cubic graph different from $G_{14}$, then $\gamma^{0}_{st}(G)\geq-2n/3$ is a sharp lower bound. \begin{tikzpicture} \draw (0:1.3cm) -- (25.7:1.3cm) -- (51.4:1.3cm) -- (77.1:1.3cm) -- (102.8:1.3cm) -- (128.5:1.3cm) -- (154.2:1.3cm) -- (179.9:1.3cm) -- (205.6:1.3cm) -- (231.3:1.3cm) -- (257:1.3cm) -- (282.7:1.3cm) -- (308.4:1.3cm) -- (334.1:1.3cm) -- cycle; \draw (0:1.3cm) -- (128.5:1.3cm); \draw (25.7:1.3cm) -- (257:1.3cm); \draw (51.4:1.3cm) -- (179.9:1.3cm); \draw (77.1:1.3cm) -- (308.4:1.3cm); \draw (102.8:1.3cm) -- (231.3:1.3cm); \draw (154.2:1.3cm) -- (282.7:1.3cm); \draw (205.6:1.3cm) -- (334.1:1.3cm); \node at (0,-2) {Figure 1. The Heawood graph $G_{14}$}; \foreach \a in {0,25.7,51.4,77.1,102.8,128.5,154.2,179.9,205.6,231.3,257,282.7,308.4,334.1} \fill (\a:1.3cm) circle (1.5pt); \end{tikzpicture} Trees ===== Henning [@h] proved that $\gamma_{st}(T)\geq2$ for any tree $T$. A similar result cannot be established for $\gamma^{0}_{st}(T)$. In what follows we show that in general $\gamma^{0}_{st}(T)$ is bounded neither from above nor from below. In fact, we prove a stronger result as follows. For any integer $k$, there exists a tree $T$ with $\gamma^{0}_{st}(T)=k$. We consider three cases.\ [**Case 1.**]{} Let $k=0$. Consider the $v_{1}-v_{2}-v_{3}-v_{4}$ path $P_{4}$. Clearly, $f(v_{1})=f(v_{4})=1$ and $f(v_{2})=f(v_{3})=-1$ define a maximum ISTDF of $P_{4}$ with weight $0$.\ [**Case 2.**]{} Let $k\geq1$.
Consider the path $P_{k+4}$ on vertices $v_{1},...,v_{k+4}$, in which $v_{1}$ and $v_{k+4}$ are the leaves. Let $T$ be a tree obtained from $P_{k+4}$ by adding two leaves to each vertex $v_{i}$, for all $3\leq i\leq k+2$. The condition $f(N(v))\leq0$, for every ISTDF $f$ and $v\in V(T)$, implies that all support vertices must be assigned $-1$ under $f$. This shows that $f(v_{2})=...=f(v_{k+3})=-1$ and $f(v)=1$ for $v\neq v_{2},...,v_{k+3}$ is a maximum ISTDF of $T$ with weight $k$.\ [**Case 3.**]{} Let $k\leq-1$ and set $m=-k$. Let $T_{i}$ be a tree obtained from the path $P_{3}$ on vertices $v_{i1},v_{i2}$ and $v_{i3}$ by adding two leaves $\ell^{1}_{ij}$ and $\ell^{2}_{ij}$ to $v_{ij}$, for $j=1,2,3$. Let the tree $T$ be obtained from the disjoint union of $T_{1},...,T_{m}$ by adding a path on the vertices $v_{12},...,v_{m2}$. Then $f:V(T)\rightarrow\{-1,1\}$ defined by $$f(v)=\left \{ \begin{array}{lll} -1 & \mbox{if} & v=v_{ij}, \ell^{2}_{i1}, \ell^{2}_{i3} \\ \ 1 & \mbox{if} & v\neq v_{ij}, \ell^{2}_{i1}, \ell^{2}_{i3} \end{array} \right.$$ is a maximum ISTDF of $T$ with weight $k$. We bound the ISTDN of a tree from below by considering its leaves and support vertices, and characterize all trees attaining this bound. For this purpose, we introduce some notation. The sets of leaves and support vertices of a tree $T$ are denoted by $L=L(T)$ and $S=S(T)$, respectively. Consider $L_{v}$ as the set of all leaves adjacent to the support vertex $v$, and $T'$ as the subgraph of $T$ induced by the set of support vertices.\ The following lemma will be useful. \[th9\] If $T$ is a tree, then there exists an ISTDF of $T$ of weight $\gamma^{0}_{st}(T)$ that assigns the value $1$ to at least $\lfloor\frac{\ell_{i}}{2}\rfloor$ leaves of each support vertex $v_{i}$, where $\ell_{i}$ is the number of leaves adjacent to $v_{i}$.
Let $f:V\rightarrow\{-1,1\}$ be an ISTDF of weight $\gamma^{0}_{st}(T)$, and let $P$ denote the set of vertices assigned $1$ under $f$. Suppose that there exists a support vertex $v_{i}$ which is adjacent to at most $\lfloor\frac{\ell_{i}}{2}\rfloor-1$ vertices in $P$. Then, $f(N(v_{i}))\leq 2(\lfloor\frac{\ell_{i}}{2}\rfloor-1)-deg(v_{i})\leq-2$. Let $u$ be a leaf adjacent to $v_{i}$ with $f(u)=-1$. Define $f':V\rightarrow\{-1,1\}$ by $$f'(v)=\left \{ \begin{array}{lll} 1 & \mbox{if} & v=u \\ f(v) & \mbox{if} & v\neq u. \end{array} \right.$$ Then $f'$ is an ISTDF of $T$ with weight $\gamma^{0}_{st}(T)+2$, which is a contradiction. Therefore, every support vertex $v_{i}$ is adjacent to at least $\lfloor\frac{\ell_{i}}{2}\rfloor$ vertices in $P$. Consider the support vertex $v_{1}$ and the vertices $u_{1},...,u_{\lfloor\frac{\ell_{1}}{2}\rfloor}$ in $L_{v_{1}}$. Without loss of generality, we can assume that $u_{1},...,u_{\lfloor\frac{\ell_{1}}{2}\rfloor}$ have the value $1$ under $f$. Since $f$ is a maximum ISTDF, the same statement holds for the other support vertices as well. Now we define $\Omega$ to be the family of all trees $T$ satisfying:\ $(a)$ For any support vertex $w$ we have $|L_{w}|\geq2$, or $T$ is isomorphic to the path $P_{2}$;\ $(b)$ $\Delta(T')\leq1$;\ $(b_1)$ If $\Delta(T')=1$, then all vertices of $T$ are leaves or support vertices and $|L_{w}|$ is even for every support vertex $w$;\ $(b_2)$ If $\Delta(T')=0$, then $(i)$ $T$ is isomorphic to the star $K_{1,n-1}$, or $(ii)$ each support vertex is adjacent to just one vertex in $V\setminus(L\cup S)$, every vertex in $V\setminus(L\cup S)$ has at least one neighbor in $S$, and $|L_{w}|$ is even for every support vertex $w$.\ We are now in a position to present the following lower bound. Let $T$ be a tree of order $n$ with the set of support vertices $S=\{v_{1},...,v_{s}\}$.
Then $$\gamma^{0}_{st}(T)\geq-n+2(\lfloor\frac{\ell_{1}}{2}\rfloor+...+\lfloor\frac{\ell_{s}}{2}\rfloor)$$ where $\ell_{i}$ is the number of leaves adjacent to $v_{i}$. Moreover, the equality holds if and only if $T\in \Omega$. Let $f:V\rightarrow\{-1,1\}$ be a function which assigns $1$ to $\lfloor\frac{\ell_{i}}{2}\rfloor$ leaves of each $v_{i}$ and $-1$ to all remaining vertices. Then, it is easy to see that $f$ is an ISTDF of $T$. Therefore $$\gamma^{0}_{st}(T)\geq f(V)=-n+2(\lfloor\frac{\ell_{1}}{2}\rfloor+...+\lfloor\frac{\ell_{s}}{2}\rfloor).$$ Let $T$ be a tree for which the equality holds. So, we may assume that the above function $f$ is a maximum ISTDF of $T$. We first show that $T$ satisfies $(a)$. Without loss of generality, we assume that $T$ is not isomorphic to the path $P_{2}$. Suppose that there exists a support vertex $v_{k}$ adjacent to just one leaf $u$. Thus, $s\geq2$. Then the function $g:V\rightarrow\{-1,1\}$ which assigns $1$ to $u$ and to $\lfloor\frac{\ell_{i}}{2}\rfloor$ leaves of each $v_{i}$, $i\neq k$, and $-1$ to all other vertices is an ISTDF of $T$ with weight $f(V)+2$, which is a contradiction. Therefore, all support vertices are adjacent to at least two leaves.\ That the tree $T$ satisfies $(b)$ may be seen as follows. Let $\Delta(T')\geq2$. Then there are three vertices $v_{i-1}$, $v_{i}$ and $v_{i+1}$ on a path as a subgraph of $T'$. Let $u$ be a leaf adjacent to $v_{i}$ with $f(u)=-1$. Since $v_{i}$ is adjacent to the vertices $v_{i-1}$ and $v_{i+1}$ with $f(v_{i-1})=f(v_{i+1})=-1$, the function $g:V\rightarrow\{-1,1\}$ defined by $$g(v)=\left \{ \begin{array}{lll} 1 & \mbox{if} & v=u \\ f(v) & \mbox{if} & v\neq u \end{array} \right.$$ is an ISTDF of $T$ with weight $f(V)+2$, a contradiction. Thus, $\Delta(T')\leq1$. We now distinguish two cases depending on whether $\Delta(T')=0$ or $1$.\ [**Case 1.**]{} $\Delta(T')=1$.
Suppose to the contrary that there exists a vertex $v$ in $V\setminus (S\cup L)$ adjacent to a support vertex $v_{k}$ with degree one in $T'$. Then $v$ must be assigned $-1$ under $f$. Let $u$ be a leaf adjacent to $v_{k}$ with $f(u)=-1$. We define a function $h:V\rightarrow\{-1,1\}$ by $$h(v)=\left \{ \begin{array}{lll} 1 & \mbox{if} & v=u \\ f(v) & \mbox{if} & v\neq u. \end{array} \right.$$ Since $vv_{k}$ is an edge of $T$, $h$ is an ISTDF of $T$ with weight $f(V)+2$, which is a contradiction. Therefore, all vertices of $T$ are leaves or support vertices. This implies that $S=\{v_{1},v_{2}\}$ and $v_{1}v_{2}$ is an edge of $T$. Now let $|L_{v_{1}}|$ be odd. Then the function $h':V\rightarrow\{-1,1\}$ that assigns $1$ to $\lfloor\frac{\ell_{1}}{2}\rfloor+1$ leaves of $v_{1}$ and to $\lfloor\frac{\ell_{2}}{2}\rfloor$ leaves of $v_{2}$, and $-1$ to all remaining vertices, is an ISTDF of $T$ with weight $f(V)+2$, a contradiction. A similar argument shows that $|L_{v_{2}}|$ is even, as well.\ [**Case 2.**]{} $\Delta(T')=0$. If $T$ is isomorphic to the star $K_{1,n-1}$, then $(b_2)$ holds. Otherwise, $s\geq2$. Suppose to the contrary that there exists a support vertex $w$ adjacent to at least two vertices $u_{1},u_{2}\in V\setminus(L\cup S)$, and let $u$ be a leaf adjacent to $w$ with $f(u)=-1$. Since $f(u_{1})=f(u_{2})=-1$, the function $r:V\rightarrow\{-1,1\}$ defined by $$r(v)=\left \{ \begin{array}{lll} 1 & \mbox{if} & v=u \\ f(v) & \mbox{if} & v\neq u \end{array} \right.$$ is an ISTDF of $T$ with weight $f(V)+2$, which is a contradiction. Moreover, if $u$ is a vertex in $V\setminus(L\cup S)$ with no neighbor in $S$, then the function $r':V\rightarrow\{-1,1\}$ for which $r'(u)=1$ and $r'(v)=f(v)$ for all remaining vertices is an ISTDF of $T$ with weight $f(V)+2$; this contradicts the fact that $f$ is a maximum ISTDF of $T$.
Finally, the proof of the fact that $|L_{w}|$ is even for every support vertex $w$ is similar to that of Case 1.\ These two cases imply that $T$ satisfies $(b_1)$ and $(b_2)$.\ Conversely, suppose that $T\in \Omega$ and $f:V\rightarrow\{-1,1\}$ is an ISTDF of $T$ with weight $$f(V)=\gamma^{0}_{st}(T)>-n+2(\lfloor\frac{\ell_{1}}{2}\rfloor+...+\lfloor\frac{\ell_{s}}{2}\rfloor).$$ By Lemma \[th9\], we may assume that $f$ assigns $1$ to at least $\lfloor\frac{\ell_{i}}{2}\rfloor$ leaves $u^{i}_{1},...,u^{i}_{\lfloor\frac{\ell_{i}}{2}\rfloor}$ of the support vertex $v_{i}$, for $1\leq i\leq s$. The inequality (11) shows that there exists a vertex $u\in V\setminus \cup_{i=1}^{s}\{u^{i}_{1},...,u^{i}_{\lfloor\frac{\ell_{i}}{2}\rfloor}\}$ such that $f(u)=1$. Since all support vertices must be assigned $-1$ under $f$, either $u\in L_{v_{i}}$ or $u$ is a vertex in $V\setminus(L\cup S)$ adjacent to a support vertex $v_{i}$, for some $1\leq i\leq s$. It is not hard to see that this contradicts the fact that $T$ belongs to $\Omega$ and that $f(N(v_{i}))\leq0$ for all $1\leq i\leq s$. This completes the proof. [9]{} M. Atapour, S. Norouzian, S.M. Sheikholeslami and L. Volkmann, [*Bounds on the inverse signed total domination numbers in graphs*]{}, Opuscula Math. [**36**]{}, no. 2 (2016), 145–152. M.A. Henning, [*Signed total domination in graphs*]{}, Discrete Math. [**278**]{} (2004), 109–125. S.M. Hosseini Moghaddam, D.A. Mojdeh, B. Samadi and L. Volkmann, [*New bounds on the signed total domination number of graphs*]{}, Discuss. Math. Graph Theory [**36**]{} (2016), 467–477. Z. Huang, Z. Feng and H. Xing, [*Inverse signed total domination numbers of some kinds of graphs*]{}, Information Computing and Applications, ICICA 2012, Part II, CCIS 308, 315–321, 2012. V.
Kulli, [*On $n$-total domination number in graphs*]{}, Graph Theory, Combinatorics, Algorithms and Applications, pp. 319–324, SIAM, Philadelphia (1991). P. Turán, [*On an extremal problem in graph theory*]{}, Math. Fiz. Lapok [**48**]{} (1941), 436–452. C. Wang, [*Lower negative decision number in a graph*]{}, J. Appl. Math. Comput. [**34**]{} (2010), 373–384. C. Wang, [*The negative decision number in graphs*]{}, Australas. J. Combin. [**41**]{} (2008), 263–272. H. Wang and E. Shan, [*Signed total $2$-independence in graphs*]{}, Utilitas Math. [**74**]{} (2007), 199–206. D.B. West, [*Introduction to Graph Theory*]{} (Second Edition), Prentice Hall, USA, 2001. B. Zelinka, [*Signed total domination number of a graph*]{}, Czechoslovak Math. J. [**51**]{} (2001), 225–229. [^1]: Corresponding author
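The small cases above can be verified computationally. A minimal brute-force sketch (the function name and the adjacency-list encoding are ours, not notation from the paper) enumerates all sign assignments and returns the maximum weight of an ISTDF; the path $P_4$ from Case 1 and the $2$-regular cycle $C_4$ both give $0$, while the $3$-regular $K_4$ gives $-2=-\gamma_{st}(K_4)$, consistent with Theorem \[th8\].

```python
from itertools import product

def istdn(adj):
    """Inverse signed total domination number: the maximum weight of an
    f: V -> {-1, 1} with f(N(v)) <= 0 for every vertex v (brute force)."""
    n = len(adj)
    best = None
    for f in product((-1, 1), repeat=n):
        # f(N(v)) sums f over the open neighborhood of v
        if all(sum(f[u] for u in adj[v]) <= 0 for v in range(n)):
            w = sum(f)
            best = w if best is None else max(best, w)
    return best

p4 = [[1], [0, 2], [1, 3], [2]]                    # path P4 (Case 1)
c4 = [[1, 3], [0, 2], [1, 3], [0, 2]]              # 2-regular cycle C4
k4 = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]  # 3-regular K4
print(istdn(p4), istdn(c4), istdn(k4))  # -> 0 0 -2
```

The exhaustive search is exponential in $n$ and is only meant as a sanity check on graphs of a handful of vertices.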
--- abstract: 'We present a new way of performing hypothesis tests on scattering data, by means of a perturbatively calculable classifier. This classifier exploits the “history tree" of how the measured data point might have evolved out of any simpler (reconstructed) points along classical paths, while explicitly keeping quantum-mechanical interference effects by copiously employing complete leading-order matrix elements. This approach extends the standard Matrix Element Method to an arbitrary number of final state objects and to exclusive final states where reconstructed objects can be collinear or soft. We have implemented this method into the standalone package [<span style="font-variant:small-caps;">hytrees</span>]{} and have applied it to Higgs boson production in association with two jets, with subsequent decay into photons. [<span style="font-variant:small-caps;">hytrees</span>]{} allows one to construct an optimal classifier to discriminate this process from large Standard Model backgrounds. It further allows one to find the most sensitive kinematic regions that contribute to the classification.' author: - Stefan Prestel - Michael Spannowsky bibliography: - 'ref.bib' title: 'HYTREES: Combining Matrix Elements and Parton Shower for Hypothesis Testing' --- Introduction {#sec:intro} ============ The separation of interesting signal events from large Standard-Model induced backgrounds is one of the biggest challenges in searches for new physics and when measuring particle properties at the LHC. This problem is magnified when the final states of interest have a large probability to be produced in proton-proton collisions according to the Standard Model. Typical classifications into signal and background events are based on observables that are characteristic of the quantum numbers of the particles involved in each hypothesis. For example, the quantum numbers (e.g. charges, spin and mass) of a resonance result in a specific radiation profile in the detector.
The radiation induced by such a resonance is more likely to populate specific phase space regions. Thus, to infer if a process is induced by signal or by background, one wants to know how likely it is that the measured radiation profile was induced by either hypothesis, i.e. $\mathcal{P}(\{p_i\}|S)$ for signal and $\mathcal{P}(\{p_i\}|B)$ for background, where $\{p_i\}$ denotes the set of 4-momenta measured in the detector. The Neyman-Pearson Lemma shows [@James:2000et] that taking the ratio of the two probabilities, $$\label{eq:chi} \chi = \frac{\mathcal{P}(\{p_i\}|S)}{\mathcal{P}(\{p_i\}|B)},$$ yields an ideal classifier. This approach underlies the so-called Matrix Element Method (MEM) [@Kondo:1988yd], which has been used in a large variety of contexts [@Abazov:2004cs; @Abulencia:2005pe; @Cranmer:2006zs; @Gao:2010qx; @Andersen:2012kn; @Martini:2015fsa; @Gritsan:2016hjl]. In the MEM, the probabilities $\mathcal{P}(\{p_i\}|S)$ and $\mathcal{P}(\{p_i\}|B)$ are calculated directly from the matrix elements of the respective “hard" processes. In [@Soper:2011cr; @Soper:2012pb] the parton-level MEM has been extended to include the parton shower in the evaluation of the probabilities, and has been implemented in Shower [@Soper:2011cr; @Soper:2012pb; @FerreiradeLima:2016gcz] and Event [@Soper:2014rya; @Englert:2015dlp; @FerreiradeLima:2017iwx] Deconstruction, thereby allowing for the analysis of an arbitrary number of final state objects. Information from the parton shower is particularly important in jet-rich final states and in the comparison of the substructure of jets for classification. Here, exclusive fixed-order matrix elements do not provide a good description of nature, due to the appearance of collinear and soft divergences in the matrix elements. Conversely, LHC signals and backgrounds are often predicted by using General-Purpose Event Generators (see e.g. [@Buckley:2011ms]) to produce scattering event pseudo-data.
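As a toy illustration of the Neyman-Pearson ratio of Eq. \[eq:chi\] (using hypothetical one-dimensional likelihoods of our own choosing, not the matrix-element probabilities discussed below):

```python
import math

# Hypothetical 1-D stand-ins for P({p_i}|S) and P({p_i}|B): two unit-width
# Gaussians centered at +1 (signal) and -1 (background).
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def chi(x):
    """Neyman-Pearson classifier: ratio of signal to background likelihood."""
    return gauss(x, 1.0, 1.0) / gauss(x, -1.0, 1.0)

# Here chi(x) = exp(2x), so any cut on chi is a monotone cut on x;
# at the symmetry point x = 0 the two hypotheses are equally likely.
print(round(chi(0.0), 6))  # -> 1.0
```

In the MEM the two densities are instead evaluated from squared matrix elements, but the cut on the ratio plays the same role.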
In this context, several frameworks to combine the parton shower with the hard matrix elements of processes containing multiple jets have been laid out, e.g. the MLM [@Mangano:2001xp] procedure, the CKKW [@Catani:2001cc] and CKKW-L [@Lonnblad:2001iq] methods, or the iterated matrix-element corrections approach [@Giele:2011cb; @Fischer:2017yja]. These formalisms make it possible to avoid double counting between jets generated during the parton shower step and the matrix element, such that multiple jets can simultaneously be described with matrix-element accuracy. We are proposing to combine techniques used traditionally for the CKKW-L and the iterated matrix-element correction approach of [@Fischer:2017yja], and then use the resulting procedure to construct sophisticated perturbative weights of the full input event that will facilitate the classification between signal and background. To calculate $\mathcal{P}(\{p_i\}|S)$ and $\mathcal{P}(\{p_i\}|B)$, one needs to evaluate all possible combinations of parton shower and hard process histories that can give rise to the final state $\{p_i\}$. Conceptually, such an analysis method is suitable for any final state of interest consisting of reconstructed objects, i.e. arbitrary numbers of isolated leptons, photons and jets. The approach, dubbed [<span style="font-variant:small-caps;">hytrees</span>]{}, is in line with the Shower/Event deconstruction method, but goes beyond these by including hard matrix elements with multiple jet emissions to calculate the weights of the event histories. We describe here the first implementation of such a method and showcase it in the context of a concrete example which is highly relevant for Higgs phenomenology, i.e. $pp \to (\mathrm{H}\to \gamma \gamma) + \mathrm{jets}$. The outline of the paper is as follows. In Sec. \[sec:implementation\] we discuss the details of the [<span style="font-variant:small-caps;">hytrees</span>]{} algorithm.
[<span style="font-variant:small-caps;">hytrees</span>]{} relies on the [<span style="font-variant:small-caps;">Dire</span>]{} parton shower [@Hoche:2015sya] to calculate the weights of the event histories. For details on the splitting probabilities used in the [<span style="font-variant:small-caps;">Dire</span>]{} dipole shower we refer to Appendix \[app:dire\]. In Sec. \[sec:results\] we apply [<span style="font-variant:small-caps;">hytrees</span>]{} to the study of the classification of the process $pp \to (\mathrm{H}\to \gamma \gamma) + \mathrm{jets}$ versus processes without a Higgs boson that lead to $pp \to \gamma \gamma +\mathrm{jets}$. We offer conclusions in Sec. \[sec:conclusion\]. Implementation of [<span style="font-variant:small-caps;">hytrees</span>]{} {#sec:implementation} =========================================================================== ![image](structure2.pdf){width="100.00000%"} The definition of the classifier $\chi$ suggested in Eq. \[eq:chi\] is in principle very intuitive. A practical implementation, however, requires assumptions and abstractions before the classifier can be calculated on experimental data. Thus, to test and develop the classifier, we will use event-generator pseudo-data, and we will evaluate the new classifier on this pseudo-data. To be concrete, we use realistic (showered and hadronised) events, i.e. each “event" consists of a collection of particles – photons, leptons, long-lived hadrons, [*etc.*]{} – with each particle represented by a 4-vector stored in <span style="font-variant:small-caps;">HepMc</span> event format [@Dobbs:2001ck]. The hard processes underlying these events were generated using MadGraph [@Alwall:2014hca], and showered and hadronised using [<span style="font-variant:small-caps;">Pythia</span>]{} [@Sjostrand:2014zea]. These events are further processed to arrive at final states consisting of reconstructed objects, i.e. isolated leptons, isolated photons or jets.
A lepton $(e,\mu)$ or photon is considered isolated from hadronic energy by demanding that the total hadronic activity in a cone of radius $R=0.3$ around the object contains less than $10\%$ of its $p_T$; in addition, the object is required to have $p_T \geq 20$ GeV and $|y|<2.5$. Jets are reconstructed using the anti-kT algorithm [@Cacciari:2008gp] as implemented in `fastjet` [@Cacciari:2011ma], with radius $R=0.4$. We only consider events with at least two jets of $p_{T,j} \geq 35$ GeV, since looser cuts are usually not considered in experimental analyses at the LHC. After these steps, the final state of interest is now considerably simplified compared to the particle-level final state, consisting only of $\mathcal{O}(10)$ reconstructed objects. On these states, we will want to calculate $\chi$ of Eq. \[eq:chi\] from first principles, relying on perturbative methods. Thus, we want to be as insensitive as possible to experimental or non-perturbative effects, such as hadronisation or pileup-induced soft scatterings. Using reconstructed objects as input to our calculation protects us to a large degree from contributions that are theoretically poorly controllable. To allow the calculation of the classifier to be as detailed and physical as possible, we will directly use a parton shower to calculate the necessary factors. For this, we identify the reconstructed objects in the event with partons of a parton shower, i.e. with the perturbative part of the event generation before hadronisation. The first necessary step is to redistribute momenta to ensure that all jet momenta can be mapped to on-shell parton momenta, and then to add beam momenta defined by momentum conservation in the center-of-mass frame. Each of these events is then translated to *all possible* partonic pseudo-events, by assigning all possible parton flavors and all possible color connections to the jets[^1].
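The isolation criterion can be sketched in a few lines (a simplified stand-alone check on $(p_T, y, \phi)$ tuples of our own devising; a real analysis would work on the `fastjet`/<span style="font-variant:small-caps;">HepMc</span> objects directly):

```python
import math

def delta_r(a, b):
    """Distance in the (rapidity, azimuth) plane between (pt, y, phi) tuples."""
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(a[1] - b[1], dphi)

def is_isolated(cand, hadrons, r_cone=0.3, frac=0.10, pt_min=20.0, y_max=2.5):
    """Hadronic pT inside the cone must stay below frac * pT of the candidate,
    which must itself pass the pT and |y| acceptance cuts."""
    pt, y = cand[0], cand[1]
    if pt < pt_min or abs(y) > y_max:
        return False
    cone_pt = sum(h[0] for h in hadrons if delta_r(cand, h) < r_cone)
    return cone_pt < frac * pt

photon = (40.0, 0.5, 1.0)
hadrons = [(3.0, 0.55, 1.05), (50.0, 2.0, -2.0)]  # one soft track in cone, one far away
print(is_isolated(photon, hadrons), is_isolated(photon, hadrons + [(5.0, 0.45, 0.9)]))
# -> True False
```

The second call fails isolation because the extra 5 GeV track pushes the cone activity above $10\%$ of the photon $p_T$.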
The resulting collection of events is then passed to the parton shower algorithm[^2] to calculate all necessary weights. The general philosophy is illustrated in Fig. \[fig:histories\]. A reasonable probability for the six configurations in the lowest layer should depend on the $2\rightarrow 2$ matrix elements for particles connected to the “hard" scattering (grey blob). At the same time, the probability of the three configurations in the middle layer should be proportional to the $2\rightarrow 3$ matrix elements for particles connected to the blob, and the overall probability of the top layer should be proportional to the $2\rightarrow 4$ matrix elements. It is crucial to keep these conditions in mind when attempting a classification, since in general the distinction between “hard scattering" and “subsequent radiation" is only well-defined in the phase-space region of ordered soft and/or collinear emissions. In such phase-space regions, the quantum-mechanical amplitudes factorize into independent building blocks (such as splitting functions or eikonal factors) that effectively make up a “classical" path. If the kinematics of the event is such that interference effects between the amplitudes for different paths (i.e. hypotheses) are sizable, then this needs to be reflected in the classifier. There should not be any discriminating power for such events. Here, we will build a classifier that *does depend on assigning a classical path* to phase-space points. The kinematics of each unique point will be used to calculate the rate of classical paths, such as the ones illustrated in Fig. \[fig:histories\]. In phase-space regions that allow a (quantum-mechanically) sensible discrimination, the rates of the dominant paths will factorize into products of squared low-multiplicity matrix elements and approximate (splitting) kernels. In all other regions, we should be as agnostic as possible to the path.
These two regions can be reconciled by always using the complete, non-factorized matrix elements to calculate the rate, and by only employing the approximate (splitting) kernels to “project out" the rate of paths. This will guarantee that we minimize the dependence on assigning classical paths in inappropriate phase-space regions. We can succeed in defining the rate by the full non-factorized matrix element, for events of varying multiplicity, by employing the iterated matrix-element correction probabilities derived in [@Fischer:2017yja] (see Eq. (15) therein) when calculating the probability of each path. The simultaneous use of matrix elements for different multiplicities is a significant improvement over traditional matrix-element methods, which only leverage matrix elements for a fixed multiplicity at a time. The calculation of the classifier thus proceeds by constructing all possible ways in which the partonic input state could have evolved out of a sequence of lower-multiplicity partonic states, by explicitly constructing all lower-multiplicity intermediate states via successive recombination of three into two particles, until no further recombination is possible. This construction of all “histories" follows closely the ideas used in matrix-element and parton-shower merging methods [@Lonnblad:2001iq]. The probability of an individual recombination sequence relies on full matrix elements as much as possible. In particular, we ensure that not only the probability of the lowest-multiplicity state is given by leading-order matrix elements, but that the probability of higher-multiplicity states is simultaneously determined by leading-order matrix elements. Further improvements of the method to incorporate running coupling effects, rescaling of parton distributions due to changes in initial-state longitudinal momentum components, as well as all-order corrections for momentum configurations with large scale hierarchies are discussed below.
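The recursive enumeration of histories can be sketched as follows. This is a deliberately simplified toy (it merges pairs of abstract labels, whereas the actual clustering recombines three particles into two with full kinematics, flavor and color assignments), but it shows how the number of paths grows with multiplicity:

```python
from itertools import combinations

def histories(state):
    """All paths from `state` down to a 2-particle core, by recombining any
    pair of labels at each step (toy stand-in for the 3 -> 2 clustering)."""
    if len(state) <= 2:
        return [[tuple(state)]]
    paths = []
    for i, j in combinations(range(len(state)), 2):
        merged = state[:i] + state[i + 1:j] + state[j + 1:] + [state[i] + state[j]]
        for tail in histories(merged):
            paths.append([tuple(state)] + tail)
    return paths

# A 4-particle state offers 6 recombinations at the first step and 3 at the
# second, so there are 6 * 3 = 18 distinct clustering paths:
print(len(histories(["a", "b", "c", "d"])))  # -> 18
```

In [<span style="font-variant:small-caps;">hytrees</span>]{} each such path additionally carries a perturbative weight built from matrix elements and splitting kernels, and kinematically forbidden recombinations are discarded.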
Let us illustrate the calculation using the red paths in Fig \[fig:histories\]. One definite path (from dashed red through solid red to the top layer, e.g. following the rightmost lines in the figure) will contribute to the overall probability as $$\begin{aligned} \label{eq:calcP} \mathcal{P}_{\textnormal{H}}= {\left|\mathcal{M}\left({hj}\right)^{(1)}\right|^2} \otimes P_{\textnormal{H}(1)} &\otimes& \overbrace{\left[\frac{{\left|\mathcal{M}\left({\mathrm{H}jj}\right)^{}\right|^2} }{{\left|\mathcal{M}\left({hj}\right)^{(1)}\right|^2}P_{\textnormal{H}(1)}+{\left|\mathcal{M}\left({hj}\right)^{(2)}\right|^2}P_{\textnormal{H}(2)}}\right]}^{{\mathcal}{R}(\mathrm{H}jj)}\\ \otimes P_{\textnormal{H}} &\otimes&\left[ \frac{{\left|\mathcal{M}\left({\gamma\gamma jj}\right)^{}\right|^2} }{{\left|\mathcal{M}\left({\mathrm{H}jj}\right)^{}\right|^2}{\mathcal}{R}(\mathrm{H}jj)P_{\textnormal{H}} + {\left|\mathcal{M}\left({\gamma\gamma j}\right)^{}\right|^2}{\mathcal}{R}(\gamma\gamma j)P_{\textnormal{QED}} + {\left|\mathcal{M}\left({\gamma j j}\right)^{}\right|^2}{\mathcal}{R}(\gamma j j)P_{\textnormal{QCD}}} \right]~\nonumber,\end{aligned}$$ where $P_{\textnormal{X}}$ are approximate transition kernels, for example given by dipole splitting functions [@Gustafson:1987rq; @Catani:1996vz]. In order to construct this probability for the case of Fig \[fig:histories\], splitting functions for all QCD and QED vertices, as well as for Higgs-gluon, Higgs-fermion and Higgs-photon couplings have been calculated. When summing over the two dashed red paths, the full ${\left|\mathcal{M}\left({\mathrm{H}jj}\right)^{}\right|^2}$ is recovered, while summing over the dashed green and dashed blue paths yield the full mixed QCD/QED matrix elements ${\left|\mathcal{M}\left({\gamma\gamma j}\right)^{}\right|^2}$ and ${\left|\mathcal{M}\left({\gamma jj}\right)^{}\right|^2}$, respectively. 
The total sum of the probabilities of all paths reduces to ${\left|\mathcal{M}\left({\gamma\gamma jj}\right)^{}\right|^2}$, as desired. This discussion is complicated significantly by phase-space constraints, but can be generalized to an arbitrary multiplicity and to arbitrary splittings [@Fischer:2017yja]. Note that it is straightforward to “tag" a path of recombinations as QCD-, QED- or Higgs-type by simply examining the intermediate configuration. The sum of all probabilities of all Higgs-type paths is an excellent measure of how Higgs-like the input state was, while the sum of all non-Higgs-type probabilities is an excellent measure of how background-like the input was. Following Eq. \[eq:chi\], it is thus natural to define the probability of the Higgs-hypothesis as $$\begin{aligned} \chi_{\textnormal{H}} \equiv \frac{\mathcal{P}(\{p_i\}|\,\textnormal{Higgs})}{\mathcal{P}(\{p_i\}|\,\neg\,\textnormal{Higgs})}, \label{eq:chih}\end{aligned}$$ where the respective probabilities are defined as $$\mathcal{P}(\{p_i\}|\,\textnormal{Higgs}) = \frac{\sum\mathcal{P}_{\textnormal{H}}}{\sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) } ~~~ \mathrm{and} ~~~ \mathcal{P}(\{p_i\}|\,\neg\,\textnormal{Higgs}) = \frac{\sum (\mathcal{P}_{\textnormal{QCD}}+\mathcal{P}_{\textnormal{QED}})}{ \sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) }. \label{eq:probs}$$ A plethora of tags defining a hypothesis can be envisioned – once all paths of all intermediate states leading to the highest-multiplicity (input) state are known, it is straightforward to attribute a probability to each hypothesis. Of course, not all hypotheses are sensible from the quantum-mechanical perspective if interference effects are important. 
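Once every path carries a tag and a probability, Eqs. \[eq:chih\] and \[eq:probs\] reduce to simple sums; a schematic with made-up path weights (the tags and numbers are purely illustrative) is:

```python
def chi_higgs(paths):
    """chi_H from (tag, probability) pairs with tag in {'H', 'QCD', 'QED'}.
    The common normalisation of Eq. [eq:probs] cancels in the ratio."""
    p_sig = sum(p for tag, p in paths if tag == "H")
    p_bkg = sum(p for tag, p in paths if tag in ("QCD", "QED"))
    return p_sig / p_bkg

# Hypothetical path weights for a single event:
paths = [("H", 0.5), ("H", 0.25), ("QCD", 0.125), ("QED", 0.125)]
print(chi_higgs(paths))  # -> 3.0
```

Events dominated by Higgs-type paths give $\chi_{\textnormal{H}}\gg1$, purely background-type paths give $\chi_{\textnormal{H}}\ll1$.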
In this case, we expect that if the hypothesis is tested on pseudo-data with the [<span style="font-variant:small-caps;">hytrees</span>]{} method, the results are similar, irrespective of how the pseudo-data was generated. There should not be strong discrimination power for such problematic hypotheses. Finally, a discrimination based on matrix elements alone is likely to give an unreasonable probability for multi-jet hadronic states, since e.g. large hierarchies in jet transverse momenta will not be described by fixed-order matrix elements alone, and because the overall flux of initial-state partons is tied to changes in the parton distribution functions. Thus, we include the all-order effects of the evolution between intermediate states into the probability of each path. For a path $p$ of intermediate states $S_i^{(p)}, i\in[1,n^{(p)}]$ transitioning to the next higher multiplicity at scales $t_i^{(p)}$, this can be done by correcting the probability of each path to $$\begin{aligned} \label{eq:calcWP} \mathcal{P}_{\textnormal{A}}\rightarrow \mathcal{P}_{\textnormal{A}} w_{p}\quad\textnormal{where}\quad w_{p} = \prod_{i=1}^{n^{(p)}} \Pi(S_{i-1}^{(p)}; t_{i-1}^{(p)},t_i^{(p)}) \, \frac{\alpha(S_i^{(p)},t_i^{(p)})}{ \alpha^{\textnormal{\tiny FIX}}(S_i^{p})} \, \frac{f(S_{i-1}^{(p)}; x_{i-1}^{(p)},t_{i-1}^{(p)})} {f(S_{i-1}^{(p)}; x_{i-1}^{(p)},t_{i}^{(p)})} \end{aligned}$$ where $\Pi(S_{i-1}^{(p)}; t_{i-1}^{(p)},t_i^{(p)})$ is the no-branching probability of state $S_{i-1}^{(p)}$ between scales $t_{i-1}^{(p)}$ and $t_i^{(p)}$, which is directly related to Sudakov form factors [@Sudakov:1954sw; @Sjostrand:1985xi]. We have also introduced the placeholder $\alpha^{\textnormal{\tiny FIX}}(S_i^{p})$ for the coupling constant of the branching producing state $S_i^{p}$ out of state $S_{i-1}^{(p)}$, and $\alpha(S_i^{(p)},t_i^{(p)})$ as a placeholder for the same coupling evaluated taking the kinematics of state $S_i^{p}$ into account[^3]. 
Finally, the parton luminosity appropriate for state $S_{i-1}^{(p)}$, evaluated at longitudinal momentum fraction $x_{i-1}^{(p)}$ and factorization scale $t_{i-1}^{(p)}$, is collected in the factor $f(S_{i-1}^{(p)}; x_{i-1}^{(p)},t_{i-1}^{(p)})$. Ratios of these factors account for the rescaling of the initial flux due to branchings. The weights $w_p$ are also a key component of the CKKW-L algorithm, which employs trial showers to generate the no-branching probabilities, and attaches the PDF- and $\alpha_s$ ratios as event weight to pretabulated fixed-order input events. In [<span style="font-variant:small-caps;">hytrees</span>]{}, we also invoke trial showers to generate the no-branching factors, i.e. the calculation of the weights $w_p$ is performed by directly using a realistic parton shower, specifically the [<span style="font-variant:small-caps;">Dire</span>]{} plugin to <span style="font-variant:small-caps;">Pythia</span>. To correctly calculate $w_{p}$ for all possible paths, we extend this parton shower to include QED radiation (so that the shower can give a sensible all-order QED-resummed weight for the green paths in Fig. \[fig:histories\]) and to allow the transitions $q\rightarrow qH, g\rightarrow g H$ and $H\rightarrow\gamma\gamma$ (in order to correctly assign the red clustering paths in Fig. \[fig:histories\]). Details on these improvements, and on the use of matrix-element corrections in [<span style="font-variant:small-caps;">Dire</span>]{}, are given in Appendix \[app:direqed\]. Application to $\mathrm{H} \to \gamma \gamma$ + jets {#sec:results} ==================================================== To assess the performance of our approach in separating signal from background, and to showcase the scope of its potential applications, we study the signal process $pp \to \mathrm{H}jj$ with subsequent decay of the Higgs boson into photons, $\mathrm{H} \to \gamma \gamma$, at a center-of-mass energy of $\sqrt{s} = 13$ TeV.
This process is of importance in studying the quantum numbers of the Higgs boson, e.g. its couplings to other Standard Model particles [@Corbett:2015mqf; @Englert:2015hrx; @Englert:2017aqb; @Ellis:2018gqa] or its CP properties [@Plehn:2001nj; @Englert:2012ct; @Englert:2012xt; @Bernlochner:2018opw; @Englert:2019xhk]. Just like for the Higgs discovery channel with an inclusive number of jets, $pp \to (\mathrm{H} \to \gamma \gamma) + X$, this channel suffers from a large Standard-Model continuum background. We generate signal and background events using MadGraph for the hard process cross section and [<span style="font-variant:small-caps;">Pythia</span>]{} for showering and hadronisation. At the generation level, we apply minimal cuts for the photons ($p_{T,\gamma} \geq 20$ GeV, $|\eta| < 2.5$ and $\Delta R_{\gamma \gamma} \geq 0.2$), and on the final state partons $j$ ($p_{T,j} \geq 30$ GeV, $|\eta| \leq 4.5$ and $\Delta R_{jj} \geq 0.4$). While we do not consider detector efficiencies for the jets, we simulate the detector response in the reconstruction of the photons by smearing their energy such that the Breit-Wigner distributed invariant mass $m^2_{\gamma\gamma}= (p_{\gamma,1} + p_{\gamma,2})^2$ has a width of 2 GeV after reconstruction. Under such inclusive cuts, the signal process receives contributions from gluon fusion, as well as from weak-boson fusion [@DelDuca:2001eu; @Klamke:2007cu]. Standard approaches to exploit this signal process often rely on the application of weak boson fusion cuts [@Rainwater:1998kj; @Figy:2003nv], which render gluon fusion contributions sub-dominant. Instead, here we will focus on the gluon fusion contributions exclusively, aiming to apply [<span style="font-variant:small-caps;">hytrees</span>]{} to discriminate the continuum di-photon background from the gluon-fusion induced Higgs signal[^4]. \ In Fig. \[fig:chi\_higgs\_scale\_var\] we show $\log_{10}(\chi_{\textnormal{H}})$, as calculated according to Eqs.
\[eq:chi\] and \[eq:calcP\], for Higgs-signal pseudo-data (left) and non-Higgs background samples (right). It is apparent that the observable $\chi$ can discriminate between signal and background events. Signal events have on average large $\chi_{\textnormal{H}}$, i.e. they result in a relatively large value for $\mathcal{P}(\{p_i\}|S)$ in comparison to $\mathcal{P}(\{p_i\}|B)$, and vice versa for background events. Since the [<span style="font-variant:small-caps;">hytrees</span>]{} method is based on calculating well-defined perturbative factors, it goes beyond many existing classification methods by also providing an estimate of the theoretical uncertainty of the hypothesis-testing variable $\chi_{\textnormal{H}}$. We find that the theoretical uncertainty, estimated by varying the renormalisation scale between $t/2 \leq \mu_R \leq 2 t$ (where $t$ are the [<span style="font-variant:small-caps;">Dire</span>]{} parton-shower evolution variables [@Hoche:2015sya], as necessary to evaluate running $\alpha_s$ effects at the nodal splittings in the history tree, and to perform $\mu_R$-variations of the no-branching factors), is very small for $\chi$ in our example. This is somewhat remarkable, as signal and background enter to lowest order at $\mathcal{O}(\alpha^2_s)$ for the hard process. As shown in Fig. \[fig:p\_higgs\_scale\_var\], $\mathcal{P}(\{p_i\}|S)$ and $\mathcal{P}(\{p_i\}|B)$ separately (and multiplied by the total probability to ensure that no artificial numerator-denominator cancellations occur) show a large sensitivity to scale variations, which cancels when taking the ratio to calculate $\chi_{\textnormal{H}}$. This cancellation can also be understood in terms of the performance of the classifier. In the calculation of both the signal and the background hypotheses, partons are interpreted as emitted from the initial-state partons, thus forming the final states with two (or more) jets.
As this underlying dynamics is governed by QCD, it is very similar for signal and background, so that this part of the event does not contain much discriminative information. Furthermore, changing the argument of $\alpha_s$ affects signal and background in a similar way. ![\[fig:chi\_higgs\_gamma\_var\] Classification of signal or background pseudodata according to the Higgs hypothesis, using different values of $\Gamma_{H}$. Only configurations with diphoton invariant masses in a small window are shown, to further demonstrate the discrimination power w.r.t. a simple mass cut. ](chi_higgs_gamma_var.pdf){width="50.00000%"} This raises the question of whether all information used in discriminating signal from background is in fact contained in the electroweak part of the event, and could e.g. be captured by analyzing the invariant-mass distribution $m_{\gamma \gamma}$. We can investigate the effect of a mass-window cut within experimental uncertainties by selecting signal and background events that satisfy $|m_{\gamma \gamma} - 125~\mathrm{GeV}| < 2~\mathrm{GeV}$, in line with the way we smeared the energy of the photons. Fig. \[fig:chi\_higgs\_gamma\_var\] shows that, when such a mass cut is applied, the normalised distributions of $\chi_{\textnormal{H}}$ overlap much more for signal and background samples, indicating that the very good separation observed in Fig. \[fig:chi\_higgs\_scale\_var\] rests largely on the fact that the photons in the signal arise from the decay of a narrow resonance. Still, the signal samples result on average in larger values of $\chi_{\textnormal{H}}$ than the background samples, and thus $S/B$ can be improved with a cut on $\chi_{\textnormal{H}}$. In order to construct the history tree for the [<span style="font-variant:small-caps;">hytrees</span>]{} method, it was necessary to introduce “Higgs splitting kernels" (cf. App. \[app:dire\]) to define the probability of the $\textnormal{H}\rightarrow \gamma\gamma$ decay.
In principle, it would be permissible to use the physical Higgs-boson width when calculating these splitting kernels. However, it is reasonable to expect that this might lead to an artificially strong discrimination power. Fig. \[fig:chi\_higgs\_gamma\_var\] shows that this is not the case, by varying the Higgs-boson width in the splitting kernel over a very large range. The [<span style="font-variant:small-caps;">hytrees</span>]{} method effectively takes all possible observables into account to discriminate between two hypotheses. To investigate further how this relates to cutting on $m_{\gamma \gamma}$, Fig. \[fig:probsvsmass\] shows the probabilities $\mathcal{P}$ directly, binned in the differential distributions $m_{\gamma \gamma}$ and $m_{jj}$. This highlights that [<span style="font-variant:small-caps;">hytrees</span>]{} might also be useful for finding optimal cuts in a cut-and-count analysis, since [<span style="font-variant:small-caps;">hytrees</span>]{} can quantify how much differential observables discriminate between different hypotheses. As shown in Fig. \[fig:probsvsmass\], $m_{jj}$ is very similar for signal and backgrounds, while $m_{\gamma \gamma}$ is very discriminative. The sensitivity of any observable in classifying events can be studied in this way. Classification with respect to the Higgs or no-Higgs hypotheses is not the only application for [<span style="font-variant:small-caps;">hytrees</span>]{} in our example. One can imagine constructing different classification observables to test different hypotheses. For example, we could define $\chi_{\textnormal{QED}}$ and $\chi_{\textnormal{QCD}}$ in analogy to Eqs. (\[eq:chih\]) and (\[eq:probs\]), i.e.
$$\begin{aligned} \chi_{\textnormal{QED}} \equiv \frac{\mathcal{P}(\{p_i\}|\,\textnormal{QED})}{\mathcal{P}(\{p_i\}|\,\neg\,\textnormal{QED})} \qquad\textnormal{and} \qquad \chi_{\textnormal{QCD}} \equiv \frac{\mathcal{P}(\{p_i\}|\,\textnormal{QCD})}{\mathcal{P}(\{p_i\}|\,\neg\,\textnormal{QCD})},\end{aligned}$$ with the probabilities $$\begin{aligned} \mathcal{P}(\{p_i\}|\,\textnormal{QED}) &= \frac{\sum\mathcal{P}_{\textnormal{QED}}}{\sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) }, ~~~~\mathcal{P}(\{p_i\}|\,\neg\,\textnormal{QED}) = \frac{\sum (\mathcal{P}_{\textnormal{QCD}}+\mathcal{P}_{\textnormal{H}})}{ \sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) }\\ \mathcal{P}(\{p_i\}|\,\textnormal{QCD}) &= \frac{\sum\mathcal{P}_{\textnormal{QCD}}}{\sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) }, ~~~~\mathcal{P}(\{p_i\}|\,\neg\,\textnormal{QCD}) = \frac{\sum (\mathcal{P}_{\textnormal{QED}}+\mathcal{P}_{\textnormal{H}})}{ \sum(\mathcal{P}_{\textnormal{H}} + \mathcal{P}_{\textnormal{QCD}}+ \mathcal{P}_{\textnormal{QED}}) }.\end{aligned}$$ In Fig. \[fig:chis\], we show how the Higgs-signal and non-Higgs background samples fare with respect to these three classification variables $\chi_\textnormal{H}$, $\chi_\mathrm{QED}$ and $\chi_\mathrm{QCD}$. The best discrimination between signal and background is observed in $\chi_\textnormal{H}$. This is not surprising, as $\chi_\textnormal{H}$ explicitly tests whether there is a Higgs boson in the sample or not. $\chi_\mathrm{QCD}$ and $\chi_\mathrm{QED}$ perform as expected, yielding on average a larger value of $\chi$ for the background sample, and smaller values for the events that do contain a Higgs boson. While $\chi_\mathrm{QCD}$ retains some discriminative power between the Higgs and no-Higgs samples, the least discriminative variable is $\chi_\mathrm{QED}$.
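For illustration, the normalisation and ratios in the equations above can be sketched numerically. The per-path probabilities below are invented placeholders, not [<span style="font-variant:small-caps;">hytrees</span>]{} output:

```python
# Toy sketch: combine per-path probabilities P_H, P_QCD, P_QED of one
# event's history tree into the normalised hypothesis probabilities and
# the classifier ratios chi_X = P(X) / P(not X) defined in the text.

def chi_variables(paths):
    """paths: list of (label, probability) with label in {'H', 'QCD', 'QED'}."""
    totals = {'H': 0.0, 'QCD': 0.0, 'QED': 0.0}
    for label, prob in paths:
        totals[label] += prob
    norm = sum(totals.values())
    chi = {}
    for label, total in totals.items():
        p_x = total / norm               # P({p_i} | X)
        chi[label] = p_x / (1.0 - p_x)   # chi_X = P(X) / P(not X)
    return chi

# Invented per-path weights for an event dominated by Higgs paths:
event = [('H', 8.0e-3), ('H', 2.0e-3), ('QCD', 1.0e-3), ('QED', 1.0e-4)]
chi = chi_variables(event)
```

With these placeholder weights, `chi['H']` exceeds one while `chi['QCD']` and `chi['QED']` fall below one, mirroring the behaviour described for the signal sample.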
Hence, with respect to the green path in Fig. \[fig:histories\], the signal and background samples provide very few separable kinematic features. The $\mathrm{QED}$ hypothesis provides a very similar classifier irrespective of the event sample, indicating that no “classical" path in the history tree is preferred, and that thus interferences are relevant. It is comforting that in this case the [<span style="font-variant:small-caps;">hytrees</span>]{} method does not, as desired, produce an artificial discrimination power by referring to classical paths. In conclusion, by applying [<span style="font-variant:small-caps;">hytrees</span>]{} to known signal and background samples it is possible to optimise the discriminating observable, and to obtain an improved understanding of the kinematic features that allow a discrimination between signal and backgrounds. Conclusions {#sec:conclusion} =========== The classification of events into signal and background is the basis for all searches and measurements at collider experiments. By building on the Event Deconstruction method [@Soper:2011cr; @Soper:2014rya], CKKW-L merging [@Lonnblad:2001iq] and the iterated matrix-element correction approach of [@Fischer:2017yja], we have developed and implemented a novel way to classify realistic (i.e. fully showered and hadronised) final states according to different theory hypotheses. This method has been implemented in a standalone package, called [<span style="font-variant:small-caps;">hytrees</span>]{}, which will be made publicly available. In principle this method is applicable to any final state and any theoretical hypothesis. However, there is a practical limitation due to the sharply increasing time it takes to evaluate complex final states with many (colored) particles.
While invisible particles have not been implemented yet, approaches for taking them into account in the hypothesis testing exist [@FerreiradeLima:2017iwx] and will be included in a future release of [<span style="font-variant:small-caps;">hytrees</span>]{}. We have applied [<span style="font-variant:small-caps;">hytrees</span>]{} to the gluon-fusion induced production of H$jj$ with subsequent decay H$ \to \gamma \gamma$. This process receives large backgrounds in which the photons can either be produced in the hard interaction of the process $pp \to \gamma \gamma jj$, or be radiated off the final-state or initial-state quarks of the process $pp \to jj$. Detector effects were rudimentarily taken into account by smearing the photon momenta. [<span style="font-variant:small-caps;">hytrees</span>]{} can directly calculate the probability that an event was produced through a transition of interest. We have shown that [<span style="font-variant:small-caps;">hytrees</span>]{} can confidently separate signal and background samples with respect to the Higgs or no-Higgs hypothesis. While the method takes into account all possible kinematic observables simultaneously to classify the event according to the hypotheses under consideration, it is also possible to study how much individual observables, or combinations of observables, contribute to the overall classification. Thus, [<span style="font-variant:small-caps;">hytrees</span>]{} can be used to optimise cuts for cut-and-count based analyses very efficiently. The flexible, first-principles, calculation-based approach enables us to obtain an improved understanding of the kinematic features that allow us to discriminate between signal and backgrounds for very large classes of processes at any high-energy collider experiment.
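As a hypothetical illustration (not part of [<span style="font-variant:small-caps;">hytrees</span>]{}) of how a classifier output such as $\chi_{\textnormal{H}}$ could feed a cut-and-count optimisation, one can scan cut values and maximise a simple significance estimate; all sample values below are invented:

```python
import math

def best_cut(signal_vals, background_vals, cuts):
    """Return the cut on a classifier output (e.g. chi_H) that maximises
    the simple significance estimate S / sqrt(B); cuts leaving B = 0 are
    skipped to keep the estimate finite."""
    best_cut_val, best_sig = None, 0.0
    for cut in cuts:
        n_s = sum(1 for v in signal_vals if v > cut)
        n_b = sum(1 for v in background_vals if v > cut)
        if n_b > 0 and n_s / math.sqrt(n_b) > best_sig:
            best_cut_val, best_sig = cut, n_s / math.sqrt(n_b)
    return best_cut_val, best_sig

# Invented toy samples: signal tends to larger classifier values.
sig_sample = [5.0, 8.0, 12.0, 3.0, 9.0]
bkg_sample = [0.2, 0.5, 1.5, 0.8, 4.0, 0.1, 0.3, 2.0]
cut, significance = best_cut(sig_sample, bkg_sample, cuts=[0.5, 1.0, 2.0, 4.5])
```

In a real analysis the significance estimate, binning and systematic uncertainties would of course be chosen with more care; the sketch only shows the scan structure.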
Acknowledgments =============== We thank Valentin Hirschi for collaboration during an early stage of this project, for sharing a private code to generate all color connections, and for longstanding help with using MadGraph to generate the C++ matrix-element code employed for matrix-element corrections. MS is grateful to Dave Soper for a longstanding collaboration on the Shower/Event Deconstruction approach. MS thanks the University of Tuebingen and the Humboldt Society for support and hospitality during the finalisation of parts of this work. SP would like to thank Walter Giele for collaboration on dipole showers for QED splittings. QCD, QED and Higgs splittings in the [<span style="font-variant:small-caps;">Dire</span>]{} dipole shower {#app:direqed} ======================================================================================================== \[app:dire\] Realistic classification of final states containing jets and photons according to a hypothesis requires the construction of all possible branching histories that could have produced the final states. Thus, all possible ways of splitting or recombining the particles in the final state have to be considered. For the problem at hand, this requires a simultaneous description of QCD and QED branchings, both at fixed- and all-order perturbative accuracy. If a hypothesis depends not only on the final-state particles alone, but also on reconstructed intermediate states such as Higgs bosons, it is also necessary to incorporate the relevant intermediate branchings. The description of QCD splittings used in this publication is implemented in the [<span style="font-variant:small-caps;">Dire</span>]{} plugin to [<span style="font-variant:small-caps;">Pythia</span>]{}, and consists of a partial-fractioned dipole parton shower including mass effects, as documented in detail in [@Hoche:2015sya]. For all QCD splittings, we evaluate the running coupling at the evolution scale assigned to the splitting, i.e.
$\alpha(S_i^{(p)},t_i^{(p)}) = \alpha_{s}(t_i^{(p)})$. We implement QED emissions as an extension of the partial-fractioned dipole shower of [<span style="font-variant:small-caps;">Dire</span>]{} [@Hoche:2015sya], using the same evolution and energy-sharing variables as well as kinematical splitting functions (and mass corrections) as discussed in [@Hoche:2015sya]. The crucial difference from the treatment of QCD is that we allow all pairs of electric charges to form dipoles that coherently emit photons, similar to the ideas presented in [@Kniehl:1992ra] and more recently discussed in [@Schonherr:2017qcj] and [@Kleiss:2017iir]. In contrast to the latter, we split the soft-photon radiation pattern into two pieces, each assigned to one dipole splitting kernel. The color factors in the QCD splitting functions of [@Hoche:2015sya] are further replaced by the electric (dipole) charge correlators, which can readily be negative. This inconvenience is addressed by using the weighted parton-shower algorithm [@Hoeche:2009xc; @Lonnblad:2012hz] implemented in [<span style="font-variant:small-caps;">Dire</span>]{}. The assignment of recoilers for the $\gamma\rightarrow f\bar f$ splitting takes guidance from the simultaneous emission of a soft quark pair in QCD (see e.g. [@Catani:1999ss]), which can be thought of as being emitted from a parent color dipole [@Dulat:2018vuy]. The latter calculation is of course not directly applicable to QED. Nevertheless, in the absence of other concrete ideas, we allow all electrically charged particles to act as spectators for the $\gamma\rightarrow f\bar f$ splitting. For all QED branchings, we do not employ a running coupling and instead fix $\alpha$ to the Thomson value (cf. [@Kniehl:1992ra]), i.e. $\alpha(S_i^{(p)},t_i^{(p)}) = \alpha_{em}(0) = 0.00729735$. More details on the formalism of QED showers will be presented elsewhere [@Giele:2019xxx].
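The weighted parton-shower algorithm referenced above handles negative kernels by generating trial emissions with a positive overestimate and compensating with signed weights. A simplified, single-trial sketch of this idea (not the [<span style="font-variant:small-caps;">Dire</span>]{} implementation) is:

```python
import random

def weighted_accept(f_val, g_val, rng):
    """One trial of the weighted veto step: propose with an overestimate
    g >= |f|, accept with probability |f|/g carrying weight sign(f),
    otherwise reject with the compensating weight (g - f)/(g - |f|)."""
    assert g_val >= abs(f_val) > 0.0
    if rng.random() < abs(f_val) / g_val:
        return True, (1.0 if f_val > 0.0 else -1.0)
    return False, (g_val - f_val) / (g_val - abs(f_val))

# The weighted acceptance rate reproduces f/g even for a negative kernel:
rng = random.Random(42)
f_kernel, g_over = -0.3, 1.0   # illustrative kernel value and overestimate
n_trials = 200000
accepted_weight = 0.0
for _ in range(n_trials):
    accepted, weight = weighted_accept(f_kernel, g_over, rng)
    if accepted:
        accepted_weight += weight
estimate = accepted_weight / n_trials   # converges to f/g = -0.3
```

The rejection weight is what lets the product of weights over a full evolution reproduce the correct no-branching probability; the price is an increased event-weight variance, which is the fluctuation problem discussed next.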
Since our QED splitting kernels can readily become negative, we expect that event-weight fluctuations due to the weighting algorithm can become a significant problem. This is, however, largely circumvented by including QCD and QED matrix-element corrections up to $pp\rightarrow \gamma\gamma j j j$ in the formalism of [@Fischer:2017yja] into the parton shower: since the matrix-element corrections guarantee the correct radiation pattern irrespective of the splitting kernels, it is legitimate to enforce positive splitting kernels for splittings yielding states for which matrix-element corrections are available, thus avoiding large weight fluctuations. To allow testing the hypothesis of an intermediate Higgs boson, we further include the emission rate $g\rightarrow g$H and the decay rate H$\rightarrow \gamma\gamma$ directly in the parton-shower evolution. The emission rate $q\rightarrow q$H is omitted, since its contribution is only present for heavy quarks and, due to the quark masses, is further suppressed by phase space. The evolution variable and phase-space mapping for the emission rate $g\rightarrow g$H are identical to those of (massive) QCD or QED splittings, and the splitting function is a simple uniform weight $\Gamma_{\textnormal{H}\rightarrow gg}(m_\mathrm{H})$. This makes it possible to assign a probability to the production vertex of the Higgs boson, and is sufficient as long as the emission rate is effectively absent in the shower evolution. All gluons that can be reached by tracing leading-$N_C$ color connections are possible spectators for this splitting. The coupling value $\alpha(S_i^{(p)},t_i^{(p)})$ for the $g\rightarrow g$H emission is fixed to $\alpha(S_i^{(p)},t_i^{(p)})=\Gamma_{\mathrm{H}\rightarrow gg}(m_\mathrm{H})$. The virtuality of the photon pair serves as evolution variable for the H$\rightarrow \gamma\gamma$ decay.
In this case, the splitting kernel is defined by $$\begin{aligned} P_{\mathrm{H}\rightarrow \gamma\gamma} = \frac{1}{\mathcal{S}} \Gamma_{\mathrm{H}\rightarrow \gamma\gamma} \frac{8\pi p_\mathrm{H}^2}{ (p_\mathrm{H}^2 - m_\mathrm{H}^2)^2 + p_\mathrm{H}^2\Gamma^2_{\mathrm{tot,H}} },\end{aligned}$$ where $\mathcal{S}$ is the number of possible recoilers for this splitting. In line with the reasoning for the $\gamma\rightarrow f\bar f$ splitting above, we allow all gluons as spectators for this splitting. Again, it is worth noting that we do employ matrix-element corrections for shower splittings that produce $pp\rightarrow \gamma\gamma j j j$ or less complicated states, such that for the purposes of this publication, the concrete form of $P_{\mathrm{H}\rightarrow \gamma\gamma}$ is of minor importance. The coupling value $\alpha(S_i^{(p)},t_i^{(p)})$ for the H$\rightarrow \gamma\gamma$ decay is fixed to $\alpha(S_i^{(p)},t_i^{(p)})=\Gamma_{\mathrm{H}\rightarrow \gamma\gamma}(m_\mathrm{H})$. [^1]: We want to thank Valentin Hirschi for collaboration at an early stage of this project, and in particular for sharing a private code to generate all color connections in a parton ensemble. [^2]: These events are stored in Les Houches event files [@Alwall:2006yp], and read by <span style="font-variant:small-caps;">Pythia</span>, which also acts as an interface to the parton shower. [^3]: For details on running-coupling choices, see Appendix \[app:direqed\]. [^4]: Various approaches have been proposed to reliably separate gluon fusion from weak-boson fusion in this channel [@Rainwater:1998kj; @Englert:2012ct; @Andersen:2012kn].
--- abstract: 'We report on the first observation of the Sunyaev–Zel’dovich (SZ) effect, a distortion of the Cosmic Microwave Background (CMB) radiation by hot electrons in clusters of galaxies, with the Diabolo experiment at the IRAM 30 m telescope. Diabolo is a dual–channel 0.1 K bolometer photometer dedicated to the observation of CMB anisotropies at 2.1 and 1.2 mm. A significant brightness decrement in the 2.1 mm channel is detected in the direction of three clusters (Abell 665, Abell 2163 and CL0016+16). With a 30 arcsecond beam and a 3 arcminute beamthrow, this is the highest angular resolution observation to date of the SZ effect. Interleaved integrations on targets and on nearby blank fields were performed in order to check and correct for systematic effects. Gas masses can be directly inferred from these observations.' address: - 'Laboratoire d’Astrophysique de l’Observatoire de Grenoble, 414 rue de la Piscine, BP 53, F–38041 Grenoble Cedex 9' - 'Institut d’Astrophysique Spatiale, Bât. 121, Université Paris XI, F–91405 Orsay Cedex France' - ' Centre de Recherche sur les Très Basses Températures, 25 Avenue des Martyrs BP166, F–38042 Grenoble Cedex 9 France' - 'IRAM, avd Divina Pastora 7 Nucleo Central 18012 Granada Spain' - 'Centre d’Études Spatiales sur les Rayonnements, 9 Avenue du Colonel Roche, BP 4346, F–31029 Toulouse Cedex France' - ' Enrico Fermi Institute, University of Chicago, 5460 South Ellis Avenue, Chicago, IL 60637, USA' author: - 'F.-X. Désert' - 'A. Benoit' - 'S. Gaertner' - 'J.–P. Bernard' - 'N. Coron' - 'J. Delabrouille' - 'P. de Marcillac' - 'M. Giard' - 'J.–M. Lamarre' - 'B. Lefloch' - 'J.–L. Puget' - 'A. Sirbi' title: | Observations of the Sunyaev–Zel’dovich effect\ at high angular resolution towards\ the galaxy clusters A665, A2163 and CL0016+16 --- Introduction ============ After the discovery of the Cosmic Microwave Background (CMB) radiation by Penzias & Wilson [@Penz65], and the observation of hot ionised gas in clusters of galaxies through its X–ray emission [@Lea73], Sunyaev & Zel’dovich [@Suny70] soon realised that the scattering of the CMB photons by the hot electrons of the intracluster medium (ICM) should generate a distinctive spectral distortion of the CMB blackbody spectrum in the (sub)millimetre and radio domain. Several millimetre and radio detections towards a dozen clusters have recently been obtained using various techniques [@Birk91a; @Birk91b; @Carl96; @Jone93; @Wilb94; @Herb95]. These results, which are compatible with the expected brightness decrement, constitute direct evidence for the SZ effect and have profound cosmological importance: - They are a strong confirmation of the cosmological origin of the CMB radiation. - The mass of the ionised gas in clusters of galaxies can be obtained from SZ measurements, even for unresolved clusters [@Delu95]. If hydrostatic equilibrium is assumed, the total mass can also be deduced from the SZ profile, and compared with cluster mass estimates by other methods (gravitational lensing, velocity fields) for consistency. This, together with cluster number counts, yields an estimate of $\Omega$ at cluster scales. - The detection via the SZ effect of very distant clusters ($z \simeq 1$ and above) would put severe constraints on $\Omega$, as only in a low-density Universe could structures form so early ([*e.g.*]{} [@Barb96]). - The angular diameter distance to a cluster can be estimated from the CMB intensity change due to the SZ effect combined with the observed X-ray surface brightness.
For low-redshift clusters, the combination of SZ and X-ray data thus allows estimating the Hubble constant $H_0$ [@Birk79; @Cava79; @Silk78]. For high-redshift clusters, because of the additional dependence of the angular diameter distance on the deceleration parameter $q_0$ [@Silk78], it is also possible, in principle, to constrain $\Omega$ and $\Lambda$ (see [@Koba96] for an application of the method to available SZ measurements). - The measurement of the kinetic SZ effect on many clusters using an optimal filtering technique would make a measurement of very large scale velocity flows possible [@Haeh96; @Agha97]. - The SZ effect is the strongest “contamination” source for the measurement of the primary CMB anisotropies at high angular resolution and in the millimetre spectral window, and therefore deserves careful study (one’s noise is the other’s signal), especially in the light of the preparation of the Planck mission [@COSA96]. In an effort to detect the SZ effect in clusters at high redshift, we installed the Diabolo photometer at the focus of the IRAM 30 m millimetre radiotelescope (MRT). This photometer saw its first light (Benoit [@Beno]) at the Millimetre Infrared Testa Grigia Observatory (MITO) in Italy on a 2.6 m telescope. The task of detecting a signal which is a part in a million of the background is very challenging, but at a wavelength around 2 mm the confusion by other astrophysical sources (dust, point sources, CMB anisotropies [@Fran91; @Fisc93]) is minimal. In addition, the high angular resolution achieved with the 30 m facility (about 30 arcseconds for the two Diabolo channels) reduces the beam dilution on distant clusters. Owing to major improvements in bolometer and cooling technology, this task can now be achieved in a reasonable integration time (a few hours). The observations and data reduction method are described in section \[se:obse\], and the results are presented and discussed in section \[se:resu\].
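For context, the decrement searched for at 2.1 mm follows from the standard non-relativistic thermal SZ spectral dependence, $\Delta T/T_{\rm CMB} = y\,(x\coth(x/2)-4)$ with $x = h\nu/k_B T_{\rm CMB}$, which changes sign near 217 GHz. A minimal sketch of this textbook formula (not part of this paper's analysis pipeline):

```python
import math

def tsz_spectral_factor(wavelength_mm, t_cmb=2.725):
    """Non-relativistic thermal SZ spectral factor f(x) = x*coth(x/2) - 4,
    with x = h*nu / (k_B * T_CMB), so that Delta T / T_CMB = y * f(x).
    Negative f means a brightness decrement, positive f an increment."""
    h = 6.62607015e-34    # Planck constant [J s]
    k_b = 1.380649e-23    # Boltzmann constant [J / K]
    c = 2.99792458e8      # speed of light [m / s]
    x = h * (c / (wavelength_mm * 1e-3)) / (k_b * t_cmb)
    return x / math.tanh(x / 2.0) - 4.0
```

Evaluating this factor, the 2.1 mm channel sits below the ~217 GHz null ($f<0$, a decrement) while the 1.2 mm channel sits above it ($f>0$, an increment), consistent with the decrement reported in the abstract.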
Observations at the IRAM 30 m telescope {#se:obse} ======================================= The Diabolo experiment is a dual–channel photometer whose innovative cooling system, bolometers and readout electronics are prototypes for space submillimetre astronomical applications (the ESA Planck Surveyor mission [@COSA96] and the FIRST cornerstone [@Pill97]). It is a cryostat with two bolometers observing around 1.2 and 2.1 mm, cooled to 0.1 K by a compact dilution fridge. The two bands matching the atmospheric windows are obtained with low-pass filters common to the two channels and free-standing bandpass meshes, after the light is selected by a dichroic beam splitter. The bolometer at 1.2 mm provides a constant monitoring of the so-called atmospheric noise in a co–aligned and co–extensive beam with respect to the 2.1 mm “astrophysical” bolometer channel. The instrument, described at length by Benoît [@Beno], has been modified as follows for the present observations: - Only one bandpass filter is used for the 2.1 mm channel, instead of two, in order to increase the detection efficiency. We checked that the small spectral leaks that appeared at high frequency have no influence on the SZ measurements. - New readout electronics, now fully digital, have been used. Each bolometer is AC square–wave modulated in opposition in a Winston bridge with a stable capacity. The out-of-equilibrium voltage is amplified by a cold FET and warm amplifiers, AD converted, and then numerically demodulated after the electrical transients have been blanked. The digital signal is proportional to the total power received by the bolometer up to an arbitrary offset constant. A complete discussion of the readout electronics scheme can be found in [@Gaer97]. - A NbSi thermometer has been installed on the dilution fridge to monitor the $100\, {\rm mK}$ cold base-plate temperature. Another resistance used as a heater now allows an active regulation of this base-plate temperature to within about $30\, \mu\rm K$.
This is especially useful for skydips (see Paragraph \[se:skd\]) and to avoid changes in the responsivity. - A warm polyethylene lens (90% transmission) has been installed in front of the cryostat to match the f–ratio of the telescope (about ten) to that of the instrument (about five). The photometer was installed at the Nasmyth focus of the telescope for a test run from November 10th to November 14th 1995, when the precipitable water vapour was too large (typically 5 to 9 mm) for sensitive measurements. The sensitivity and calibration of the instrument could nevertheless be measured on bright sources. Some 100 hours of observing time were allocated from December 1st to 4th, from which the following results have been obtained. These observations were complemented with a few more hours in December 1996. Calibration ----------- ### Alignment The alignment of the cryostat with respect to the telescope axes was achieved using a movable hot load situated between the entrance of the cryostat and the secondary. The recording of the signal in total-power mode gives the beam direction and the appropriate corrections to be applied for the cryostat optical axis to be pointed at the center of the secondary, which is crucial for straylight minimisation. ### Pointing Pointing corrections were made every two hours, using data obtained by scanning across a strong (several Jy) source (planet, quasar) situated near the target. The signal was modulated by the wobbling secondary at about 1 Hz. Fig. \[fig:pointing\] shows the demodulated signal as a function of telescope direction along lines of constant elevation and constant azimuth. A Gaussian fit is made to determine pointing corrections if necessary. ### The beam pattern The beam pattern was measured on Saturn in the November 95 test run with a simple azimuth-elevation mapping technique. It is shown in Fig. \[fig:beama\] for the two wavelengths.
The beam centers (as defined by Gaussian one-dimensional fits) are within less than 2 seconds of arc of each other, confirming the accuracy of the optical positioning of the two bolometers with respect to the system optical axis inside the cryostat. Fig. \[fig:beamb\] shows the two integrated beam profiles, defined as a function of the angular radius $\theta$ from the center of the beam: $$B(\theta)=\int_0^\theta {\rm d}\theta'\, \theta'\int_0^{2\pi} {\rm d}\phi\, \frac{S(\theta',\phi)}{S(0,0)}, \label{eq:beam}$$ where the measured signal $S$ is in cylindrical coordinates and where an offset, estimated in the outskirts of the beam ($\theta > 45$ arcsec), has been taken out. $B$ has units of a solid angle and represents the integrated beam efficiency, which levels off at large $\theta$. The beams for the two channels are much alike, except that the longer-wavelength one is slightly more extended because of diffraction effects. Saturn is not point-like (17 arcsec diameter) and slightly distorts the real beams. The integrated beamwidth, calculated from the integrated beam solid angle $\Omega_{\rm mb}$ as $\theta=\sqrt{4\Omega_{\rm mb}/\pi}$, is larger than the one–dimensional Gaussian FWHM (34 instead of 25 arcseconds), because of near–sidelobe wings. ### Skydips {#se:skd} Skydips must be performed in order to compare fluxes measured at different elevations $\beta$. If the optical depth at the zenith $\tau_0(\lambda)$ is known, all the measurements $F$ can be put on the same scale “outside” the atmosphere, yielding corrected measurements $F_c$. Assuming a plane-parallel geometry, this can be written as: $$F_c(\lambda)=F(\lambda) \exp\left({\tau_0(\lambda)\over\sin\beta}\right). \label{eq:skf}$$ Skydips were done in total-power mode without any modulation, by scanning the whole telescope at constant azimuth through 10 steps of elevation with a constant cosecant increment.
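The plane-parallel correction of Eq. (\[eq:skf\]), together with a skydip-style fit for the zenith opacity using the signal model of Eq. (\[eq:skd\]), can be sketched as follows. All numbers are synthetic, and a crude stdlib-only grid search stands in for the non-linear fit used in the paper:

```python
import math

def airmass_correct(flux, tau0, elevation_deg):
    """Eq. (skf): scale a measured flux to 'outside the atmosphere',
    assuming plane-parallel absorption with zenith opacity tau0."""
    return flux * math.exp(tau0 / math.sin(math.radians(elevation_deg)))

def skydip_signal(elevation_deg, c_off, bf_tatm, tau0):
    """Eq. (skd): total-power skydip signal versus elevation."""
    airmass = 1.0 / math.sin(math.radians(elevation_deg))
    return c_off + bf_tatm * (1.0 - math.exp(-tau0 * airmass))

# Synthetic 10-step skydip with constant cosecant increments. For fixed
# tau0 the offset C and B_f*T_atm enter linearly, so they are solved by
# ordinary least squares inside a grid search over tau0.
elevations = [math.degrees(math.asin(1.0 / (1.0 + 0.15 * i))) for i in range(10)]
data = [skydip_signal(el, 0.5, 40.0, 0.15) for el in elevations]

best_tau, best_err = None, float('inf')
for k in range(1, 400):
    tau = k * 1e-3
    xs = [1.0 - math.exp(-tau / math.sin(math.radians(el))) for el in elevations]
    n = len(xs)
    sx, sy = sum(xs), sum(data)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, data))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    inter = (sy - slope * sx) / n
    err = sum((inter + slope * x - y) ** 2 for x, y in zip(xs, data))
    if err < best_err:
        best_tau, best_err = tau, err
```

On this noiseless synthetic scan the grid search recovers the input opacity; a real reduction would fit noisy, drift-corrected data.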
The skydip technique was pioneered by Chini & Kreysa [@Chin86] at the IRAM 30 m telescope. Here, we did not need a chopper for reference. The signal $S_i$ in a given channel and at elevation $\beta_i$, for an average atmospheric temperature $T_{atm}$, can be written as: $$S_i= C+ B_f T_{atm}\left(1-\exp\left(-{\tau_0\over\sin\beta_i}\right)\right). \label{eq:skd}$$ At each wavelength (1.2 and 2.1 mm), the constant $C$ represents an arbitrary zero level. The forward beam efficiency $B_f={\rm d} S/{\rm d} T$ is compared to the main beam efficiency $B_m$ (see below, Paragraph \[se:sens\]). Before formula \[eq:skd\] can be applied, one has to correct for the drifts of the $100\,{\rm mK}$ base-plate temperature $T_{bath}$, induced by the increasing heat load that occurs during the skydip. The NbSi thermometer gives a sufficiently sensitive measurement of $T_{bath}$. With a simple linear correlation technique, whose coefficients are established independently of the skydip, the contribution ${\rm d} S/{\rm d} T_{bath}\times T_{bath}$ can be subtracted. Fig. \[fig:skd\] shows the non-linear fit of the data based on formula \[eq:skd\]. The correction applied to the data via Eq. \[eq:skf\] is deduced by interpolating between the two observed skydip values of $\tau_0(1.2\,{\rm mm})$ closest in time to the observation. It is found to be only of the order of 30% or less at 2.1 mm, where the SZ effect is expected. During the December 95 observations, the zenith optical depths at 1.2 mm varied between 0.1 and 0.3, which corresponds to 2–4 mm of precipitable water vapour. This is definitely an acceptable range of opacity values for SZ measurements. ### Sensitivity {#se:sens} The calibration is done with planets, which partially fill the beam. Mars (angular diameter 4.1 arcsec), Saturn (16.7 arcsec) and Jupiter (30.5 arcsec) have been used for the present observations, assuming blackbody emission with temperatures of respectively 214, 150, and 170$\,{\rm K}$.
After correcting for atmospheric opacity (Eq. \[eq:skf\]) and taking into account the beam dilution, the responsivity of each bolometer, $B_{mc}(\lambda)={\rm d} S/{\rm d} T$, is deduced. It represents the response of the bolometer to 1 Rayleigh-Jeans Kelvin filling the main beam. The noise level is measured on blank fields. The instrument noise on the sky was found to be above the bolometer noise (as measured in the laboratory) by a factor of 3. The additional noise is likely related to an imperfect isolation from vibrations in the Nasmyth cabin, which generates noise of microphonic origin by optical modulation of straylight. The final sensitivities are given in Table \[ta:sens\], for minimal sky noise and a zenith opacity of 0.1 at 1.2 mm. The FWHM is given by a Gaussian fit to the point-source profile, and $\Omega_{\rm mb}$ is the integrated beam solid angle up to a radius of 45 arcseconds. Brightness sensitivities are for a filled beam, and flux sensitivities are for a point source on axis. These best performances are degraded whenever the source is not at the zenith, or the sky is less transparent or noisier. The overall noise degradation can be by as much as a factor of 3 at 1.2 mm, but rarely exceeds 50% at 2.1 mm. In all cases, sky noise can be reduced by a decorrelation technique (see Section \[se:resu\]). The corresponding noise levels are given in parentheses in Table \[ta:sens\]. For the observation technique described in Section \[se:os\], the effective sensitivity is worse than in Table \[ta:sens\] by a factor of 2.

  Channel (mm)   FWHM (arcsec)   $\sqrt{4\Omega_{\rm mb}/\pi}$ (arcsec)   Temperature (mK s$^{1/2}$)   Brightness (MJy sr$^{-1}$ s$^{1/2}$)   Flux (mJy s$^{1/2}$)
  -------------- --------------- ---------------------------------------- ---------------------------- -------------------------------------- ----------------------
  1.2            $24\pm 3$       $34 \pm 2$                               25                           50                                     900
  2.1            $27\pm 3$       $34 \pm 2$                               13 (11)                      8 (7)                                  170 (140)

  : Best sensitivities obtained with Diabolo at the IRAM 30 m telescope in 1995. Sensitivities in parentheses are for the 2.1 mm channel after spectral decorrelation of the atmospheric noise (see text).[]{data-label="ta:sens"}

The ratio between the corrected main beam efficiency $B_{mc}(\lambda)$ (obtained from mapping planets) and the forward beam efficiency $B_f(\lambda)$ (obtained from skydips: Eq. \[eq:skd\]) is only $(25 \pm 5)$% at 1.2 mm and $(50 \pm 5)$% at 2.1 mm. These values are in agreement with the telescope efficiencies measured by Garcia-Burillo et al. [@Garc93]. The far sidelobe pattern implied by these results can be troublesome for the observation of weak sources. This question is addressed in the discussion of Section \[se:resu\]. Observing strategy {#se:os} ------------------ Four types of modulation were used simultaneously in order to limit the various low-frequency noises and monitor systematics. 1. The electronic AC modulation, referred to at the beginning of this section, avoids using electronics at frequencies below $10\,{\rm Hz}$ (the typical $1/f$ knee frequency). Here we modulate the bolometers at $36\,{\rm Hz}$ and the readout electronics deliver one sample per bolometer at a rate of $72\,{\rm Hz}$. 2. The wobbling secondary provides the second modulation, at a typical frequency of $1\,{\rm Hz}$ and with a beamthrow of 3 arcminutes. This allows the slowly varying background emission (sky and telescope) to be subtracted, by comparing the on-axis measurement with that from an offset position at the same elevation. 3. The whole telescope is nodded in azimuth every 20 seconds with an amplitude of 3 arcminutes, in an ABBA cycle which is repeated 4 times to form one scan.
This compensates for any imbalance between the two beams produced by the wobbling. Each scan obtained in this way lasts about 2 minutes (repointing overheads included). 4. Each scan above is done consecutively on a reference field offset from the target by a lag of a few minutes of time in RA ($R$ at coordinates $(\alpha - {\rm lag}, \delta)$), on target twice ($T$ and $T'$ at coordinates $(\alpha, \delta)$), and on a second reference field offset by the same number of minutes of time in RA in the other direction ($R'$ at coordinates $(\alpha + {\rm lag}, \delta)$). With this method, the reference fields are followed in the same way as the target in local coordinates. This ensures that sidelobe effects (ground pickup), if any, are subtracted. This technique has been used by Herbig et al. [@Herb95] for single-dish measurements of very weak sources with proper baseline subtraction. Reduction procedure {#ss:red} ------------------- The data reduction proceeds as follows. 1. Cosmic ray hits are removed from the data flow by interpolation, using a running median algorithm. Typical time constants are 10 milliseconds and the glitch rate is less than one hit per bolometer every 10 seconds, so that few samples are affected. The particles which deposit their energy in the bolometers are thought to be mainly muons, which are more abundant at the telescope site than in the laboratory. 2. The data are then synchronously demodulated with the help of the wobbler position (which is recorded along with the bolometer signals). The mean and dispersion values are computed for each position of the nodding cycle $ABBA$. Typical offsets (the imbalance between the positive and negative wobbler positions) are of the order of $0.2\,{\rm K}$. 3. A complete scan is reduced by averaging the differences of values between the two nodding positions: $v=\sum_1^N(v_A-v_B)/(2N)$. The noise on the final value is obtained from the dispersion of the individual differences.
The sensitivity quoted in Table \[ta:sens\] corresponds to the best noise figure obtained after a scan, corrected for the square root of the scan integration time. 4. The third channel, a thermometer measuring the base plate temperature $v_3=T_{bath}$, is treated in the same way as the two others in order to check for a possible systematic effect or additional noise induced by drifts of the thermal bath temperature. None has been found. 5. A linear combination of the first two bolometers, $v_4=v_2- r v_1$, is calculated. The ratio $r$ is chosen so as to minimise the noise of $v_4$. It can be shown that $r$ can be deduced from a simple linear correlation between $v_1$ and $v_2$, even if both measurements are noisy, and that $r$ is always smaller than the color of the sky emission. This procedure is specifically intended to remove sky noise from the second channel when little or no signal is expected in the first one (in particular, in the case of the SZ effect). A histogram of the values of $r$ during the December observations is shown in Fig. \[fig:atmos\]. The correlation coefficient $C$ tells us by how much we can reduce the initial noise of $v_2$ to that of $v_4$: $\sigma_4=\sigma_2\sqrt{1-C}$. The typical correlation coefficient $C$ of 0.4 leads to a small improvement in the signal-to-noise ratio of weak sources. On the other hand, the statistical distribution obtained with the corrected $v_4$ is much closer to Gaussian than those of the original channels, $v_1$ and $v_2$ (see Sect. \[ss:res\]). 6. For each channel, an elementary block of data, made of 4 scans ($R$, $T$, $T'$, $R'$), is reduced by computing an average signal $s= (v_T+v_{T'}-v_R-v_{R'})/2$ and a difference signal $d= (v_{T'}-v_T-v_R+v_{R'})$ with associated errors.
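The decorrelation in step 5 amounts to an ordinary least-squares regression of the 2.1 mm channel on the 1.2 mm one. A minimal sketch with synthetic data follows; the least-squares estimate $r = {\rm cov}(v_1,v_2)/{\rm var}(v_1)$ and all numbers are our illustration, not the actual pipeline.

```python
import random
import statistics

def decorrelate(v1, v2):
    """Return the ratio r and the combination v4 = v2 - r*v1, with r taken
    as the least-squares slope cov(v1, v2)/var(v1)."""
    n = len(v1)
    m1, m2 = sum(v1) / n, sum(v2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(v1, v2)) / n
    var1 = sum((a - m1) ** 2 for a in v1) / n
    r = cov / var1
    return r, [b - r * a for a, b in zip(v1, v2)]

# Synthetic data: a common sky-noise term, brighter in channel 1 (the
# "color"), plus independent detector noise in each channel.
random.seed(0)
sky = [random.gauss(0.0, 1.0) for _ in range(2000)]
v1 = [2.0 * s + random.gauss(0.0, 0.3) for s in sky]
v2 = [1.0 * s + random.gauss(0.0, 0.3) for s in sky]
r, v4 = decorrelate(v1, v2)
noise_before = statistics.pstdev(v2)
noise_after = statistics.pstdev(v4)
```

As stated above, $r$ comes out smaller than the channel color (here 0.5), because the detector noise in $v_1$ biases the slope downwards, and the residual noise of $v_4$ falls well below that of $v_2$.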
Four rich clusters of galaxies (A665, A2163, A2218 and CL0016+16) were selected for observation because of their small angular core radii (less than 2 arcminutes), well matched to a large millimetre antenna. In addition to the 1995 data, we gathered a few more hours of observation towards two of these clusters (A2163 and CL0016+16) in 1996. The observation and data reduction schemes were very different, in an attempt to measure SZ profiles. Tentative detections of the SZ effect {#se:resu} ===================================== Results {#ss:res} ------- The parameters of the observations towards the four clusters are given in Table \[ta:log\], and the full results are summarised in Tables \[ta:res1\] and \[ta:res2\] (antenna Rayleigh-Jeans equivalent temperature, corrected for atmospheric absorption). Rayleigh-Jeans temperature differences (the corrected signal $s$; see Sect. \[ss:red\]) for all cycles of measurements are plotted in Fig. \[fig:res665\] to Fig. \[fig:res0016\]. For the 1995 data, the final result for each cluster has been obtained by averaging the measurements from each cycle of four scans, weighted proportionally to the inverse square of the noise of the individual sets. For each measurement, we compare the internal error, obtained from this optimal averaging using the internal noise values, with the external error, obtained from the dispersion between the scan values. The square of the ratio between the two is the reduced $\chi^2$. The values listed in Tables \[ta:res1\] and \[ta:res2\] show the internal consistency of the measurements and their estimated noise, except for the first channel, where the $\chi^2$ value is systematically larger than unity. This discrepancy can be explained by the statistics of the atmospheric noise, which is not Gaussian and affects the 1.2 mm channel more than the 2.1 mm one. To first order, it should not much affect the decorrelated channel, as observed.
For the 1996 data, which use a different scanning technique, results have been obtained as the difference between the average signal within a 30 arcsecond region centered on the target source and the average signal more than 40 arcseconds from the target. A significant negative signal is detected in the 2.1 mm decorrelated channel for the three clusters A665, A2163 and CL0016+16. This detection is particularly significant for the latter cluster. If we interpret these measurements as due to the Sunyaev-Zel’dovich effect, the obtained values can be converted from antenna temperature to the $y$ parameter (see [@Suny70; @Suny80]), neglecting the spectral dependence on the cluster gas temperature ([@Reph95; @Giar95]). The final results (1995, 1996, and the combination of the two years) are given in Table \[ta:sz\]. The correction $\eta$ for the 1995 beam dilution is calculated by convolving the measured beam profile obtained on Saturn (modulated at 3 arcmin) with a theoretical SZ profile using core parameters from X-ray measurements. We do not detect any significant signal in the blank field positions ($v_R+v_{R'}$): in contrast to centimetre radio observations, no systematic signal is seen in the blank field measurements. Indeed, we find that the average signal obtained by keeping only the on-source component ($v_T+v_{T'}$) is about $\sqrt{2}$ times more significant than that shown in Tables \[ta:res1\] and \[ta:res2\]. This gives us confidence in the final results for the $y$ parameters of Table \[ta:sz\]. Moreover, for A2163 and CL0016+16, the 1995 and 1996 results are compatible with each other. An additional outcome of the observations is the set of temperature differences from blank field measurements ($d$; see Sect. \[ss:red\]). No signal is detected in any of the 4 differences (around the 4 observed clusters) at the $\Delta T/T= 2\times 10^{-4}$ level ($1\sigma$).
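The combined 1995+1996 values in Table \[ta:sz\] follow from inverse-variance weighting of the two years. A quick check (our own arithmetic, in units of $10^{-4}$) reproduces the tabulated A2163 and CL0016+16 entries:

```python
def weighted_mean(values, errors):
    """Inverse-variance (noise-weighted) average and its internal error."""
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mean, (1.0 / sum(w)) ** 0.5

# A2163: combine the 1995 and 1996 central y values
y_a2163, e_a2163 = weighted_mean([4.99, 4.60], [1.97, 2.00])
# CL0016+16
y_cl0016, e_cl0016 = weighted_mean([3.30, 2.90], [0.90, 1.60])
```

Both recover the Table \[ta:sz\] entries, $4.80\pm 1.40$ and $3.20\pm 0.78$, to rounding precision.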
If this result were improved by repeated measurements on a larger number of clusters, it could yield interesting constraints on the level of CMB anisotropies at small angular scales (30 arcseconds to a few arcminutes), in a wavelength range where the smallest contamination from radio and infrared galaxies [@Fran91] is expected. None of the clusters shows a signal at 1.2 mm within the observational errors. In principle, both the thermal and the kinetic SZ effect could contribute to this channel, but the upper limit that we can put on cluster radial velocities is not stringent enough to be relevant. No galactic dust emission is detected either.

  Cluster     $\eta$   $y_0\,/10^{-4}$ (1995)   $y_0\,/10^{-4}$ (1996)   $y_0\,/10^{-4}$ (all)   S/N
  ----------- -------- ------------------------ ------------------------ ----------------------- -----
  A665        0.498    $2.92\pm 1.15$                                    $2.92\pm 1.15$          2.5
  A2163       0.548    $4.99\pm 1.97$           $4.60\pm 2.00$           $4.80\pm 1.40$          3.4
  A2218       0.607    $-0.37\pm 2.21$                                   $-0.37\pm 2.21$         0.2
  CL0016+16   0.668    $3.30\pm 0.90$           $2.90\pm 1.60$           $3.20\pm 0.78$          4.1

  : Final calibrated results[]{data-label="ta:sz"}

  Cluster     $z$     $T_e\, ({\rm keV})$   $\theta_c$ (arcmin)   $\beta$   $Y \; (10^{-4}\,{\rm arcmin}^2)$   $M_g / 10^{14}M_\odot$
  ----------- ------- --------------------- --------------------- --------- ---------------------------------- ------------------------
  A665        0.182   8.2                   1.60                  0.66      439                                $20.2 \pm 8.0$
  A2163       0.201   14.6                  1.20                  0.62      491                                $14.6 \pm 4.2$
  A2218       0.171   6.72                  1.00                  0.65      $< 408$                            $< 20.9 \,(3\sigma)$
  CL0016+16   0.541   8.22                  0.64                  0.68      70                                 $11.1 \pm 2.7$

  : Physical parameters of the observed clusters taken from [@Birk91b; @Elba95; @Birk94; @Neum97; @Hugh95]. The total gas mass is computed from the present measurements.
Uncertainties are statistical only.[]{data-label="ta:amas"} Interpretation -------------- The mass of hot gas can be directly deduced from these observations by using: $$\begin{aligned} M_g= 8.2\times 10^{14}\,M_\odot \left(\frac{h}{0.5}\right)^{-2} \left(\frac{Y}{10^{-4}\,{\rm arcmin}^2}\right) \nonumber\\ \times\left(\frac{k T_e}{10\,{\rm keV}}\right)^{-1} \frac{(\sqrt{1+z} -1)^2}{(1+z)^3}\,, \label{eq:gasm}\end{aligned}$$ a formula derived by De Luca et al. [@Delu95]. Here we have assumed $\Omega_0=1$ and $h= H_0 / (100\,{\rm km\,s^{-1}\,Mpc^{-1}})$, and the measurement $y_0$ has been converted into $Y = \int y\, d\Omega = y_0 \Omega_{\rm eff}$. The effective solid angle $\Omega_{\rm eff}$ is calculated with $$\begin{aligned} {\Omega_{\rm eff}\over{\theta_c^2}}= f_{\rm geom}= 2\pi\int x\, {\rm d}x\,(1+x^2)^{(1-3\beta)\over 2}\, , \end{aligned}$$ where $x=\theta/\theta_c$, assuming a King profile with an angular core radius $\theta_c$. The resulting masses are given in Table \[ta:amas\]. The cluster parameters $\theta_c$, $\beta$, and $T_e$ have been taken from recent ROSAT X-ray measurements. These estimated masses do not depend on the absolute X-ray fluxes. Our result for A2163, $y_0 = (4.8 \pm 1.4) \times 10^{-4}$, is in agreement with the determination by Wilbanks et al. [@Wilb94] of $y_0 = (3.78^{+0.74}_{-0.65}) \times 10^{-4}$ and that of Holzapfel et al. [@Holz97] of $y_0 = (3.73^{+0.47}_{-0.61}) \times 10^{-4}$, both obtained at the same wavelength ($2.1\,{\rm mm}$) as the present measurements, with the 1.4’ beam (2’ throw) of the SuZIE experiment. It is also in agreement with the submillimetre detection by the SPM photometer onboard the PRONAOS balloon (with a 3.7’ beam and 6’ beamthrow). A detailed discussion of the combined bolometer results for A2163 is given by Lamarre et al. [@Lama98]. The gas mass we deduce, $(14.6 \pm 4.2) \times 10^{14} M_\odot$, is very close to the X-ray determined gas mass [@Elba95] of $(14.3 \pm 0.5) \times 10^{14} M_\odot$.
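Eq. \[eq:gasm\] can be checked directly against Table \[ta:amas\]. A short helper (the function name is ours; $Y$ is in units of $10^{-4}\,{\rm arcmin}^2$ and $kT_e$ in keV):

```python
import math

def gas_mass_1e14(Y, kTe, z, h=0.5):
    """Hot-gas mass from Eq. (gasm), in units of 1e14 M_sun.
    Assumes Omega_0 = 1, as in the text."""
    return (8.2 * (h / 0.5) ** -2 * Y * (kTe / 10.0) ** -1
            * (math.sqrt(1.0 + z) - 1.0) ** 2 / (1.0 + z) ** 3)

m_a2163 = gas_mass_1e14(491.0, 14.6, 0.201)
m_cl0016 = gas_mass_1e14(70.0, 8.22, 0.541)
```

With the Table \[ta:amas\] inputs this returns $\simeq 14.6$ for A2163 and $\simeq 11.1$ for CL0016+16, matching the quoted $M_g$ values for $h=0.5$.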
Our most significant detection (at the 4$\sigma$ level) concerns the distant cluster CL0016+16, at a redshift of 0.541. This cluster is the highest-redshift object detected through the SZ effect in the millimetre domain. Our result of $y_0 = (3.20 \pm 0.78) \times 10^{-4}$ is larger than, but compatible with (within $1.3\sigma$), the central parameter $y_0= 2.18\times 10^{-4} (h/0.5)^{-1/2}$ predicted by Birkinshaw [@Birk98] using ROSAT X-ray data. It is in agreement with the SZ radio determination of Hughes and Birkinshaw [@Hugh98], $y_0 = (2.20 \pm 0.37) \times 10^{-4}$, obtained with a larger beam (1.8’ with a 7’ beam throw; see also [@Birk91a]), and more marginally with the SZ map from the interferometer experiment of Carlstrom et al. [@Carl96], $y_0 = (1.31 \pm 0.12) \times 10^{-4}$, which spans 1 to 10’ angular scales. Our gas mass estimate of $M_g = (11.1 \pm 2.7) \times 10^{14} M_\odot$ is twice as large as the X-ray gas mass deduced by Neumann and Böhringer [@Neum97], but still within errors. For A665, the observations were centered on the IPC X-ray center as given by Birkinshaw, Hughes & Arnaud [@Birk91b], which is offset by 2’ from the nominal Abell center. Although less significant, the measured central brightness decrement $y_0 = (2.92\pm 1.15) \times 10^{-4}$ is in agreement with the more accurate value of $(1.69\pm 0.15) \times 10^{-4}$ determined by Birkinshaw et al. [@Birk91b], albeit in the radio domain. The integration time on A2218 was clearly insufficient to reach a significant noise level for that cluster. The upper limit that we obtain is compatible with the previously reported radio measurements [@Birk91a; @Jone93]. Perspective ----------- We have reported here the highest angular resolution (30”) observations of the SZ effect, on at least two clusters. These observations could be achieved thanks to the large Pico Veleta millimetre antenna and a total on-source integration time of fifty hours.
It is clear that SZ profiles, or even maps, of rich clusters can be measured with the Diabolo instrument, given sufficient winter integration time, once improvements in the overall efficiency are made (these are currently underway). These observations are complementary to X-ray measurements in the sense that they directly sample the gas pressure with a similar angular resolution (the future XMM and AXAF missions will have few-arcsecond resolution). High resolution SZ observations in the millimetre atmospheric windows will also grow in importance after the unbiased survey of SZ clusters by the Planck Surveyor satellite. For resolved clusters the amplitude of the SZ distortion is independent of distance, so that high-redshift clusters are well-suited targets for millimetre observations of the SZ effect, whereas X-ray measurements of gas masses are more difficult. We wish to thank the IRAM staff, especially for their help during the setup of the instrument, Bernard Fouilleux for his help during the observations, and Bernard Lazareff for his support of the mission. We thank the whole Diabolo team for the continuous improvements brought to the instrument, with special thanks to Jean-Pierre Crussaire, Gerard Dambier, Jacques Leblanc, Bernadette Leriche, and Marco De Petris, along with the Testa Grigia MITO team for a previous test of the instrument. INSU, IAS, CESR, CRTBT, and the GdR Cosmologie contributed financially to this instrument.

  : Results towards A665 and A2163: signal (noise), $\chi^2$ (degrees of freedom), and probability for each channel.[]{data-label="ta:res1"}

  : Results towards A2218 and CL0016+16: signal (noise), $\chi^2$ (degrees of freedom), and probability for each channel.[]{data-label="ta:res2"}

[20]{} Aghanim, N., De Luca, A., Bouchet, F. R., Gispert, R. and Puget, J.-L., 1997, A&A, 325, 9 Andreani, P., Pizzo, L., Dall’Oglio, G., 1996, ApJL, 459, L49 Barbosa, D., Bartlett, J.
G., Blanchard, A. and Oukbir, J., 1996, A&A, 314, 13 Benoît, A., Zagury, F., Coron, N., 1998, A&A Suppl. Ser., to be submitted Bersanelli, M., et al., 1996, COBRAS/SAMBA: Report on the Phase A Study, ESA report D/SCI(96)3. Birkinshaw, M., 1979, MNRAS, 187, 847 Birkinshaw, M., Gull, S. F., & Moffet, A. T. 1981, ApJL, 251, L69 Birkinshaw, M., 1991 in Proc. of [*Physical Cosmology*]{}, J. Trân Than Vân, ed., (Frontieres:Gif-sur-Yvette) Birkinshaw, M., Hughes, J. P., & Arnaud, K. A. 1991, ApJ, 379, 466 Birkinshaw, M., & Hughes, J. P., 1994, ApJ, 420, 33 Birkinshaw, M., 1998, Physics Reports, in press Cavaliere, A. and Danese, L. and De Zotti, G., 1979, A&A, 75, 322 Carlstrom, J. E., Joy, M., & Grego, L., 1996, ApJL, 456, L75 Chini, R., Kreysa, E., Mezger, P. G., & Gemuend, H.-P. 1986, A&A, 154, 8 De Luca, A., Désert, F.-X., and Puget, J.-L. 1995, A&A, 300, 335 Donahue, M., et al., submitted to ApJ (astro-ph 970710) Elbaz, D., Arnaud, M., & Böhringer, H. 1995, A&A, 293, 337 Fischer, M. L., & Lange, A. E., 1993, ApJ, 419, 433 Franceschini, A., De Zotti, G., Toffolatti, L., et al., 1991, A&AS, 89, 285 Gaertner, S., Benoit, A., Lamarre, J.-M. 1997, A&AS, 126, 151 Garcia–Burillo, O. S., Guelin, M., & Cernicharo, J., 1993, A&A, 274, 123 Giard, M., 1995, “Proceedings of the XVth Moriond Astrophysics meeting”, J. Tran Thanh Van eds., éditions Frontières. Grainge, K., Jones, M., Pooley, G., Saunders, R., & Edge, A. 1993, MNRAS, 265, L57 Haehnelt, M., 1996, in “Proceedings of the XVIth Moriond Astrophysics meeting”, F. R. Bouchet, R. Gispert, B. Guiderdoni and J. Tran Thanh Van eds., éditions Frontières. Herbig, T., Lawrence, C. R., Readhead, A. C. S., & Gulkis, S. 1995, ApJL, 449, L5 Hughes, J. P., Birkinshaw, M., & Huchra, J. P., 1995, ApJ, 448, L93 Hughes, J. P., & Birkinshaw, M., 1998, ApJ, submitted Holzapfel, W., Arnaud, M., Ade, P. A. R., et al., 1997, ApJ, 480, 449 Jones, M., Saunders, R., Alexander, P., 1993, Nature, 365, 320 Kobayashi, S., Sasaki, S., Suto, Y., 1996, Publ. Astron. Soc.
Jap., 48, L107 Lamarre, J.-M., Giard, M., Pointecouteau, E., et al., 1998, ApJL, in press Neumann, D. M., & Böhringer, H., 1997, MNRAS, 289, 123 Penzias, A. A., Wilson, R. W., 1965, ApJ, 142, 419 Pillbratt, G., 1997, in Proceed. of the ESA Symp. “The Far Infrared and Submillimetre Universe”, ESA SP-401, 7 Pizzo, L., Andreani, P., Dall’Oglio, G. 1995, Experim. Astron., 6, 249 Rephaeli, Y., 1995, ARA&A, 33, 541 Saunders, R., 1995, Ap. Lett. Comm., 32, 339 Silk, J. and White, S., 1978, ApJ, 226, L103 Sunyaev, R. A., & Zel’dovich, Ya. B., 1970, Ap. Sp. Sci., 7, 3 Sunyaev, R. A., & Zel’dovich, Ya. B., 1980, MNRAS, 190, 413 White, S. D. M., Silk, J., & Henry, J. P. 1981, ApJL, 251, L65 Wilbanks, T. M., Ade, P. A. R., Fischer, M. L., Holzapfel, W. L., & Lange, A. E. 1994, ApJL, 427, L75 Lea, S. M., Silk, J., Kellogg, E., Murray, S., 1973, ApJL, 184, L105
--- abstract: 'We investigate an optically driven quantum computer based on electric dipole transitions within coupled single-electron quantum dots. Our quantum register consists of a freestanding n-type pillar containing a series of pairwise-coupled asymmetric quantum dots, each with a slightly different energy structure, and with grounding leads at the top and bottom of the pillar. Asymmetric quantum wells confine electrons along the pillar axis and a negatively biased gate wrapped around the center of the pillar allows for electrostatic confinement in the radial direction. We self-consistently solve coupled Schrödinger and Poisson equations and develop a design for a three-qubit quantum register. Our results indicate that a single gate electrode can be used to localize a single electron in each of the quantum dots. Adjacent dots are strongly coupled by electric dipole-dipole interactions arising from the dot asymmetry, thus enabling rapid computation rates. The dots are tailored to minimize dephasing due to spontaneous emission and phonon scattering and to maximize the number of computation cycles. The design is scalable to a large number of qubits.' address: | Department of Electrical and Computer Engineering, North Carolina State University\ Raleigh, North Carolina 27695-7911 author: - 'G. D. Sanders, K. W. Kim, and W. C. Holton' title: 'A scalable solid-state quantum computer based on quantum dot pillar structures' --- Introduction ============ The possibility that a computer with exceptional properties could be built employing the laws of quantum physics has stimulated considerable interest in searching for useful algorithms and realizable physical implementations. Two useful algorithms, exhaustive search [@bib:Grover97] and factorization [@bib:Shor94], have been discovered; others, including the suggestion that quantum computers will prove useful to model quantum systems, are being sought.
Meanwhile, various physical implementations are being explored, including trapped ions, [@bib:Cirac95] cavity quantum electrodynamics, [@bib:Pellizzari95] ensemble nuclear magnetic resonance, [@bib:NMR] small Josephson junctions, [@bib:Shnirman97] optical devices incorporating beam splitters and phase shifters, [@bib:Cerf98] and a number of solid-state systems based on quantum dots. [@bib:BarencoDeutsch95; @bib:Wang; @bib:Kane; @bib:Tanamot099; @bib:Loss98] Although the advantages of quantum computing are enormous for particular key applications, the requirements for their implementation are extremely stringent, perhaps especially so for solid-state systems. Nevertheless, solid-state quantum computers are very appealing relative to other possible implementation schemes because of the well-known ability to customize the design through the use of artificially structured materials, and because of the probable scalability of the resulting design. For example, integrated circuit manufacturing technology would be immediately applicable to quantum computers of the proper implementation; such designs would not only be scalable to smaller dimensions along the “semiconductor learning curve”, but large ensembles of “identical” quantum computers could also be manufactured and individually fine-tuned electrically. To date, no solid-state implementation of quantum computing has been demonstrated. In this paper, we investigate a solid-state quantum computer implementation that is amenable to manufacturing with integrated circuit technology. We develop a three-dimensional (3D) device model and self-consistently solve coupled Schrödinger and Poisson equations to generate a quantum computer design for a three-qubit quantum register that is based on pairwise-coupled asymmetric III-V quantum dots. The design is optimized for a long coherence time and a rapid computation rate.
Our results indicate that this structure may provide a realistic scalable candidate for quantum computing in solid-state systems. Proposed Structure ================== The proposed quantum dot quantum computer (see Fig. \[pillar\]) consists of a pillar structure composed of a chain of asymmetric quantum dots separated by intervening layers of higher bandgap composition fabricated in a GaAs/AlGaAs technology by means of a sequence of planar MBE growth steps and subsequent etching to form the pillar. A sheath of similar AlGaAs composition is then grown surrounding the pillar and a wrap-around gate electrode deposited. A drain (source) is formed at the top (bottom), the series of asymmetric quantum dots are in the center region, and the gate surrounds the region of the pillar containing the quantum dots. Tarucha et al. [@bib:Tarucha96] have reported similar n-type single electron transistor (SET) structures. Electron confinement along the pillar axis is produced by the band gap discontinuity of the dot structure. Encasing the quantum dot structure in the pillar core by the cylindrical sheath and the gate electrode provides confinement in the radial direction. By applying a negative bias that depletes carriers near the surface, an additional parabolic electrostatic potential is formed that allows for tuning of the radial confinement and localization of one electron per dot. The simultaneous insertion of a single electron per dot is accomplished by lining up the quantum dot ground state levels so that they lie close to the Fermi level; a single electron is confined in each dot over a finite range of the gate voltage due to shell filling effects. [@bib:Tarucha96] Thus, the pillar consists of a vertical stack of coupled asymmetric GaAs/AlGaAs quantum dots of differing size and composition so that each dot possesses a distinct energy structure. 
Qubit states $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ are based on the ground and first excited state of the single electron within each quantum dot. Overall, the parameters of the structure can be chosen to produce a well-resolved spectrum of distinguishable qubits. The asymmetric dots produce large built-in electrostatic dipole moments between the ground and first excited state, and electrons in adjacent dots are coupled through the electric dipole-dipole interaction, while coupling between non-adjacent dots is significantly weaker. This produces the desired quantum computer consisting of a linear array of binary states (qubits) with pairwise pillar-axis coupling between adjacent qubits. [@bib:collins] In addition to energy tuning, the asymmetry of each quantum dot can be designed so that dephasing due to electron-phonon scattering and spontaneous emission is minimized. The combination of strong dipole-dipole coupling and long dephasing times makes it possible to perform many computational steps before loss of coherence; indeed, we believe it possible to design this device so that error correction substantially prevents coherence loss. Quantum computations are performed by means of a series of coherent optical pulses in the far infrared. Final readout of the amplitude and phase of the qubit states can be achieved through quantum state holography. Amplitude and phase information are extracted by mixing the final state with a reference state generated in the same system by an additional delayed laser pulse and detecting the total time- and frequency-integrated fluorescence as a function of the delay. [@bib:Leichtle] Extracting the final state information using quantum state holography requires multiple experiments, one for each delay, as described in Ref. . Thus, the computation must be performed several times before an answer is obtained.
This is no real problem, since the number of repetitions needed is only on the order of 40 or so, independent of the number of computational steps in a given quantum algorithm. Through the use of integrated circuit manufacturing technology, it is possible to simultaneously fabricate a large array of “identical” pillar quantum dot quantum computers, on the order of $ 10^{10} $ per wafer. Each of these quantum registers could be electrically connected through deposited interconnect in such a manner that each could be individually tuned to produce an array of identical units. In general, inhomogeneity among the quantum dots will result in slightly different energy levels. Sherwin et al. [@bib:Sherwin] have recently pointed out that one can perform accurate qubit operations in an inhomogeneous population of quantum dots arising from quenched disorder due to static charged defects, for example, provided that each SET is independently calibrated. This calibration can be done by performing simple gate operations and tuning the gate electrodes appropriately. Efficient optical coupling to the resulting ensemble can be achieved through optical light guiding as suggested in Ref. . By this means direct observation of fluorescence is possible. Quantum computations are performed by means of a series of coherent optical pulses in the far infrared, and may be carried out in complete analogy with the operation of an NMR quantum computer. [@bib:cory] It should be remarked that while our scheme resembles NMR ensemble quantum computation in its use of a series of optical pulses to perform quantum logic gates, it differs in that we use a collection of single-electron transistors to obtain a stronger signal-to-noise ratio in the readout phase. In principle, the quantum computation could be done with only a single SET structure if the readout measurements were sufficiently sensitive.
Device Model ============ In the context of studies of the Coulomb blockade in self-organized quantum dots and planar single-electron transistors, self-consistent calculations of electronic structure, shell filling effects, electron-electron interaction, Coulomb degeneracy, and Coulomb oscillation amplitudes have been carried out for various quantum dot structures. [@bib:Averin91; @bib:Stopa93; @bib:Macucci93; @bib:Jovanovic94; @bib:Stopa96; @bib:Todorovic97; @bib:Macucci97; @bib:Nagaraja97; @bib:Fonseca98] Our quantum register can be analyzed using methods similar to those used to study the self-consistent electronic structure in single-electron transistors. The problem we address is similar to those treated by authors interested in obtaining current-voltage characteristics and studying Coulomb oscillations in single-electron transistors over a wide range of gate biasing and shell filling conditions. [@bib:Averin91; @bib:Stopa93; @bib:Macucci93; @bib:Jovanovic94; @bib:Stopa96; @bib:Todorovic97; @bib:Macucci97; @bib:Nagaraja97; @bib:Fonseca98] In our case, we are interested in obtaining the self-consistent electrostatic potential and the electronic eigenstates in an equilibrium configuration in which the source and drain are grounded and the gate electrode is negatively biased. The electrostatic potential, $V(\vec{r})$, is obtained by solving the Poisson equation for n-doped semiconductors [@bib:Jovanovic94; @bib:Fonseca98] $$\nabla^{2} V(\vec{r}) = -\frac{4 \pi}{\varepsilon} \ q \left[ N_{D}^{+}(\vec{r}) - n(\vec{r}) \right]\,.$$ In the Poisson equation, $q$ is the absolute value of the electron charge, $\varepsilon$ is the static dielectric constant, $n(\vec{r})$ is the electron concentration, and $N_{D}^{+}(\vec{r})$ is the known concentration of ionized donors in the structure. For the dielectric constant, we adopt the GaAs value $\varepsilon = 12$.
[@bib:Pankove] The Poisson equation is solved subject to boundary conditions on the electrostatic potential, $V(\vec{r})$. At the interfaces between the semiconductor and the source, drain and gate electrodes, $V(\vec{r})$ is equal to the voltage applied to the corresponding electrode, while at the semiconductor-vacuum interfaces, the normal derivative of $V(\vec{r})$ vanishes. Following Ref. , the global electron concentration, $n(\vec{r})$, in the device is obtained by partitioning the pillar structure into “bulk” and “quantum” regions. In the “bulk” regions far from the quantum wells, i.e., the source and drain regions, electrons are treated in the Thomas-Fermi approximation and the electron concentration is given by [@bib:Bethe] $$n(\vec{r}) = \left\{ \begin{array}{ll} \frac{1}{3 \pi^{2}} \left[ \frac{2m_{e}^{*}}{\hbar^2}\ (\mu-U(\vec{r})) \right]^{3/2} & \mbox{\ if \ $U(\vec{r}) < \mu$} \\ 0 & \mbox{\ otherwise} \end{array} \right.$$ where $\mu$ is the chemical potential and $U(\vec{r})$ is the effective electron potential. The chemical potential, $\mu$, is determined through the requirement that overall charge neutrality be maintained in the bulk regions, i.e., the chemical potential is adjusted until $$\int \left( \ n(\vec{r}) - N_{D}^{+}(\vec{r}) \ \right) \ d\vec{r} = 0$$ where the integration is carried out over the bulk source and drain regions. The effective potential, $U(\vec{r})$, in the bulk regions includes the Hartree potential, $U_{H}=-q \ V(\vec{r})$, and the conduction band offset, $\Delta E_{c}$, which depends on the local Al concentration, $x$. Thus, $$U(\vec{r}) = -q \ V(\vec{r}) + \Delta E_{c}$$ where the conduction band offset, $\Delta E_{c}$, is taken to be 60% of the difference between the $Al_{x}Ga_{1-x}As$ and $GaAs$ bandgaps.
Using the bandgap variation of $Al_{x}Ga_{1-x}As$ determined by Lee et al., [@bib:Lee] we obtain the following expression for the conduction band offset as a function of the local Al concentration, $x$: $$\Delta E_{c}= 0.6 \ (1155 \ x +370 \ x^{2}) \ \text{meV}$$ In the “quantum” regions containing the quantum wells, the electron concentration, $n(r)$, is determined by the electron wavefunctions, $\psi_{i}(r)$, and energies, $E_{i}$, through the relation $$n(\vec{r}) = \sum_{i} n_{i} \ \arrowvert \psi_{i}(\vec{r}) \arrowvert^{2}$$ The electron occupancy in each level, $n_{i}$, is a function of the electron energy and the temperature. The electron wavefunctions and energy levels, $E_{i}$, are obtained by solving the Schrödinger equation in the effective mass approximation $$\left[-\frac{\hbar^{2}}{2 \ m_{e}^{*}} \nabla^{2}+U(\vec{r})-E_{i} \right] \ \psi_{i}(\vec{r}) = 0$$ The electron potential, $U(\vec{r})$, in the quantum regions is given by $$U(\vec{r}) = -q \ V(\vec{r}) + \Delta E_{c} + U_{xc}(\vec{r})$$ where $U_{xc}(\vec{r})$ is the exchange-correlation potential of Perdew and Zunger. [@bib:PerdewZunger] In the quantum register discussed in the next section, the gate voltage is negatively biased in such a way that a single electron is strongly localized in each electrostatically confined quantum dot. The radial confinement potential is strong enough that the lowest few electron wavefunctions are strongly localized near the center of the pillar and die away far from the semiconductor-electrode interface. In our design, the quantum wells are wide enough and the barriers between the quantum wells are thick enough so that the lowest few electron wavefunctions do not penetrate to the center of the barriers separating the quantum wells. Since all the wavefunctions of interest vanish at the center of these barriers, we can divide the quantum well region into several regions (one for each qubit). 
These regions are taken to be cylinders stacked along the pillar axis with top and bottom surfaces located at the centers of the barriers between adjacent wells. We solve the Schrödinger equation in each dot separately subject to the boundary condition that all wavefunctions vanish at the region boundaries. Due to the cylindrical symmetry of the structure, the 3D Schrödinger equation can be reduced to a 2D equation in cylindrical coordinates. One might try to solve the 2D Schrödinger equation by finite differencing the partial differential equation and solving the resulting matrix eigenvalue equation. The size of the matrix to be diagonalized is equal to the number of interior mesh points in the 2D grid and this is much too large to be handled easily. Other authors have taken this brute-force approach to solving the Schrödinger equation in self-consistent Poisson-Schrödinger problems with the result that solving the Schrödinger equation is the most time consuming part of the computation. [@bib:Jovanovic94] We find that it is possible to do better. In solving the 2D Schrödinger equation, we first approximate $U(\rho,z)$ in each quantum dot by a separable potential $$U(\rho,z) \approx U_{s}(\rho,z) \equiv U_{r}(\rho) + U_{z}(z)$$ where the axial potential is defined as $$U_{z}(z) = \frac{2}{R^{2}} \int_{0}^{R} U(\rho,z) \ \rho \ d\rho$$ and the radial potential is given by $$U_{r}(\rho) = \frac{1}{\cal L} \int_{0}^{\cal L} \left( \ U(\rho,z) - U_{z}(z) \ \right) \ dz$$ In these last two expressions, $R$ and ${\cal L}$ are the radius and height of the cylindrical region over which $U(\rho,z)$ is defined in each dot. With the separable potential approximation, the 2D Schrödinger equation can be separated into two 1D equations which can be cast as finite difference eigenvalue equations and solved numerically for the electron energies and wavefunctions.
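The two averaging integrals above reduce to weighted means once $U(\rho,z)$ is sampled on a grid. A minimal numerical sketch (the discrete $\rho \, d\rho$ weighting and the function name are illustrative conveniences, not taken from the paper):

```python
import numpy as np

def split_separable(U, rho, z):
    """Approximate U(rho, z) by U_s = U_r(rho) + U_z(z) as in the text:
    U_z is the area-weighted radial average of U, and U_r is the axial
    average of the remainder. U is sampled as U[i, j] = U(rho_i, z_j)."""
    w = rho / rho.sum()                    # discrete stand-in for (2/R^2) rho drho
    Uz = (w[:, None] * U).sum(axis=0)      # radial average -> function of z
    Ur = (U - Uz[None, :]).mean(axis=1)    # axial average of the remainder
    return Ur, Uz
```

For a potential that is already separable, the residual $U - U_{s}$ vanishes identically, so the approximation is exact in that limit; for the true dot potential the residual is the perturbation treated next.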
The resulting 2D wavefunctions are the best product wavefunctions that approximate the solution of the 2D Schrödinger equation in each qubit. The electronic states in the separable potential approximation in our cylindrical pillar are labeled by three quantum numbers $(n_{\rho}$, $n_{\phi}$, $n_{z})$ which specify the number of nodes in the product wavefunctions associated with cylindrical coordinates $\rho$, $\phi$, and $z$. In this notation, the qubit state $\arrowvert 0 \rangle$ is denoted $(0,0,0)$ while $\arrowvert 1 \rangle$ is denoted $(0,0,1)$. We find that the separable wavefunctions are reasonable approximations to the true wavefunctions since we are starting with a separable potential which is already close to the true potential in some average sense. We next obtain the exact energies and wavefunctions of the original non-separable Schrödinger equation by treating the residual $U(\rho,z)-U_{s}(\rho,z)$ as a perturbation and expanding the exact wavefunctions as a sum of separable wavefunctions. Our expansion of the true wavefunctions in terms of separable wavefunctions is rapidly converging and we find that the dominant terms in the expansion of the true wavefunctions are the separable wavefunctions of the same symmetry. Our approach to solving the 2D Schrödinger equation is fast and most of the computing time is spent solving the Poisson equation. To complete the specification of the electron charge density in the quantum dots, it is necessary to compute the electron occupation numbers, $n_{i}$. One might expect that $n_{i}$ would be given by the Fermi-Dirac distribution and indeed this would be the case if the electrons in the dots were delocalized and in tunneling contact with the leads. In this case, the qubits could exchange electrons with their environment and the total number of electrons in the dot $N = \sum_{i} n_{i}$ could take on non-integer values.
But clearly this is not tolerable in a quantum computer and we must carefully arrange things so the dot wavefunctions exhibit a high degree of localization. In this situation, only an integer number of electrons can occupy the dot and this constraint gives rise to what is known as the Gibbs distribution. The number of electrons in the dot is determined by minimizing the Gibbs free energy over the integer values of $N$. The Gibbs free energy is $F(N) = -kT \ \ln[Z(N)]$, where the grand canonical partition function, $Z(N)$, is given by [@bib:Jovanovic94; @bib:Nagaraja97] $$Z(N) = \sum_{\{n_{i}\}} \exp \left[ -\frac{\sum_{i} n_{i}E_{i} - E_{H}(N) - \mu N} {kT} \right]$$ The lack of diffusive contact between the quantum dots and the rest of the device means that the chemical potential, $\mu$, is determined by electrons in the leads and contacts. The summation in $Z(N)$ is carried out over all electron configurations $\{n_{i}\}$ for which $\sum_{i} n_{i} = N$. Double counting the Coulombic interaction is avoided by subtracting the Hartree energy $E_{H}(N)$ for the $N$ electrons. The Hartree energy appearing in the partition function is [@bib:Nagaraja97] $$E_{H}(N) = \frac{1}{8 \pi \varepsilon} \int \frac{n_{e}(\vec{r}) \ n_{e}(\vec{r}')} {\arrowvert \vec{r}-\vec{r}' \arrowvert} \ d\vec{r} \ d\vec{r}'$$ where $n_{e}(\vec{r})$ is the charge density in the quantum dot and the integration is restricted to the dot region. Directly solving for the Hartree energy by performing a double integral over the quantum dot charge density is too time consuming and impractical due to the presence of the singularity in the integrand at $\vec{r} = \vec{r}'$.
An alternative method of calculating the Hartree energy is to use the equivalent expression $$E_{H}(N) = \frac{1}{2} \int V_{e}(\vec{r}) \ n_{e}(\vec{r}) \ d\vec{r}$$ where the potential $V_{e}(\vec{r})$ is obtained by solving the Poisson equation in the pillar using the charge density, $n_{e}(\vec{r})$, [*in the quantum dot*]{}. [@bib:Fonseca98] The boundary condition on $V_{e}(\vec{r})$ at the surface of the pillar is determined by asymptotically expanding $V_{e}(\vec{r})$ in a multipole expansion in the quantum dot charge density up through quadrupole terms and using this expansion to specify $V_{e}$ at the surface. This is a good approximation since the pillar boundaries are far from the localized quantum dot charge. [@bib:Fonseca98] To obtain a self-consistent solution to the coupled Poisson and Schrödinger equations, we first specify the device structure including the $Al_{x}Ga_{1-x}As$ alloy composition, the doping concentration, and the arrangement of the electrodes. In all our runs, the source and drain are assumed to be grounded and the gate is assumed to be negatively biased. We initially assume complete depletion in the structure and solve the Poisson equation to obtain an initial guess for the electrostatic potential, $V(\rho,z)$. With this electrostatic potential and the quantum well band offset potentials, we solve the Schrödinger equation for the unoccupied quantum dot energies and wavefunctions. The chemical potential in the depleted structure is set to the minimum of the Thomas-Fermi electron potential, $U = -q \ V(\rho,z) + \Delta E_{c}$, in the source and drain regions. Starting with these initial guesses for the chemical potential in the leads and the solutions of the Poisson and Schrödinger equations, we obtain self-consistent solutions through the following relaxation procedure.
First, electron densities in the leads and the quantum dots are obtained from the chemical potential, the temperature, and the quantum dot electronic states. The global charge density, including the given doping charge, is then obtained. Next, the Poisson equation is solved for $V(\rho,z)$. The Hartree potential and exchange-correlation potentials are then obtained from $V(\rho,z)$ and the electron charge density. With the electron potentials in hand, the Schrödinger equation is solved in each quantum dot region. The procedure is then repeated until convergence is achieved. In updating the electrostatic potential and electron charge density, the new solutions are mixed with the old to obtain the updated solutions. For the electrostatic potential $$V(\rho,z) \rightarrow \lambda \ V_{new}(\rho,z) + (1-\lambda) \ V_{old}(\rho,z)$$ where $\lambda < 1$ is a relaxation parameter which is dynamically adjusted to accelerate convergence. A similar scheme is used to update the electron charge density. The above procedure is iterated until the chemical potential, electrostatic potential, electron charge density, and quantum dot energy levels all change by less than some small relative tolerance between successive iterations. Typically about 400 iterations are required to achieve convergence to within one part in $10^{4}$. A Three qubit quantum register: 1D analysis =========================================== We can use the device modeling program described in the last section to obtain a design for a three qubit quantum register. We could, in principle, do a full 3D analysis of the device and obtain suitable design parameters (i.e., pillar dimensions, doping concentrations, asymmetric well shapes, electrode placement and biasing, etc.) based on our computationally intensive 3D model.
Clearly this would be prohibitively time consuming due to the size of the parameter space that would need to be investigated as well as the time required to perform each run. To narrow down the design parameters, we can take advantage of the fact that our quantum computer is operated in the extreme depletion limit and do a simple 1D analysis to gain some useful insight. Let’s assume that inside the core of stacked quantum wells (radius $R_{c}$) we have [*complete*]{} depletion and uniform doping. In this limit, the quantum dot electron potential, $U(r)$, can be expressed in cylindrical coordinates as $U(\vec{r}) = U(z) + U(\rho)$, where $U(\rho)$ is a radial potential arising from the uniform donor density and $U(z)$ is the conduction band offset potential along the growth direction. This separable potential assumption is a good approximation in the strong depletion regime where only a single electron resides in each dot. The assumption of a separable potential is commonly used in the study of quantum dot structures and enables us to consider the $z$ and $\rho$ motions separately. [@bib:Tarucha96; @bib:Jacak98] The z-directional potential $U(z)$, shown schematically in the inset of Fig. \[density\], is a step potential formed by a layer of $Al_{x}Ga_{1-x}As$ of thickness $B$ ($0 < z < B$) and a layer of $GaAs$ of thickness $L - B$ ($B < z < L$). The resulting asymmetric quantum dot/well is confined by $Al_{y}Ga_{1-y}As$ barriers with $y > x$ and the asymmetry is parameterized by the ratio $B/L$ where $0 < B/L < 1$. In the effective mass approximation, the qubit wavefunctions are $\arrowvert i \rangle = R(\rho) \ \psi_{i}(z) \ u_{s}(\vec{r})$ ($i = 0,1$). Here $R(\rho)$ is the ground state of the radial envelope function, $\psi_{i}(z)$ is the envelope function along $z$, and $u_{s}(\vec{r})$ is the $s$-like zone center Bloch function including electron spin. For simplicity, we assume complete confinement by the $Al_{y}Ga_{1-y}As$ barriers along the z direction. 
Then, the envelope function $\psi_{i}(z)$ is obtained by solving the time-independent Schrödinger equation subject to the boundary conditions $\psi_{i}(0) = \psi_{i}(L) = 0$. The energies of the qubit wavefunctions are given by $E = E_{\rho} + E_{i}$ where $E_{\rho}$ is the energy associated with $R(\rho)$ and $E_{i}$ is the energy associated with $\psi_{i}(z)$. Figure \[density\] shows the probability density, $\arrowvert \psi_{i}(z) \arrowvert ^{2}$, as a function of position, $z$, for the two qubit states $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ in a $20 \ nm$ $GaAs/Al_{0.3}Ga_{0.7}As$ asymmetric quantum dot. The barrier thickness is $B = 15 \ nm$ and the overall length of the dot is $L = 20 \ nm$. By choosing $B/L = 0.75$ and $x = 0.3$, it is found that the ground state wavefunction $\arrowvert 0 \rangle$ is strongly localized in the $GaAs$ region while the $\arrowvert 1 \rangle$ wavefunction is strongly localized in the $Al_{0.3}Ga_{0.7}As$ barrier. By appropriately choosing the asymmetric quantum dot parameters, the qubit wavefunctions can be spatially separated and a large difference in the electrostatic dipole moments can be achieved. The transition energy $\Delta E_{0} = E_{1} - E_{0}$ between $\arrowvert 1 \rangle$ and $\arrowvert 0 \rangle$ is shown in Fig. \[transition\] as a function of $B/L$ in a $20 \ nm$ $GaAs/Al_{x}Ga_{1-x}As$ asymmetric quantum dot ($L = 20 \ nm$). Several values of Al concentration $x$ are considered. In Fig. \[qubit\_energy\], we fix the Al concentration at $x = 0.2$ and plot $\Delta E_{0}$ as a function of $L$ for several values of $B/L$. The continuous curves are based on our 1D analysis and the squares are the qubit energy gaps from the three qubit self-consistent quantum register calculation described in the next section. It is clear that the transition energy can be tailored substantially by varying the asymmetry parameter.
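The localization and gap behavior just described can be reproduced with a small finite-difference diagonalization of the 1D envelope equation. This is an illustrative sketch with simplifying assumptions not made in the paper's own calculation: hard walls at $z = 0$ and $z = L$, and a single GaAs effective mass throughout.

```python
import numpy as np

# Illustrative finite-difference model of one asymmetric dot along z.
# Assumptions: hard walls at z = 0 and z = L, one GaAs effective mass,
# x = 0.3 and B/L = 0.75 as in the example of Fig. [density].
C = 0.569                                          # hbar^2/(2 m*) in eV nm^2 (m* = 0.067 m_0)
L_nm, B_nm = 20.0, 15.0                            # dot length and barrier thickness, nm
dEc = 0.6 * (1155 * 0.3 + 370 * 0.3 ** 2) * 1e-3   # band offset for x = 0.3, eV

N = 800
z = np.linspace(0.0, L_nm, N + 2)[1:-1]            # interior points; psi = 0 at the walls
h = z[1] - z[0]
U = np.where(z < B_nm, dEc, 0.0)                   # Al(0.3)Ga(0.7)As step, then GaAs

# Tridiagonal Hamiltonian H = -C d^2/dz^2 + U(z)
off = -C / h ** 2 * np.ones(N - 1)
H = np.diag(2.0 * C / h ** 2 + U) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)

psi0, psi1 = psi[:, 0], psi[:, 1]                  # |0> and |1> envelopes
dE0 = E[1] - E[0]                                  # qubit transition energy, eV
D = abs(psi0 @ (z * psi1))                         # dipole matrix element <0|z|1>, nm
```

Running this gives a ground state weighted toward the $GaAs$ layer, a first excited state weighted toward the $Al_{0.3}Ga_{0.7}As$ layer, and a transition energy on the $\sim 100 \ meV$ scale.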
With three parameters available for adjustment ($B$, $L$, and $x$), we can make $\Delta E_{0}$ unique for each dot in the register. In this way, we can address a given dot by using laser light with the correct photon energy. It is desirable that the $\arrowvert 1 \rangle$ state be the first excited level of the quantum dot. Thus, the lowest lying radial state $(0,1,0)$ should lie above the $\arrowvert 1 \rangle$ state. The radial energy gap, $\Delta E_{1}$, between the ground state, $\arrowvert 0 \rangle$, and the first radial excited state, $(0,1,0)$, is found by solving a 2D Schrödinger equation for an electron in the radial potential, $U_{r}(\rho)$. If we take the barrier in the sheath to be infinite, then in the extreme depletion limit, we have $$U_{r}(\rho) = \left\{ \begin{array}{ll} -q \ V(\rho) & \mbox{\ if $\rho < R_{c}$} \\ \infty & \mbox{\ otherwise} \end{array} \right.$$ where $V(\rho)$ is the radial electrostatic potential. For complete depletion and uniform doping, the Poisson equation for $V(\rho)$ can be solved analytically. Thus, $$V(\rho) = \frac{\pi \ q \ N_{D}^{+}}{\varepsilon} (R_{c}^2 - \rho^2)$$ where $R_{c}$ is the sheath radius and $N_{D}^{+}$ is the doping density. Numerically solving the 2D Schrödinger equation for an electron in the potential, $U_{r}(\rho)$, is straightforward. Figure \[radialgap\] shows the radial energy gap, $\Delta E_{1}$, between the $\arrowvert 0 \rangle$ and the lowest lying radial state, $(0,1,0)$, as a function of doping concentration, $N_{D}^{+}$, for several values of $R_{c}$. For narrow pillars with low doping concentrations, the radial energy gap is determined by size confinement. For large pillars with high doping concentrations, the radial energy gap is determined by electrostatic confinement. From Fig. \[qubit\_energy\], we see that the qubit energy gaps reach a minimum near $\Delta E_{0} \sim 70 \ meV$ for quantum wells with $L \sim 20 \ nm$. The results of Fig.
\[radialgap\] suggest that radial gaps in the range of $\Delta E_{1} \sim 100 \ meV$ with strong size confinement can be achieved with doping densities in the range of $N_{D}^{+} \sim 10^{17} \ cm^{-3}$ if $R_{c} \sim 70 \ \AA$. The electric field from an electron in one dot shifts the energy levels of electrons in adjacent dots through electrostatic dipole-dipole coupling. By appropriate choice of coordinate systems, the dipole moments associated with $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ are equal in magnitude but oppositely directed. The dipole-dipole coupling energy is then defined as [@bib:BarencoDeutsch95] $$V_{dd}=2 \ \frac{\arrowvert d_{1} \arrowvert \ \arrowvert d_{2} \arrowvert} {\epsilon_{r} R_{12}^{3}}, \label{Vdd}$$ where $d_{1}$ and $d_{2}$ are the ground state dipole moments in the two dots, $\epsilon_{r} = 12.9$ is the dielectric constant for $GaAs$, and $R_{12}$ is the distance between the dots. Figure \[coupling\] shows the dipole-dipole coupling energy, $V_{dd}$, between two asymmetric $GaAs/Al_{x}Ga_{1-x}As$ quantum dots of widths $L1 = 19 \ nm$ and $L2 = 21 \ nm$ separated by a $10\ nm$ $Al_{y}Ga_{1-y}As$ barrier. The coupling energy is plotted as a function of $B/L$ for several values of $x$ where $B/L$ and $x$ are taken to be the same in both dots. The dipole-dipole coupling energies are a strongly peaked function of the asymmetry parameter, $B/L$. From the figure, we see that values of $V_{dd} \sim 0.15 \ meV$ can be achieved. Quantum dot electrons can interact with the environment through the phonon field, particularly the longitudinal-optical (LO) and longitudinal-acoustic (LA) phonons. The LO phonon energy, $\hbar \omega_{LO}$, lies in a narrow band around $36.2 \ meV$. As long as the quantum dot energy level spacings lie outside this band, LO phonon scattering is strongly suppressed by the phonon bottleneck effect. Acoustic phonon energies are much smaller than the energy difference, $\Delta E$, between qubit states.
Thus, acoustic phonon scattering requires multiple emission processes which are also very slow. Theoretical studies on phonon bottleneck effects in GaAs quantum dots indicate that LO and LA phonon scattering rates including multiple phonon processes could be slower than the spontaneous emission rate [*provided that the quantum dot energy level spacing is greater than $\sim 1\ meV$ and, at the same time, avoids a narrow window around the LO phonon energy*]{}. [@bib:Inoshita; @bib:Bockelman; @bib:Benisty] In Ref. , Inoshita and Sakaki compute multi-phonon relaxation rates in spherical single-electron $GaAs$ quantum dots due to one- and two-phonon scattering by LO and LA phonons at $T = 0 \ K$ and $T = 300 \ K$. Using the results of this calculation, we estimate that multi-phonon scattering dominates the spontaneous emission only if the qubit energy level spacing is within $\sim 4 \ meV$ of the LO phonon energy. Likewise, multi-phonon LA scattering becomes important if the qubit energy gaps are smaller than $\sim 1 \ meV$. While dephasing via interactions with the phonon field can be strongly suppressed by proper designing of the structure, quantum dot electrons are still coupled to the environment through spontaneous emission and this is the dominant dephasing mechanism. Decoherence resulting from spontaneous emission ultimately limits the total time available for a quantum computation. [@bib:Ekert96] Thus, it is important that the spontaneous emission lifetime be large. The excited state lifetime, $T_{d}$, against spontaneous emission is [@bib:Ekert96] $$T_{d} = \frac{3 \hbar \ (\hbar c)^{3}}{4 e^{2}\ D^{2}\ \Delta E^{3}} ~ , \label{Td}$$ where $D = \langle 0 \arrowvert z \arrowvert 1 \rangle$ is the dipole matrix element between $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$.
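Both figures of merit, the coupling of Eq. (\[Vdd\]) and the lifetime of Eq. (\[Td\]), are one-line evaluations in eV-nm units ($e^{2} = 1.44 \ eV \ nm$ in Gaussian form, $\hbar c = 197.3 \ eV \ nm$). A sketch; the dipole lengths, spacing, and transition energy used below are representative magnitudes chosen for illustration, not the paper's tabulated design values:

```python
# Figure-of-merit estimates in eV-nm units. All inputs are representative
# magnitudes for illustration, not the tabulated values.
HBAR_EV_S = 6.5821e-16   # hbar, eV s
HBARC = 197.327          # hbar c, eV nm
E2 = 1.44                # e^2 (Gaussian convention), eV nm
EPS_R = 12.9             # GaAs dielectric constant used for V_dd in the text

def v_dd_meV(d1_nm, d2_nm, R12_nm):
    """Eq. (Vdd): V_dd = 2 |d1||d2| / (eps_r R12^3), with the dipole
    moments expressed as effective lengths |d|/e in nm; result in meV."""
    return 2.0 * E2 * d1_nm * d2_nm / (EPS_R * R12_nm ** 3) * 1e3

def lifetime_ns(D_nm, dE_eV):
    """Eq. (Td): T_d = 3 hbar (hbar c)^3 / (4 e^2 D^2 dE^3); result in ns."""
    return 3.0 * HBAR_EV_S * HBARC ** 3 / (4.0 * E2 * D_nm ** 2 * dE_eV ** 3) * 1e9
```

With effective dipole lengths of a few nm, dots $\sim 30 \ nm$ apart center to center, $D \sim 10 \ \AA$, and $\Delta E \sim 100 \ meV$, these give $V_{dd}$ of order $0.1 \ meV$ and $T_{d}$ of a few thousand $ns$, the same scales seen in Figs. \[coupling\] and \[lifetime\].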
Figure \[lifetime\] shows the spontaneous emission lifetime of an electron in qubit state $\arrowvert 1 \rangle$ for a $20 \ nm$ $GaAs/Al_{x}Ga_{1-x}As$ quantum dot as a function of asymmetry parameter, $B/L$, for several values of Al concentration, $x$. It is immediately obvious from Fig. \[lifetime\] that the lifetime depends strongly on $B/L$. Depending on the value of $x$ chosen, the computed lifetime can achieve a maximum of between $4000 \ ns$ and $6000 \ ns$. In general, the maximum lifetime increases with $x$. In Eq. (\[Td\]), the lifetime is inversely proportional to $\Delta E^{3}$ and $D^{2}$, but the sharp peak seen in Fig. \[lifetime\] is due [*primarily*]{} to a pronounced minimum in $D$. Based on these results, we can estimate parameters for a solid state quantum register containing a stack of several asymmetric $GaAs/Al_{0.3}Ga_{0.7}As$ quantum dots in the $L \sim 20 \ nm$ range separated by $10 \ nm$ $Al_{y}Ga_{1-y}As$ barriers ($y > 0.4$). An important design goal is obtaining a large spontaneous emission lifetime and a large dipole-dipole coupling energy. From Figs. \[coupling\] and \[lifetime\], we see that both can be achieved by selecting an asymmetry parameter, $B/L = 0.8$. This gives us a spontaneous emission lifetime $T_{d} = 3100 \ ns$ and a dipole-dipole coupling energy $V_{dd} = 0.14 \ meV$. The transition energy between the qubit states is on the order of $100\ meV$ ($\lambda = 12.4 \ \mu m$). In a quantum computation, the quantum register is optically driven by a laser as described in Ref. . In our example, we require a tunable infrared laser in the mid-$10\ \mu m$ range so we can individually address various transitions between coupled qubit states. A Three qubit quantum register: 3D analysis =========================================== Using the results of our simple 1D model as a starting point, we designed a three qubit quantum register by using the self-consistent device model described in Section III.
Several criteria have to be met for a viable quantum register design and the structure we obtained through trial-and-error involved tradeoffs between several design goals. For a self-consistent quantum register calculation, we assume the parameters of the free-standing quantum dot pillar structure (shown in Fig. \[pillar\]) as follows: The height of the pillar is taken to be $ {\cal L} = 1000 \ nm$ while the radii of the core and sheath are taken to be $R_{c} = 7 \ nm$ and $R = 50 \ nm$. The drain and source contacts at the top and bottom of the pillar are grounded and a cylindrical gate with a height of $400 \ nm$ is placed around the center of the pillar. Near the source and drain contacts, layers of intrinsic semiconductor serve to inhibit gate-to-source and gate-to-drain currents. The central $600 \ nm$ of the pillar is uniformly n-doped with a doping concentration of $N_{D} = 5 \times 10^{17} \ cm^{-3}$. The cylindrical sheath surrounding the core region is composed of high band gap $Al_{0.45}Ga_{0.55}As$ and serves to confine electrons to the core region. The three qubits in the core are defined by the composition profile of $Al_{x}Ga_{1-x}As$ along the pillar axis. In our structure, the Al concentration, $x$, in the core region is uniform in the radial direction. The composition profile along the pillar axis in the core region is shown in Fig. \[composition\]. The ground and first excited electronic states are the qubit states $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ and the electron charge densities for these states are shown schematically in the figure. We find that in thermal equilibrium the electrons reside entirely in the ground state $\arrowvert 0 \rangle$ for temperatures as high as $77 \ K$ since the energy gap between $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ is much greater than $kT$. This is indicated schematically by the solid circles in the diagram. 
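For the core radius and doping level just quoted ($R_{c} = 7 \ nm$, $N_{D} = 5 \times 10^{17} \ cm^{-3}$), the two radial confinement scales from the 1D analysis can be compared directly. A sketch in eV-nm units; the hard-wall Bessel-zero estimate is an illustrative simplification, not the self-consistent result:

```python
import numpy as np

# Radial confinement scales for the quoted design, in eV-nm units.
E2, EPS, C = 1.44, 12.0, 0.569   # e^2 (eV nm), dielectric constant, hbar^2/2m* (eV nm^2)

def electrostatic_depth(Nd_nm3, Rc_nm):
    """Depth of U(rho) = -q V(rho) for the analytic parabola of the 1D
    analysis: pi e^2 N_D R_c^2 / eps."""
    return np.pi * E2 * Nd_nm3 * Rc_nm ** 2 / EPS

def size_gap(Rc_nm):
    """Hard-wall disc estimate of the gap between the (0,0,0) and (0,1,0)
    radial states, from the first zeros of J_0 (2.40483) and J_1 (3.83171)."""
    return C * (3.83171 ** 2 - 2.40483 ** 2) / Rc_nm ** 2

depth = electrostatic_depth(5e-4, 7.0)   # N_D = 5e17 cm^-3 = 5e-4 nm^-3, R_c = 7 nm
gap = size_gap(7.0)
```

For this narrow pillar the size-confinement gap ($\sim 100 \ meV$) dominates the electrostatic depth ($\sim 10 \ meV$), consistent with the conclusions drawn from Fig. \[radialgap\].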
The widths $L$ of the asymmetric quantum wells/dots defining qubits $1$ through $3$ are $19.0$, $20.5$ and $22.0 \ nm$ while the $B/L$ ratios are $0.670$, $0.683$ and $0.675$ respectively. Our 1D analysis suggests that asymmetry parameters in this range will result in long spontaneous emission lifetimes and strong dipole-dipole coupling between neighboring qubits. The asymmetric quantum wells are composed of $GaAs$ and $Al_{0.2}Ga_{0.8}As$ layers and the barriers between the asymmetric dots/wells are composed of $Al_{0.45}Ga_{0.55}As$. With a properly chosen reverse gate bias, $V_{g}$, the doping region in the center of the pillar is depleted and the equilibrium Fermi level lines up so that there is exactly one electron in each dot. Single electron occupancy in the dots is necessary in order for there to be a [*well defined qubit Hilbert space*]{}. Due to shell filling effects, single electron occupancy in all three dots holds over a finite range of the gate voltage. By running our device model for several values of $V_{g}$, we find that single electron occupancy is obtained over the range $-1.56 \ V \leq V_{g} \leq -1.48 \ V$. Thus, the requirement for single electron occupancy in the quantum dots is [*maintained in the presence of gate voltage fluctuations*]{} on the order of $\Delta V_{g} \approx 0.08 \ V$. For $V_{g} = -1.5 \ V$, the self-consistent electron potential along the pillar axis, (i.e. $\rho = 0$) is shown in Fig. \[electron\_potential\] as a function of position along the pillar axis. The position along the pillar axis is measured from the drain contact at $z = 0 \ nm$ to the source contact at $z = 1000 \ nm$. Figure \[electron\_potential\] is centered on the active region of the register containing the three quantum dots and the origin of the energy scale is chosen to be the equilibrium Fermi level.
The total electron potential is approximately the sum of the self-consistent electrostatic Hartree potential and the conduction band offset potential, the self-consistent exchange-correlation potential being negligible. The self-consistent electron levels are obtained by solving the Schrödinger equation in the self-consistent potential shown in Fig. \[electron\_potential\]. In our structure, the $\arrowvert 0 \rangle$ ground states have $(n_{\rho},n_{\phi},n_{z}) = (0,0,0)$ symmetry and the $\arrowvert 1 \rangle$ states (the first excited level) in all three qubits are $(n_{\rho},n_{\phi},n_{z}) = (0,0,1)$ states. The self-consistent qubit energy gap, $\Delta E_{0}$, between the $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ states, the radial energy gap, $\Delta E_{1}$, and the spontaneous emission lifetime of the $\arrowvert 1 \rangle$ state, $\tau_{s}$, and dipole moment, $d$, for the three qubits are listed in Table \[qubit\_levels\]. From Table \[qubit\_levels\], we see that the radial energy gaps are larger than the qubit energy gaps. Another thing to note is that the qubit energy gaps are large compared to $k T$ at $T = 77 \ K$. Thus, in thermal equilibrium the electrons reside entirely in the $\arrowvert 0 \rangle$ level at $77 \ K$. This means that the initial state of the quantum register is characterized by a pure state density matrix $\hat{\rho}_{0} = \arrowvert 0,0,0 \rangle \ \langle 0,0,0 \arrowvert$. Consequently, there is [*no need for initial state preparation*]{} in our quantum register. In Fig. \[electron\_density\], the self-consistent electron probability densities in the three quantum dots are plotted as a function of position along the pillar axis. Each dot traps one electron and the probability densities in the ground and first excited states are shown as solid and dot-dashed lines, respectively. The barriers are thick enough so that electron wavefunctions in adjacent dots do not overlap. 
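The claim that the register initializes itself in the ground state is a one-line Boltzmann estimate. A quick check with a representative $40 \ meV$ qubit gap (the smallest transition energy quoted for this register is of that order):

```python
import math

# Thermal occupation check for the self-initialization claim. The 40 meV
# gap is a representative value, on the order of the smallest quoted
# transition energy for this register.
kT_77 = 8.617e-5 * 77.0    # Boltzmann constant times 77 K, eV

def excited_fraction(gap_eV, kT=kT_77):
    """Two-level thermal population of |1>: exp(-dE/kT) / (1 + exp(-dE/kT))."""
    b = math.exp(-gap_eV / kT)
    return b / (1.0 + b)

p77 = excited_fraction(0.040)
```

The $\arrowvert 1 \rangle$ population comes out well below one percent at $77 \ K$, so the pure-state initial density matrix assumed above is an excellent approximation.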
The energy levels for the three qubit quantum computer are shown in Table \[multi-qubit\] with and without the inclusion of dipole-dipole coupling between the qubits. From Table \[multi-qubit\] we see that a different energy corresponds to each three-electron state $\arrowvert i_{1}, \ i_{2}, \ i_{3} \rangle$ of the register where $i_{n} = (0,\ 1)$ labels the state of the $n$-th qubit. Transition energies between the states $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ for a given qubit are obtained by taking differences between the appropriate entries in Table \[multi-qubit\]. For the first qubit, we take differences between all three-particle states $\arrowvert 0, \ i_{2}, \ i_{3} \rangle$ and $\arrowvert 1, \ i_{2}, \ i_{3} \rangle$. In general, the transition energy between $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ for an electron in the first qubit will depend on the states, $i_{2}$ and $i_{3}$, occupied by the second and third qubits, and there can be as many as four such conditional transitions. In the absence of dipole-dipole coupling between qubits, all four conditional transition energies between $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ for a given qubit are degenerate. When dipole-dipole interactions between the qubits are considered, the four-fold degenerate conditional transition energies split into multiplets depending on which states are occupied by the electrons in neighboring qubits. The conditional transition energies between $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$ states for our three qubit register are shown in Fig. \[spectrum\] as a function of photon energy. In the absence of dipole-dipole coupling, the transition energies for the three qubits are $40.86 \ meV$, $47.14 \ meV$, and $52.31 \ meV$, respectively. When dipole-dipole interactions between qubits are taken into account, the conditional transition energies split into multiplets as shown in this figure. 
Each transition in the spectrum is labeled by the neighboring electron states which give rise to it. By performing optical $\pi$-pulses at selected conditional transition frequencies, quantum logic operations can be performed. For example, a $\pi$ pulse performed on the lowest energy transition in Fig. \[spectrum\] performs a bit flip on the first qubit provided the second qubit is in state $\arrowvert 1 \rangle$. This operation is just a Controlled-Not gate with qubit 2 as the control bit and qubit 1 as the target bit. The need to selectively perform $\pi$-pulses at the conditional transition frequencies allows us to make some preliminary estimates on the parameters of the laser system needed to drive a quantum computation. If we want to selectively drive a given transition without exciting neighboring transitions, then the bandwidth of the $\pi$-pulse needs to be less than the splitting between the two most closely spaced lines in the spectrum. From Fig. \[spectrum\], the two most closely spaced lines are spaced $\Delta \hbar \omega \approx 0.0776 \ meV$ apart. If we require that the $\pi$-pulse bandwidth is $\Delta E_{\pi} \approx 0.01 \ meV$, then the pulse length can be estimated from Heisenberg’s uncertainty principle, $\Delta E_{\pi} \Delta T_{\pi} \approx \hbar/2$, to be $T_{\pi} \approx 33 \ ps$. If we assume a square $\pi$-pulse, the magnitude of the optical electric field is given by [@bib:Mahler98] $$E_{0} = \frac{\pi \hbar}{q \ d \ T_{\pi}}$$ where $d$ is the optical dipole from Table \[qubit\_levels\] and the average Poynting vector during the pulse is [@bib:Lorraine70] $$S_{av} = \frac{1}{2} c \epsilon_{0} E_{0}^{2}$$ For $d \approx 10 \ \AA$ and $T_{\pi} \approx 33 \ ps$, we obtain $E_{0} \approx 0.627 \ kV/cm$ and $S_{av} \approx 522 \ W/cm^{2}$. Summary ======= In this paper, we have studied a solid state implementation of quantum computing based on coupled quantum dots. 
Our quantum register consists of a free-standing n-type pillar with grounding leads at the top and bottom of the structure. Asymmetric quantum wells confine electrons along the pillar axis and a high bandgap $AlGaAs$ sheath wrapped around the center of the pillar allows for confinement in the radial direction. The ground and first excited electronic states of the quantum dots act as qubit states $\arrowvert 0 \rangle$ and $\arrowvert 1 \rangle$, respectively. We have developed a 3D device model for a general SET structure containing several quantum dots. We self-consistently solve coupled Schrödinger and Poisson equations for the device and develop a design for a three qubit quantum register with asymmetric quantum dots tailored for long dephasing time and large dipole-dipole coupling between the dots. Our results indicate that a single gate electrode can be used to localize a single electron in each of the quantum dots. Adjacent dots are strongly coupled by electric dipole-dipole interactions arising from the dot asymmetry, thus enabling rapid computation rates. This study was supported, in part, by the Defense Advanced Research Projects Agency and the Office of Naval Research. L. K. Grover, Phys. Rev. Lett. [**79**]{}, 325 (1997). P. W. Shor, “Algorithms for quantum computation: Discrete logarithms and factoring,” in [*Proceedings of the 35th Annual IEEE Symposium on Foundations of Computer Science*]{} (IEEE Computer Society Press, Los Alamitos, CA, 1994), pp. 124-134. J. I. Cirac and P. Zoller, Phys. Rev. Lett. [**74**]{}, 4091 (1995); C. Monroe, D. M. Meekhof, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. [**75**]{}, 4714 (1995). T. Pellizzari, S. A. Gardiner, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. [**75**]{}, 3788 (1995); Q. A. Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, Phys. Rev. Lett. [**75**]{}, 4710 (1995). I. L. Chuang, N. Gershenfeld, and M. Kubinec, Phys. Rev. Lett. [**80**]{}, 3408 (1998); D. G. Cory, M. D.
Price, W. Maas, E. Knill, R. Laflamme, W. H. Zurek, T. F. Havel, and S. S. Somaroo, Phys. Rev. Lett. [**81**]{}, 2152 (1998). A. Shnirman, G. Schön, and Z. Hermon, Phys. Rev. Lett. [**79**]{}, 2371 (1997). N. J. Cerf, C. Adami, and P. G. Kwiat, Phys. Rev. A [**57**]{}, R1477 (1998). A. Barenco, D. Deutsch, A. Ekert, and R. Jozsa, Phys. Rev. Lett. [**74**]{}, 4083 (1995). A. A. Baladin and K. L. Wang, private communication. B. E. Kane, Nature [**393**]{}, 133 (1998). T. Tanamoto, e-print quant-ph/9902031. D. Loss and D. P. DiVincenzo, Phys. Rev. A [**57**]{}, 120 (1998); G. Burkard, D. Loss, and D. P. DiVincenzo, Phys. Rev. B [**59**]{}, 2070 (1999). S. Tarucha, D. G. Austing, T. Honda, R. J. van der Hage, and L. P. Kouwenhoven, Phys. Rev. Lett. [**77**]{}, 3613 (1996). D. Collins, K. W. Kim, W. C. Holton, H. Sierzputowska-Gracz, and E. O. Stejskal, e-print quant-ph/9910006. C. Leichtle, W. P. Schleich, I. Sh. Averbukh, and M. Shapiro, Phys. Rev. Lett. [**80**]{}, 1418 (1998); T. C. Weinacht, J. Ahn, and P. H. Bucksbaum, Phys. Rev. Lett. [**80**]{}, 5508 (1998). M. S. Sherwin, A. Imamoglu, and T. Montroy, Phys. Rev. A [**60**]{}, 3508 (1999). A. Mekis, J. C. Chen, I. Kurland, S. Fan, P. R. Villeneuve, and J. D. Joannopoulos, Phys. Rev. Lett. [**77**]{}, 3787 (1996). D. G. Cory, M. D. Price, and T. F. Havel, Physica D [**120**]{}, 82, (1998); D. G. Cory, A. E. Dunlop, T. F. Havel, S. S. Somaroo, and W. Zhang, e-print quant-ph/9809045. D. V. Averin, A. N. Korotkov, and K. K. Likharev, Phys. Rev. B [**44**]{}, 6199 (1991). M. Stopa, Phys. Rev. B [**48**]{}, 18340 (1993). M. Macucci, K. Hess, and G. J. Iafrate, Phys. Rev. B [**48**]{}, 17354 (1993). D. Jovanovic and J. P. Leburton, Phys. Rev. B [**49**]{}, 7474 (1994). M. Stopa, e-print cond-mat/9609015. G. Todorovic, V. Milanovic, Z. Ikonic, and D. Indjin, Phys. Rev. B [**55**]{}, 15681 (1997). M. Macucci, K. Hess, and G. J. Iafrate, Phys. Rev. B [**55**]{}, R4879 (1997). S. Nagaraja, P. Matagne, V. Y. Thean, J. P. 
Leburton, Y. H. Kim, and R. M. Martin, Phys. Rev. B [**56**]{}, 15752 (1997). L. R. C. Fonseca, J. L. Jimenez, J. P. Leburton, and R. M. Martin, Phys. Rev. B [**57**]{}, 4017 (1998). J. I. Pankove, [*Optical Processes in Semiconductors*]{} (Dover, New York, 1975). H. A. Bethe and R. Jackiw, [*Intermediate Quantum Mechanics*]{} (W. A. Benjamin, Reading, MA, 1968). H. J. Lee, L. Y. Juravel, J. C. Woolley, and A. J. Springthorpe, Phys. Rev. B [**21**]{}, 659 (1980). J. P. Perdew and A. Zunger, Phys. Rev. B [**23**]{}, 5048 (1981). L. Jacak, P. Hawrylak, and A. Wójs, [*Quantum Dots*]{} (Springer-Verlag, New York, 1998). T. Inoshita and H. Sakaki, Phys. Rev. B [**46**]{}, 7260 (1992). U. Bockelmann and G. Bastard, Phys. Rev. B [**42**]{}, 8947 (1990). H. Benisty, Phys. Rev. B [**51**]{}, 13281 (1995). A. Ekert and R. Jozsa, Rev. Mod. Phys. [**68**]{}, 733 (1996). G. Mahler and V. A. Weberruss, [*Quantum Networks: Dynamics of Open Nanostructures*]{} (Springer-Verlag, New York, 1998). P. Lorrain and D. Corson, [*Electromagnetic Fields and Waves*]{} (W. H. Freeman, San Francisco, 1970).

  Qubit No.   $\Delta E_{0} \ (meV)$   $\Delta E_{1} \ (meV)$   $\tau_{s} \ (ns)$   $d \ (\AA)$
  ----------- ------------------------ ------------------------ ------------------- -------------
  1           40.86                    63.7                     28000               11.7
  2           47.14                    63.2                     19000               11.5
  3           52.31                    61.9                     14000               11.4

  : Self-consistent qubit energy gap, $\Delta E_{0}$, radial energy gap, $\Delta E_{1}$, spontaneous emission lifetime, $\tau_{s}$, and optical dipole moment, $d$, for a three qubit quantum register.[]{data-label="qubit_levels"}

  Register state                   Energy (meV), uncoupled   Energy (meV), dipole-dipole coupled
  -------------------------------- ------------------------- -------------------------------------
  $\arrowvert 0 \ 0 \ 0 \rangle$   -70.161                   -70.357
  $\arrowvert 0 \ 0 \ 1 \rangle$   -17.852                   -17.833
  $\arrowvert 0 \ 1 \ 0 \rangle$   -23.017                   -22.822
  $\arrowvert 0 \ 1 \ 1 \rangle$   29.292                    29.273
  $\arrowvert 1 \ 0 \ 0 \rangle$   -29.292                   -29.311
  $\arrowvert 1 \ 0 \ 1 \rangle$   23.017                    23.213
  $\arrowvert 1 \ 1 \ 0 \rangle$   17.852                    17.871
  $\arrowvert 1 \ 1 \ 1 \rangle$   70.161                    69.966

  : Energy levels of the three-qubit register with and without dipole-dipole coupling between the qubits.[]{data-label="multi-qubit"}
--- author: - 'C. M. Aegerter, R. Günther, and R. J. Wijngaarden' title: 'A two-dimensional rough surface: Experiments on a pile of rice.' --- Introduction ============ Roughening phenomena of interfaces have been studied extensively in recent years due to their wide range of applicability. Rough interfaces appear in such diverse systems as flux propagation in superconductors [@radu], the burning of paper [@jussi], diffusion waves [@marco], bacterial colonies [@bacteria], flow through porous media [@porous] and many more [@barabasi]. Even though all of these systems have very different microscopic physics governing the processes, they can be described by simple models from a very small number of universality classes [@barabasi]. The most famous such model is described by a non-linear diffusion equation known as the Kardar-Parisi-Zhang (KPZ) equation [@KPZ]: $$\partial_t h(x,t) = \nu \Delta h(x,t) + \lambda (\nabla h(x,t))^2 + \eta (x,t),$$ where $\nu$ is the diffusion coefficient, $\eta$ is a noise term, $\lambda$ quantifies the non-linearity and $h(x,t)$ is the position of the interface. In one dimension, the scaling behavior of an interface governed by the KPZ equation can be analytically solved. The roughness is parameterized by the width of the interface given by: $$\sigma(t,L) = \left(\langle (h(x,t) - \langle h(t) \rangle_L)^2 \rangle_L \right)^{1/2}.$$ Here, $\langle \cdot \rangle_L$ denotes the average over the interface in space. For a self-affine surface, the width grows as a power law in time, $\sigma \sim t^\beta$, until saturation is reached when the correlation length becomes comparable to the system size [@barabasi]. This growth exponent, $\beta$, characterizes the dynamics of the process. After the saturation time, the width is constant in time at a value $\sigma_{sat} (L) \sim L^\alpha$, which grows as a power law with the system size [@barabasi]. This roughness exponent, $\alpha$, characterizes the structure of the interface.
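To make the width and scaling definitions concrete, here is a minimal Python sketch (ours, not the authors' code) that integrates the 1d KPZ equation with an explicit Euler-Maruyama step and evaluates $\sigma$; the grid size, the coefficients $\nu$ and $\lambda$, and the noise strength $D$ are illustrative choices.

```python
import numpy as np

def width(h):
    """Interface width: rms fluctuation of h about its spatial mean."""
    return np.sqrt(np.mean((h - h.mean()) ** 2))

def kpz_step(h, dt, dx, nu, lam, D, rng=None):
    """One explicit Euler-Maruyama step of the 1d KPZ equation
    with periodic boundaries; D sets the noise strength."""
    lap = (np.roll(h, 1) - 2.0 * h + np.roll(h, -1)) / dx**2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    eta = 0.0 if rng is None else rng.normal(0.0, np.sqrt(2.0 * D * dt / dx), h.shape)
    return h + dt * (nu * lap + lam * grad**2) + eta

# Grow a noisy interface from a flat start; sigma(t) grows roughly as t^beta
# until the correlation length reaches the system size.
rng = np.random.default_rng(0)
h = np.zeros(512)
for _ in range(5000):
    h = kpz_step(h, dt=0.01, dx=1.0, nu=1.0, lam=1.0, D=0.1, rng=rng)
```

Measuring $\beta$ and $\alpha$ from such runs requires averaging the width over many noise realizations and system sizes, exactly the averaging burden mentioned in the analysis-methods section below.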
For the KPZ equation, one obtains $\alpha$ = 1/2 and $\beta$ = 1/3 [@KPZ]. For a multi-dimensional KPZ system, however, the theoretical situation is unclear. Analytical treatments of the KPZ equation only exist in approximations [@amar; @cates; @toral], and results from numerical simulations vary greatly [@barabasi]. Similarly, experiments have up to now been restricted to a single dimension. The experimental study of a two-dimensional rough surface requires a surface reconstruction technique with enough spatial resolution to span several orders of magnitude, while at the same time having the temporal resolution to capture the dynamics of the process, which is not easily achieved. Secondly, a system has to be found that exhibits KPZ roughening in 2d and is accessible experimentally. Presently, there is some interest in combining the exact results on the KPZ equation with the concept of self-organized criticality (SOC) [@SOC]. The surface of a sandpile, which is the archetypal system to study SOC, can be mapped onto a system which follows KPZ dynamics [@alava]. This is intriguing since it brings together two established fields of research; however, it has not yet been tested experimentally. We study here the front of a 2d rice-pile and its roughening behavior, showing that it does indeed obey KPZ dynamics. We choose rice, since it has been shown in 1d that a rice-pile shows SOC behaviour [@frette]. With the surface of a rice-pile established as a roughening system, we can extend the study further to include the full 2d surface of the pile spanning an area of $\sim$1x1m$^2$. Using this system, we can determine roughening and growth exponents in 2d and compare them with theoretical predictions [@amar; @cates; @toral]. In section 2, we will discuss the experimental setup including the surface reconstruction technique based on active-light stereoscopy, as well as the growing mechanism of the pile.
In section 3, we develop the analysis techniques, with special emphasis on the generalization from known 1d methods to the 2d problem. In section 4 we present the results of the rice-pile experiment. There, it is first shown that the front behaviour in 1d does in fact obey KPZ scaling before discussing the 2d results. Those results are used to put constraints on theoretical results for 2d KPZ behavior. Experimental setup ================== The rice-pile is grown by dropping rice, uniformly distributed along a line using a custom-built dispenser. The dispenser consists of a distribution board and a sowing machine. In the sowing machine, an eccentric rotor keeps the rice in motion such that a steady flow of rice is achieved at the rate of $\sim$5 g/s. This flow of rice is subsequently distributed along a line of 1 m in the distribution board using simple geometry (see ). The principle is related to that of a pin-board producing a Gaussian distribution. In order to study the surface properties of the rice-pile, a 3d reconstruction technique was developed, based on active-light stereoscopy [@rad]. A set of colored lines is projected onto the pile at approximately right angles using an overhead projector. In the stereoscopic view [@stereo], the projector takes the place of the second camera passing its information to the camera via the colored lines. The camera itself is placed at an angle of 45 degrees to the surface of the pile and the projected lines. From this view-point the projected lines are deformed and can be used to determine the 3d structure of the surface in the same way as iso-height-lines do on a map. An example of such a reconstruction is shown in . Measurements on test objects show that a surface of 1x1 m$^2$ can be reconstructed with an accuracy of 1-2 mm, which is comparable to the size of the rice grains and thus suited for the present purpose. 
The use of differently colored lines allows for better filtering and thus for better identification of the lines in the computer. Analysis methods ================ As noted in the introduction, rough surfaces are often analyzed using the width of the interface to characterize its structure and dynamics. However, in order to obtain reliable results, many experiments have to be averaged over when using such a method [@barabasi]. A more promising way of analysis, which has been extensively used in the analysis of 1d experiments, is via the two-point correlation function [@barabasi] $$C(x,t) = \langle (h(\xi,\tau) - h(\xi+x,\tau+t))^2\rangle_{\xi,\tau}^{1/2}.$$ In both space and time the scaling behaviour of the correlation function is the same as that of the width, thus making it possible to determine the growth and roughness exponents from $C(x,t)$. In addition, the growth exponent can be determined from data obtained after the saturation time, since in the correlation function only time differences are important. When generalizing the method to 2d, computational difficulties arise. Because of the number of points to compare, the number of operations to be carried out to determine the correlation function grows with the fourth power of the size of the surface. However, tests on small surfaces indicate that the radial average of $C(x,y,t)$ scales like the 2d local width, but due to this computational inefficiency we used a different method to determine the roughness and growth exponents. The power spectrum, or structure function [@power], can be determined easily for 1d and 2d systems from the square of the Fourier transform $\hat{h}(k_x,k_y)$ of the local height $h(x,y)$ $$S(k_x,k_y) = |\hat{h}(k_x,k_y)|^2.$$ Here, the computational load is just given by determining the Fourier transforms, which also in 2d only grows with the square of the size of the surface, thus making it feasible to calculate the distribution function of the whole rice-pile surface.
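The structure-function route described above can be sketched in a few lines of Python (an illustration, not the authors' code): the 2d power spectrum follows from an FFT, a radial average gives a 1d spectrum, and Parseval's theorem ties the summed spectrum back to the rms width discussed in the next paragraph.

```python
import numpy as np

def structure_function(h):
    """2d power spectrum S(kx,ky) = |FFT2(h)|^2, with the DC (mean) term removed."""
    H = np.fft.fft2(h)
    S = np.abs(H) ** 2
    S[0, 0] = 0.0   # drop the mean height
    return S

def radial_average(S, nbins=32):
    """Radially averaged spectrum S(k), for comparison with 1d scaling."""
    ny, nx = S.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2).ravel()
    edges = np.linspace(0.0, k.max(), nbins + 1)
    idx = np.clip(np.digitize(k, edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=S.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

# Parseval's theorem: the square root of the summed spectrum, normalized by
# the number of pixels, equals the rms width of the surface.
rng = np.random.default_rng(1)
h = rng.normal(size=(64, 64))
width_fft = np.sqrt(structure_function(h).sum()) / h.size
width_real = h.std()   # the two agree to machine precision
```

For a self-affine surface the radially averaged spectrum is a power law in $k$, whose slope yields the roughness exponent.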
The square root of the integral of $S(k_x,k_y)$ over k-space, the distribution function $\sigma(k_x,k_y)$, $$\sigma(k_x,k_y) = \left(\int\int S(k_x,k_y) dk_xdk_y \right)^{1/2}$$ is equal to the rms-width of the interface [@power]. Thus the distribution function also has the same scaling behaviour as the width and can therefore be used to determine the roughness and growth exponents. Again, a radial average of $\sigma(k_x,k_y)$, $\sigma(k)$, scales like the 2d local width, thus making comparisons with previous simulations of ballistic deposition models possible. Moreover, real 2d measures like the distribution function can also give information about anisotropies of the scaling in the x- and y-directions. The distribution function is also useful in investigations of the dynamics of the processes. In that case, the square of the Fourier transform $\hat{h}(\omega)$ of the time dependence $h(t)$ has to be determined. The Fourier transforms are determined using an FFT algorithm after padding the data with zeros to the next power of two. Results and discussion ====================== In order to determine that the rice-pile surface does in fact follow KPZ behaviour we first determined the roughness and growth exponents of the front of the pile, given by the line of equal height of the pile at 0.1 m. The distribution functions determined in both space, $\sigma(k)$, and time, $\sigma(\omega)$, as well as the correlation functions $C(x,t)$ can be seen in , where the values of $\alpha = 0.48(3)$ and $\beta = 0.33(3)$ can be inferred. These values are in excellent agreement with the expectations from the KPZ equation thus establishing that KPZ behaviour does appear in SOC systems. The 2d distribution function, $\sigma(k_x,k_y)$, which characterizes the roughening of the whole surface is shown in on a triple-logarithmic plot. In the insert, the angular dependence of a power-law fit to $\sigma$ is shown. 
This indicates a dependence of the roughness exponent $\alpha$ on the direction, which shows the anisotropy of the system. Such an anisotropy most probably arises from the growth mechanism of the pile, which is seeded from a horizontal line, thus breaking the symmetry of the x- and y-directions. It should be noted here that the exponents corresponding to the x- and y-directions do not have to agree with those determined in a 1d analysis. This is because $\sigma(k_x,0)$ already includes data from the y-direction due to the complex nature of the Fourier transform. Thus, $\sigma(k_x,0)$ already presents an effective 2d measure. The radial average of the distribution function, $\sigma(k)$, is shown in a, where the value of the roughness exponent can also be determined. We obtain $\alpha_{2d} = 0.39(3)$, which is also in agreement with the average of the exponents determined as a function of angle. In addition, the temporal behavior, $\sigma(\omega)$ is shown in b, where we determine the 2d growth exponent to be $\beta_{2d} = 0.27(3)$. Both the roughness and growth exponents determined experimentally are in very good agreement with the conjecture derived from solid-on-solid models by Kim and Kosterlitz [@kim] for higher dimensional exponents given by $\alpha$ = 2/(d+3) and $\beta$ = 1/(d+2). Numerical results from integrating the 2d KPZ equation vary greatly, with values of $\alpha_{2d}$ ranging from 0.18 [@toral] to 0.39 [@amar] and $\beta_{2d}$ ranging from 0.1 [@toral] to 0.25 [@amar]. Our experimental results are in good agreement with the numerical values of Amar and Family [@amar], as well as Bouchaud and Cates [@cates] corresponding to the high range of the values, while excluding most of the other numerical investigations into 2d KPZ behavior. Conclusions =========== We have presented an experimental study on roughening in a 2d system. The surface of a rice-pile is measured with a reconstruction technique based on active-light stereoscopy. 
In 1d, the front of the rice-pile as it is grown shows excellent agreement with the 1d KPZ universality class, with exponents $\alpha$ = 0.48(3) and $\beta$ = 0.33(3) from a scaling-regime spanning more than two decades. Thus having established the KPZ nature of the system under study, we analyze the full 2d surface of the pile, where we find a roughness exponent of $\alpha_{2d}$ = 0.39(3) and a growth exponent of $\beta_{2d}$ = 0.27(3). This is consistent with numerical simulations for ballistic deposition models [@kessler; @meakin] and puts a strong experimental constraint on the available results on 2d KPZ simulations. Our results are in good agreement though with the results of Amar and Family [@amar], as well as Bouchaud and Cates [@cates] from numerical integration of the 2d KPZ equation. In addition, however, we have studied the dependence of the exponent on the direction, where we find that the system is anisotropic with a somewhat higher exponent along the front direction. This is probably related to the difference between the two directions due to the growth mechanism. Acknowledgements ================ This work was supported by the Swiss National Science Foundation and by FOM (Stichting voor Fundamenteel Onderzoek der Materie), which is financially supported by NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek). [99]{} , R., [*et al.*]{}, “Kinetic roughening of penetrating flux fronts in high T$_c$ superconducting thin films.”, [*Phys. Rev. Lett. **83***]{} (1999), 2054–2057. , J., [*et al.*]{}, ”Kinetic roughening in slow combustion of paper”, [*Phys. Rev. Lett. **79***]{} (1997), 1515–1518; [Myllys]{}, A., [*et al.*]{}, ”Kinetic roughening in slow combustion of paper”, [*Phys. Rev. E **64***]{} (2001), 036101. , M.S., private communication. , E., [*et al.*]{}, ”Communication, regulation and control during complex patterning of bacterial colonies”, [*Fractals **2***]{} (1994), 15–44. , S., G.L.M.K.S. Kahanda, and P.-Z.
Wong, ”Roughness of wetting fluid invasion fronts in porous media”, [*Phys. Rev. Lett. **69***]{} (1992), 3731–3734. , A.-L., and H.E. Stanley [*Fractal Concepts in Surface Growth*]{}, Cambridge University Press (1995). , M., G. Parisi, and Y.-C. Zhang, ”Dynamic scaling of growing interfaces”, [*Phys. Rev. Lett. **56***]{} (1986), 889–892. , J.G., and F. Family, ”Numerical solution of a continuum equation for interface growth in 2+1 dimension”, [*Phys. Rev. A **41***]{} (1990), 3399–3402. , J.P., and M.E. Cates, ”Self-consistent approach to the KPZ equation”, [*Phys. Rev. E **47***]{} (1993), R1455–R1458. , A., and R. Toral, ”Numerical study of a model for interface growth.”, [*Phys. Rev. B **40***]{} (1989), 11419–11421. , P., C. Tang, and K. Wiesenfeld, ”Self-organized criticality: An explanation of 1/f noise.”, [*Phys. Rev. Lett. **59***]{} (1987), 381–384 and ”Self-organized criticality.”, [*Phys. Rev. A **38***]{} (1988), 364. See also [Bak]{}, P., [*How nature works*]{} (Oxford Univ. Press, 1995). , G.J., M.J. Alava, and J. Kertesz, ”Self-organized criticality in the Kardar-Parisi-Zhang-equation”, cond-mat/0112297; [Alava]{}, M.J., and K.B. Lauritsen, ”Quenched noise and over-active sites in sandpile dynamics”, [*Europhys. Lett. **53***]{} (2001), 569–572. , V., [*et al.*]{}, ”Avalanche dynamics in a pile of rice”, [*Nature (London) **379***]{} (1996), 49–51. , R., [*Reconstruction and Roughening of two-dimensional granular surfaces*]{}, Masters Thesis, Vrije Universiteit (2002). , Z., and G. Xu, [*Epipolar Geometry in Stereo, Motion and Object Recognition: A unified Approach*]{}, Kluwer Academic Publishers, (1996). , J., J.-P. Vilotte, and S. Roux, ”Reliability of self-affine measurements”, [*Phys. Rev. E **51***]{} (1995), 131; [Siegert]{}, M., ”Determining exponents in models of kinetic surface roughening”, [*ibid. **53***]{} (1996), 2309; [Lopez]{}, J.L., M.A. Rodriguez, and R. Cuerno, [*ibid. **56***]{} (1997), 3993. , J.M., and J.M. 
Kosterlitz, ”Growth in a restricted solid-on-solid model.”, [*Phys. Rev. Lett. **62***]{} (1989), 2289–2292. , R., [*et al.*]{}, ”Dynamical scaling of the surface of finite-density ballistic aggregation.”, [*Phys. Rev. A **38***]{} (1988), 3672–3678. , P., [*et al.*]{}, ”Ballistic deposition on surfaces.”, [*Phys. Rev. A **34***]{} (1986), 5091–5103.
--- abstract: 'Very recently the Dark Energy Survey (DES) Collaboration has released their second group of dwarf spheroidal (dSph) galaxy candidates. With the publicly-available Pass 8 data of Fermi-LAT we search for $\gamma-$ray emissions from the directions of these eight newly discovered dSph galaxy candidates. No statistically significant $\gamma-$ray signal has been found in the combined analysis of these sources. With the empirically estimated J-factors of these sources, the constraint on the annihilation channel of $\chi\chi \rightarrow \tau^{+}\tau^{-}$ is comparable to the bound set by the joint analysis of fifteen previously known dSphs with kinematically constrained J-factors for the dark matter mass $m_\chi>250$ GeV. In the direction of Tucana III (DES J2356-5935), one of the nearest dSph galaxy candidates that is $\sim 25$ kpc away, there is a weak $\gamma-$ray signal and its peak test statistic (TS) value for the dark matter annihilation channel $\chi\chi\rightarrow \tau^{+}\tau^{-}$ is $\approx 6.7$ at $m_\chi \sim 15$ GeV. The significance of the possible signal likely increases with time. More data are needed to pin down the physical origin of such a GeV excess.' author: - Shang Li - 'Yun-Feng Liang' - 'Kai-Kai Duan' - 'Zhao-Qiang Shen' - 'Xiaoyuan Huang$^\ast$' - 'Xiang Li$^\ast$' - 'Yi-Zhong Fan$^\ast$' - 'Neng-Hui Liao' - Lei Feng - Jin Chang bibliography: - 'refs.bib' title: 'Search for gamma-ray emission from eight dwarf spheroidal galaxy candidates discovered in Year Two of Dark Energy Survey with Fermi-LAT data' --- Introduction ============ The nature of dark matter particles is still unknown and among various speculated particles weakly interacting massive particles (WIMPs) are the most popular candidates [@Jungman:1995df; @Bertone:2004pz; @Hooper:2007qk; @Feng:2010gw].
WIMPs may annihilate or decay and then produce stable high-energy particle pairs such as electrons/positrons, protons/antiprotons, neutrinos/anti-neutrinos, $\gamma-$rays and so on. The main goal of the so-called indirect detection experiments is to identify cosmic rays or $\gamma-$rays with a dark matter origin [@Jungman:1995df; @Bertone:2004pz; @Hooper:2007qk; @Feng:2010gw]. The charged cosmic rays are deflected by the magnetic fields and their energy spectra would also be (significantly) modified during their propagation. As a result, the dark-matter origin of some cosmic ray anomalies$-$for example, the well-known electron/positron excesses [@Chang:2008aa; @Adriani:2008zr; @Adriani:2011xv; @FermiLAT:2011ab; @Aguilar:2013qda; @Aguilar:2014fea] $-$ is hard to reliably establish. The morphology of prompt $\gamma-$rays from annihilation or decay, instead, directly traces the dark matter spatial distribution, and it is therefore possible to choose regions in the sky with high dark matter density to investigate the dark matter properties. The annihilation signal is expected to be the brightest in the Galactic center but the astrophysical backgrounds are very complicated there [@Bertone:2004pz; @Hooper:2007qk]. That is why the dark matter annihilation origin of the GeV excess in the inner Galaxy [@Goodenough:2009gk; @2009arXiv0912.3828V; @Hooper:2010mq; @Hooper:2010im; @Abazajian:2012pn; @Gordon:2013vta; @Hooper:2013rwa] has not been widely accepted yet, though its significance has been claimed to be as high as $\sim 40 \sigma$ [@Daylan:2014rsa] and this excess is found to be robust across a variety of models for the diffuse galactic $\gamma-$ray emission [@Zhou:2014lva; @Calore:2014xka; @Huang:2015rlu; @TheFermi-LAT:2015kwa].
The dwarf Spheroidal (dSph) galaxies are widely believed to be favorable targets with high signal-to-noise ratio [@Lake:1990du; @Baltz:2004bb; @Strigari:2013iaa], because on the one hand these objects are very nearby and on the other hand they are far away from complicated emission regions. Several searches for gamma-ray emissions from dwarf galaxies detected by Sloan Digital Sky Survey (SDSS), which covers the northern-hemisphere [@Simon:2007dq], and earlier experiments [@York:2000gk; @McConnachie:2012vd] have been performed using Fermi-LAT data, and none of them reported a significant detection [@Ackermann:2011wa; @GeringerSameth:2011iw; @Tsai:2012cs; @Mazziotta:2012ux; @2012PhRvD..86b3528C; @Ackermann:2013yva; @Ackermann:2015zua; @2015PhRvD..91h3535G; @2015arXiv151000389B]. The ongoing Dark Energy Survey (DES) [@Abbott:2005bi; @2016arXiv160100329D] is instead a southern-hemisphere optical survey and in early 2015 the DES Collaboration released their first group of dSph galaxy candidates [@Bechtol:2015cbp; @Koposov:2015cua]. Shortly after that, another dSph galaxy candidate (Triangulum II) was discovered with the data from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) [@Laevens:2015una] and a few additional candidates were reported by other collaborations [@martin15_hydra; @kim15_pegasus; @kim15_horologium; @laevens15_3dsph]. Though a reliable J-factor is not available for most newly-discovered sources, the velocity dispersion measurements strongly suggest that some sources (e.g. Triangulum II, Horologium II) are indeed dark matter-dominated dSphs [@kim15_horologium; @kirby15_tri2]. 
The analysis of the publicly-available Fermi-LAT Pass 7 Reprocessed data found moderate evidence for $\gamma-$ray emission from Reticulum 2 [@Geringer-Sameth:2015lua; @Hooper:2015ula] and the signal was found to be consistent with the Galactic GeV excess reported in [@Hooper:2010mq; @Hooper:2010im; @Abazajian:2012pn; @Gordon:2013vta; @Hooper:2013rwa; @Daylan:2014rsa; @Zhou:2014lva; @Calore:2014xka; @Huang:2015rlu; @TheFermi-LAT:2015kwa]. Interestingly, the J-factor of Reticulum 2 was later found to be among the largest of Milky Way dSphs [@2015ApJ...808L..36B]. The analysis of the Fermi-LAT Pass 8 data in the direction of Reticulum 2, however, found a $\gamma-$ray signal with a maximum local significance of only $\sim 2.4\sigma$ over all dark matter masses and annihilation channels [@Drlica-Wagner:2015xua]. Very recently the DES Collaboration has released their second group of new dSph galaxy candidates [@Drlica-Wagner:2015ufc]. In this work we search for possible $\gamma-$ray emission from the directions of these recently discovered candidates (hereafter the DES Y2 dSph galaxy candidates). Data analysis ============= In this paper, we used the newly released Pass 8 data to search for gamma-ray emission from these DES Y2 dSph galaxy candidates. The Pass 8 data benefit from an extended energy reach (from $0.1-300$ GeV to $60~{\rm MeV}-500~{\rm GeV}$), an improved effective area, in particular in the low energy range, and an improved point-spread function [@Atwood:2013rka]. Thanks to such improvements, the differential point-source sensitivity improves by 30-50% in P8R2\_SOURCE\_V6 data relative to P7REP\_SOURCE\_V15 data [@Drlica-Wagner:2015xua], which makes the instrument more sensitive to faint sources like dSph galaxies. We used the Fermi-LAT data collected from 2008 August 4 to 2015 August 4 that have passed the P8R2 SOURCE event class selections from 500 MeV to 500 GeV.
To suppress the effect of the Earth’s limb, the $\gamma-$ray events with zenith angles greater than $100^{\circ}$ were rejected. We use the updated standard Fermi Science Tools package (version v10r0p5) to analyze the Fermi-LAT data. The regions of interest (ROI) are selected as regions centered at the position of each DES Y2 dSph galaxy candidate. The data selected using the criteria described above were divided into 100$\times$100 spatial bins with 0.1$^{\circ}$ bin size. Following the Fermi team’s recommendation, we adopted a diffuse emission model based on the Pass 7 Reprocessed model for Galactic diffuse emission that has been scaled to account for differences in energy dispersion between Pass 7 reprocessed data and Pass 8 data [^1]. We took the approach developed in [@Tsai:2012cs; @Ackermann:2013yva; @Ackermann:2015zua] to analyze the gamma-ray emission for each dSph candidate; we refer readers to those papers for details of the approach we used. First, we carried out a standard binned likelihood fit over the entire energy range with 24 logarithmically spaced energy bins to determine the background sources using the [*[gtlike]{}*]{} tool in the Fermi Science Tools. All point-like sources from 3FGL [@Acero:2015hja] within $15^{\circ}$ of the center of each dSph galaxy were included, and new excesses in the TS map with ${\rm TS}\geq 25$ were identified as new sources and then included in the fit. The normalizations of point sources within $5^{\circ}$ and of the two diffuse backgrounds (the Galactic diffuse emission and an isotropic component) were set free, while other parameters were fixed at the 3FGL values. No component associated with the dSph was included at this step. Next, we adopted a bin-by-bin analysis following [@Tsai:2012cs; @Ackermann:2013yva; @Ackermann:2015zua]. For each ROI, a point source is added to the best-fitting model from the previous step at the position of each dSph galaxy to consider the signal from the given direction.
We modeled these dSph galaxy candidates as point-like rather than spatially extended sources, because the spatial extension of the dark matter halos of these newly-discovered objects is unknown. A likelihood profile (the likelihood as a function of the flux of the newly added putative point source) is generated for each of the 24 logarithmically spaced energy bins. Within each energy bin, we fixed all model parameters except the normalization of the newly added dSph point source, and used a power-law spectral model ($dN/dE \propto E^{-\Gamma}$) with spectral index $\Gamma=2$ for the putative dwarf galaxy source. The likelihood was scanned as a function of the flux normalization of the assumed dark matter signal in each bin independently to derive the profile. These likelihood profiles are used in the analysis of Section \[sec2\_1\]. From the same profiles, we also derive bin-by-bin energy-flux upper limits at the 95% confidence level for each dSph candidate, which are shown in Figure 1.

  ---------------- ----------------- ---------------------- -------------------------------------
  Name             $(l,b)^{\rm a}$   ${\rm Distance^{b}}$   $\log_{10}{({\rm Est.}~J)}^{\rm c}$
                   (deg)             (kpc)                  $\log_{10}({\rm GeV^{2}\,cm^{-5}})$
  DES J2204-4626   (351.15,-51.94)   $53\pm 5$              18.8
  DES J2356-5935   (315.38,-56.19)   $25\pm 2$              19.5
  DES J0531-2801   (231.62,-28.88)   $182\pm 18$            17.8
  DES J0002-6051   (313.29,-55.29)   $48\pm 4$              18.9
  DES J0345-6026   (273.88,-45.65)   $92\pm 13$             18.3
  DES J2337-6316   (316.31,-51.89)   $55\pm 9$              18.8
  DES J2038-4609   (353.99,-37.40)   $214\pm 16$            17.6
  DES J0117-1725   (156.48,-78.53)   $30\pm 3$              19.3
  ---------------- ----------------- ---------------------- -------------------------------------

$^{\rm a}$ Galactic longitudes and latitudes are adopted from [@Drlica-Wagner:2015ufc]. $^{\rm b}$ The distances are taken from [@Drlica-Wagner:2015ufc].
$^{\rm c}$ J-factors are estimated with the empirical relation $J(d)\approx 10^{18.3\pm 0.1}(d/100~{\rm kpc})^{-2}$ [@Drlica-Wagner:2015xua]. ![image](f1.eps){width="100.00000%"}

Combined constraints on the dark matter properties from the eight newly discovered dwarf galaxies {#sec2_1}
---------------------------------------------------------------------------------------------------------

The dSph galaxies are known to be dominated by dark matter. The ${\gamma}-$ray flux expected from the annihilation of dark matter particles in a dSph galaxy is given by [@Jungman:1995df; @Bertone:2004pz; @Hooper:2007qk; @Feng:2010gw] $${\Phi}(E)={\frac{<{\sigma}v>}{8{\pi}m_{\chi}^{2}}\times \frac{dN_{\gamma}}{dE_{\gamma}}\times J},$$ where ${m_{\chi}}$ is the rest mass of the dark matter particle, ${<{\sigma}v>}$ is the thermally averaged annihilation cross section, $dN_{\gamma}/dE_{\gamma}$ is the spectrum of prompt ${\gamma}-$rays resulting from dark matter particle annihilation, and $J={\int}dl\,d{\Omega}\,{\rho}(l)^{2}$ is the line-of-sight integral of the square of the dark matter density (the so-called J-factor). ![image](f21.eps){width="45.00000%"} ![image](f22.eps){width="45.00000%"} ![image](f31.eps){width="45.00000%"} ![image](f32.eps){width="45.00000%"} Utilizing the likelihood profiles derived above, we reconstructed a broadband likelihood function by multiplying the bin-by-bin likelihood functions evaluated at the fluxes predicted by a given dark matter model. We then combined the broadband likelihood functions of the eight DES Y2 dSph candidates and added an extra J-factor likelihood term for each candidate to account for the statistical uncertainties of the J-factors.
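The J-factor estimates listed in the table above follow directly from the empirical distance scaling quoted in footnote c; a short consistency check, rounding to one decimal place as in the table:

```python
import math

def estimated_log10_J(d_kpc):
    """Empirical estimate log10 J ~ 18.3 - 2 log10(d / 100 kpc), in
    GeV^2 cm^-5, following the distance scaling quoted in the text."""
    return 18.3 - 2.0 * math.log10(d_kpc / 100.0)

# reproduce the tabulated estimates for two of the DES Y2 candidates
log10_J_tuc3 = estimated_log10_J(25.0)   # DES J2356-5935 (Tucana III): 19.5
log10_J_0531 = estimated_log10_J(182.0)  # DES J0531-2801: 17.8
```

The $d^{-2}$ scaling simply reflects that, for similar halos, the J-factor is dominated by the inverse-square dilution of the annihilation flux with distance.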
The J-factor likelihood term for each dwarf galaxy is given by $$L_{\rm J}(J_{\rm obs,i},{\sigma}_{\rm i})={1 \over \ln(10)J_{\rm obs,i}\sqrt{2\pi}{\sigma}_{\rm i}}\, e^{-[\log_{10}(J_{\rm i})-\log_{10}(J_{\rm obs,i})]^{2}/{2\sigma_{\rm i}^2}},$$ where $i$ labels the target, $J_{\rm i}$ is the true value of the J-factor, and $J_{\rm obs,i}$ is its empirically-estimated value with an uncertainty of ${\sigma}_{\rm i}$ [@Ackermann:2015zua]. After combining the J-factor likelihood term with the broadband likelihood function, the likelihood function for target $i$ reads ${\widetilde L_{\rm i}}(\boldsymbol{\mu},\boldsymbol{\theta}_{\rm i}={\lbrace}\boldsymbol{\alpha}_{\rm i},J_{\rm i}{\rbrace}{\vert}D_{\rm i})=L_{\rm i}(\boldsymbol{\mu},\boldsymbol{\theta}_{\rm i}{\vert}D_{\rm i}){L_{\rm J}(J_{\rm obs,i},{\sigma}_{\rm i})}$, where $\boldsymbol{\mu}$, $\boldsymbol{\alpha}_{\rm i}$, ${J_{\rm i}}$ and $D_{\rm i}$ represent the parameters of the dark matter model, the parameters of the astrophysical background, the dSph J-factor and the gamma-ray data, respectively, and $\boldsymbol{\theta}_{\rm i}$ incorporates $\boldsymbol{\alpha}_{\rm i}$ and ${J_{\rm i}}$ [@Ackermann:2015zua]. To account for the differing quality of the direction reconstruction of individual events, we split the data into the four PSF event types (PSF0, PSF1, PSF2 and PSF3) when constructing the likelihood function, so that the broadband likelihood function for target $i$ is $L_{\rm i}(\boldsymbol{\mu},\boldsymbol{\theta}_{\rm i}{\vert}D_{\rm i})={\prod\limits_{\rm j}}L_{\rm i}(\boldsymbol{\mu},{\boldsymbol\theta}_{\rm i}{\vert}D_{\rm i,j})$, where $j$ runs over the PSF event types [@Ackermann:2015zua]. ![The TS value of the possible dark matter annihilation signal in the combined $\gamma-$ray data in the directions of seven “nearby" dSph galaxies (candidates), including Segue I, Segue II, Ursa Major II, Reticulum II, Tucana III, Cetus II and Willman I.
The dark matter annihilation channels are labeled in the plot.[]{data-label="Fig.4"}](f4.eps){width="50.00000%"} Reliable J-factors for these eight DES Y2 dSph galaxy candidates are not yet available. An empirical relation between the heliocentric distances and the J-factors of ultra-faint and classical dwarf galaxies, $J(d)\approx 10^{18.3\pm 0.1}(d/100~{\rm kpc})^{-2}$, was suggested in [@Drlica-Wagner:2015xua], where $d$ is the distance of the object from the Sun; a symmetric logarithmic uncertainty of $\pm0.4$ dex on the J-factor is assumed for each DES dSph galaxy candidate [@Drlica-Wagner:2015xua]. The estimated J-factors of the eight DES Y2 dSph galaxy candidates are presented in Table 1. The individual and combined constraints on the dark matter annihilation channels $\chi\chi \rightarrow b\bar{b}$ and $\tau^{+}\tau^{-}$ from these sources are presented in Figure 2. If the real J-factors are similar to our estimates, we can rule out the thermal relic cross section for WIMPs with $m_\chi \lesssim 25$ GeV annihilating into either $b\bar{b}$ or $\tau^{+}\tau^{-}$. We have also analyzed all 16 dSph candidates reported in [@Bechtol:2015cbp; @Koposov:2015cua; @Drlica-Wagner:2015ufc] and found that the combined constraints on the dark matter models are similar to those set by the DES Y2 data alone; hence we do not present them in this work.

$\gamma-$ray emission in the direction of Tucana III
----------------------------------------------------

At a distance of 25 kpc, Tucana III (also known as DES J2356-5935) is one of the nearest dSph galaxy candidates located at high Galactic latitude, which makes it well suited for the indirect detection of dark matter. We first performed a global binned likelihood fit in the energy range from 500 MeV to 500 GeV with a power-law spectral model (i.e., $\frac{dN}{dE} {\propto}E^{-2}$) for this dSph candidate.
Interestingly, we found a weak $\gamma-$ray “excess" in the direction of Tucana III with ${\rm TS}\approx6.0$, after adding to the model a possible weak point source (${\rm ra \approx 0.74^{\circ},~dec \approx -59.72^{\circ}}$) located about 0.8 degree away. We then applied the bin-by-bin method to analyze Tucana III further. In Figure 3 we present the TS values of the $\gamma-$ray signal in the direction of Tucana III for various annihilation channels and dark matter masses. Note that Fig. 3a is for the 7-year LAT data (i.e., from 2008 August 4 to 2015 August 4), for which the significance of the signal peaks above $2\sigma$ in each channel. In particular, for $\chi\chi\rightarrow \tau^{+}\tau^{-}$, the TS value of the fit peaks at about 6.7 for $m_\chi \sim 15$ GeV. For the adopted empirical J-factor, $\langle \sigma v\rangle_{\chi\chi\rightarrow \tau^{+}\tau^{-}}\sim 5\times 10^{-27}~{\rm cm^{3}~s^{-1}}$ is needed to reproduce the signal. For $\chi\chi\rightarrow b\bar{b}$, the TS value of the fit peaks at about 6 for $m_\chi \sim 66$ GeV, and with the adopted empirical J-factor $\langle \sigma v\rangle_{\chi\chi\rightarrow b\bar{b}}\sim 2\times 10^{-26}~{\rm cm^{3}~s^{-1}}$ is needed to account for the signal. Such $m_\chi$ and $\langle \sigma v\rangle$ are similar to those “preferred" by the Galactic GeV excess data, as well as by the possible gamma-ray signal in the direction of Reticulum 2 [@Hooper:2010mq; @Hooper:2010im; @Abazajian:2012pn; @Gordon:2013vta; @Hooper:2013rwa; @Daylan:2014rsa; @Zhou:2014lva; @Calore:2014xka; @Huang:2015rlu; @TheFermi-LAT:2015kwa; @Geringer-Sameth:2015lua; @Hooper:2015ula]. In view of the similar, though weak, signals in the directions of Reticulum 2 [@Geringer-Sameth:2015lua; @Hooper:2015ula; @Drlica-Wagner:2015xua] and Tucana III, we performed a combined analysis of these two nearby dSph candidates following the same data analysis approach.
The TS values for a dark matter annihilation signal (for the representative channels $\chi\chi \rightarrow b\bar{b}$, $\tau^{+}\tau^{-}$ and $\mu^{+}\mu^{-}$) were evaluated. Interestingly, the TS values of this GeV-excess-like signal increase sizably; for $\chi\chi\rightarrow \tau^{+}\tau^{-}$ the largest value, ${\rm TS}\approx 14$, occurs at $m_{\chi} \approx 16$ GeV. This corresponds to a local significance of $\sim3.7\sigma$, which decreases to $\sim2.3\sigma$ after the trials-factor correction, since we have selected just two sources out of the 16 DES dSph candidates. We have also analyzed the gamma-ray emission in the directions of the dSph galaxies (candidates) within 40 kpc of the Sun (Segue I, Segue II, Ursa Major II, Reticulum II, Tucana III, Cetus II and Willman I), excluding Sagittarius and Canis Major since they lie close to the Galactic plane. For $\chi\chi\rightarrow \tau^{+}\tau^{-}$ the largest value, ${\rm TS}\approx 9.2$, occurs at $m_{\chi} \approx 16$ GeV (see Figure 4). We now briefly examine possible astrophysical origins of the weak $\gamma-$ray signal in the direction of Tucana III. The “signal" is too weak to yield variability information directly. Instead, we calculated the TS values of the potential ‘GeV excess’ component in a shorter time interval, from 2008 August 4 to 2012 February 4 (i.e., the first 3.5 years of Fermi-LAT data); the results are presented in Fig. 3b. Interestingly, the TS values of the annihilation channels shown in Fig. 3a are larger than those in Fig. 3b, implying that the significance is increasing with time. Such an increase is expected for dark matter annihilation or, alternatively, for a steady astrophysical source. Radio-loud active galactic nuclei (RLAGNs) are well-known possible counterparts, given the high Galactic latitude of the possible $\gamma-$ray signal.
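The significance conversions quoted above can be checked with the asymptotic one-parameter relation $\sigma_{\rm local}\approx\sqrt{\rm TS}$ and a simple trials-factor scaling of the one-sided p-value. The trials number used below, $C(16,2)=120$ source pairs out of the 16 DES candidates, is our reading of the text rather than a stated value:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def local_sigma(ts):
    """Asymptotic one-parameter conversion: significance ~ sqrt(TS)."""
    return math.sqrt(ts)

def post_trials_sigma(ts, n_trials):
    """Scale the one-sided local p-value by the number of trials, then
    invert the p-value back to a z-score by bisection."""
    p_global = min(1.0, n_trials * (1.0 - norm_cdf(local_sigma(ts))))
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 - norm_cdf(mid) > p_global:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_local = local_sigma(14.0)              # ~3.7 sigma, as quoted
sigma_global = post_trials_sigma(14.0, 120)  # ~2.3 sigma post-trials
```

With these assumptions the quoted numbers are reproduced, but note that the true trials penalty also depends on the scan over masses and channels, which this sketch ignores.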
We note that there is a radio source, PMN J2355-5948, about $\rm 0.3^{\circ}$ away from the optical position of Tucana III. It is included in the Parkes-MIT-NRAO (PMN) surveys [@1994ApJS...91..111W] and the Sydney University Molonglo Sky Survey (SUMSS) [@Mauch:2003zh], with fluxes at 4.85 GHz and 843 MHz of $55\pm8$ mJy and $259\pm8$ mJy, respectively. Assuming a power-law radio spectrum, the radio spectral index is $\alpha_{\rm r}\simeq 0.9$ (we define the spectral index $\alpha$ as the energy index, such that $F_{\nu}\propto\nu^{-\alpha}$). Since blazars, characterized by flat radio spectra ($|\alpha_{\rm r}|\leq$ 0.5), dominate the extragalactic $\gamma$-ray sky, and $\gamma-$ray emission has been detected from only a handful of steep-spectrum RLAGNs [@Ackermann:2015yfk; @Liao:2015jfj], PMN J2355-5948 is unlikely to produce significant $\gamma-$ray emission.

Discussion and conclusion
=========================

Dwarf spheroidal galaxies are among the best targets for the indirect detection of a dark matter annihilation signal. However, the reliable optical identification of dwarf spheroidal galaxies is challenging. Before 2015, only 25 dSphs had been reported [@Simon:2007dq; @York:2000gk; @McConnachie:2012vd], and the $\gamma-$ray analyses of these sources imposed very stringent constraints on the parameters of dark matter annihilation [@Ackermann:2011wa; @GeringerSameth:2011iw; @Tsai:2012cs; @Mazziotta:2012ux; @Ackermann:2013yva; @Ackermann:2015zua]. In 2015, with the optical imaging data from the Dark Energy Survey, 16 new dwarf spheroidal galaxy candidates, including a few “nearby" sources at distances of $20-30$ kpc, were released [@Bechtol:2015cbp; @Koposov:2015cua; @Drlica-Wagner:2015ufc]. The sample of dSphs has thus grown significantly and quickly.
Although reliable estimates of the J-factors of most of these new dSph candidates are still unavailable, it is timely to analyze the $\gamma-$ray data and check whether any interesting signals are present. The $\gamma-$ray search for the first group of DES dSph candidates was reported in [@Drlica-Wagner:2015xua]. No significant gamma-ray signal was identified, and strong constraints on the dark matter annihilation channels were obtained by adopting an empirical relation between the J-factor of a dSph and its distance from us [@Drlica-Wagner:2015xua]. A very weak signal resembling the Galactic GeV excess, however, may be present in the direction of Reticulum 2 [@Geringer-Sameth:2015lua; @Hooper:2015ula; @Drlica-Wagner:2015xua]. In this work we have analyzed the publicly-available Pass 8 data of Fermi-LAT in the directions of the eight new dSph galaxy candidates discovered in Year Two of the Dark Energy Survey (see Fig.\[Fig.1\]). No statistically significant $\gamma-$ray signal is found in the combined analysis of these new sources. With the empirically estimated J-factors of these sources, the constraint on the annihilation channel $\chi\chi \rightarrow \tau^{+}\tau^{-}$ is comparable, for $m_\chi>250$ GeV, to the bound set by the joint analysis of fifteen previously known dSphs with kinematically constrained J-factors (see Fig.\[Fig.2\]). Interestingly, in the direction of Tucana III, a dSph galaxy candidate that is $\sim 25$ kpc away, there is a very weak GeV-excess-like $\gamma-$ray signal: we find ${\rm TS}\approx 6.7$ for the annihilation channel $\chi\chi\rightarrow \tau^{+}\tau^{-}$ and $m_\chi \approx 15$ GeV. The significance of the possible signal increases with time (see Fig.\[Fig.3\] for a comparison of the results for the 7-year and 3.5-year Fermi-LAT data), as expected for dark matter annihilation or, alternatively, for a steady astrophysical source.
To further check the significance of the possible gamma-ray signal, we have also analyzed the Fermi-LAT Pass 8 data in the directions of seven “nearby" dSphs, including Segue I, Segue II, Ursa Major II, Reticulum II, Tucana III, Cetus II and Willman I. For $\chi\chi\rightarrow \tau^{+}\tau^{-}$ we find the largest ${\rm TS}\approx 9.2$ at $m_{\chi} \approx 16$ GeV for the combined $\gamma-$ray data set (see Fig.\[Fig.4\]). Interestingly, the dark matter mass and annihilation cross section corresponding to this weak gamma-ray signal are consistent with those needed for the dark matter interpretation of the GeV excess. The origin of the GeV excess from the inner Galaxy is still heavily debated [@Cirelli:2015gux], and additional support for the dark matter interpretation could come from dSph galaxies, which do not suffer from the contamination of the complicated background emission toward the inner Galaxy. Though our current results seem encouraging, we caution that an astrophysical origin, or even a statistical fluctuation, remains possible for this very weak signal. More data are needed to draw a firmer conclusion. We thank the anonymous referee for helpful comments and suggestions. This work was supported in part by the 973 Program of China under grant 2013CB837000, by NSFC under grants 11525313 (i.e., the National Natural Fund for Distinguished Young Scholars), 10925315, 11361140349 and 11103084, by the Foundation for Distinguished Young Scholars of Jiangsu Province, China (No. BK2012047), and by the Strategic Priority Research Program (No. XDA04075500). $^\ast$Corresponding authors (huangxiaoyuan@gmail.com, xiangli@pmo.ac.cn, yzfan@pmo.ac.cn). [^1]: http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
---
abstract: 'We examine the energetics of Coronal Mass Ejections (CMEs) with data from the LASCO coronagraphs on SOHO. The LASCO observations provide fairly direct measurements of the mass, velocity and dimensions of CMEs. Using these basic measurements, we determine the potential and kinetic energies and their evolution for several CMEs that exhibit a flux-rope morphology. Assuming flux conservation, we use observations of the magnetic flux in a variety of magnetic clouds near the Earth to determine the magnetic flux and magnetic energy in CMEs near the Sun. We find that the potential and kinetic energies increase at the expense of the magnetic energy as the CME moves out, keeping the total energy roughly constant. This demonstrates that flux rope CMEs are magnetically driven. Furthermore, since their total energy is constant, the flux rope parts of the CMEs can be considered to be a closed system above $\sim$ 2 $R_{\sun}$.'
author:
- 'A. Vourlidas, P. Subramanian'
- 'K. P. Dere, R. A. Howard'
title: LASCO Measurements of the Energetics of Coronal Mass Ejections
---

Introduction
============

Material ejections are a common phenomenon of the solar corona. Since the first observation on 14 December 1971 [@tousey73], several thousands of CMEs have been seen [@howard85; @kahler92; @webb92; @hund97; @gos97]. Nevertheless, the mechanisms that cause a CME and the forces acting on it during its subsequent propagation through the corona are largely unknown. Of these two issues, the issue of CME propagation through the corona is by far the more amenable. Past observations have provided insufficient coverage of the CME development for several reasons: restricted field of view of the coronagraphs, frequent orbital nights and low sensitivity of the instruments.
Consequently, past studies were largely focused on either the phenomenological description and classification of CMEs or the measurement of average values for the physical properties of the events, such as speed, mass, and kinetic energy [@jackson78; @howard85]. The study of the CME energetics, in particular, was necessarily restricted to a handful of well observed events [@rust80; @webb80]. Their analysis revealed the importance of the (elusive) magnetic energy and established that the potential energy dominates the kinetic energy. It was also found that the energy residing in shocks, radio continua and other forms of radiation was insignificant in comparison to the mechanical energy of the ejected material. The lessons learned from the past resulted in a greatly improved set of instruments: the LASCO coronagraphs [@bru95], aboard the SOHO spacecraft [@dom95]. The location of the spacecraft at the L1 point permits the continuous monitoring of the Sun, while the combination of the three LASCO coronagraphs provides an unprecedented field of view from 1.1 $R_\odot$ to 30 $R_\odot$. The replacement of vidicons with CCD detectors and the very low stray light levels of the coronagraphs have led to a vast sensitivity improvement. It is now possible to routinely follow the dynamical evolution of a CME. Here, we compute basic quantities (mass, velocity and geometry) and derive quantities such as the potential, kinetic and magnetic energies of CMEs as they progress through the outer corona into the heliosphere. To our knowledge, this is the first time that detailed observations of the dynamical evolution of these quantities have been presented. These measurements are expected to provide concrete observationally-based constraints on the driving forces in CME models. For this study, we focus on a group of CMEs that share a common characteristic; namely, they resemble a helical flux rope in the C2 and C3 coronagraph images.
We choose these events for three reasons: (i) the area of a CME that corresponds to the flux rope is usually easily identifiable in the coronagraph images, (ii) their appearance can be related to the flux rope structures measured in-situ from Earth-orbiting spacecraft, and (iii) there has been extensive theoretical and observational interest in this class of CMEs. Several CMEs observed with the LASCO instrument exhibit a helical structure like that of a flux rope [@chen97; @dere99; @wood99]. The theoretical basis for flux rope configurations in solar and interplanetary plasmas is well established [e.g., @gold63; @gold83; @chen93; @low96; @kumar96; @guo96; @wu97]. These treatments envisage the helical flux rope as a magnetic structure that resides in the lower corona and erupts to form a CME. There is some debate about whether the flux rope is formed before the eruption, or whether it is formed as a consequence of reconnection processes that lead to the eruption. These arguments are related to those which consider whether the reconnection occurs above the sheared arcade which presumably forms the flux rope, or below it [@spyro99]. Neither the physical mechanisms of the initial driving impulse, nor the conditions in the corona which determine the subsequent propagation of the flux rope are very well known from observations. Theoretical models often rely on educated guesses to model both the initiation of the CME as well as its propagation through the corona. Statements about the energetics, or driving forces behind CMEs are made on these bases; for instance, @chen96 and @wu97_2 show plots of the variation of kinetic, potential and magnetic energies of CMEs as calculated from their models. The measurements we present in this paper are expected to yield some clues about the validity of the assumptions made in these models. It may be emphasized that our measurements are made only in the outer corona (2.5 $R_{\odot}$ - 30 $R_{\odot}$).
They are therefore not expected to shed much light on the energetics of the flux rope CMEs immediately following initiation, or on the initiation process itself. Our estimates of the magnetic energy of flux-rope CMEs are made on the basis of in-situ measurements of magnetic clouds near the earth. This is because flux-rope CMEs ejected from the Sun are often expected to evolve into magnetic clouds [@rust94; @kumar96; @chen97; @gopal98]. Conversely, in-situ measurements of magnetic clouds near the earth suggest that their magnetic field configuration resembles a flux rope [@burlaga88; @lep90; @farrugia95; @marubashi97]. Radio observations of moving Type-IV bursts can also probe the magnetic field in CMEs [@steward85; @rust80] but they are so rare that near-Earth measurements are the most reliable estimates of the magnetic flux. It should be borne in mind, however, that the precise relationship between CMEs and magnetic clouds and the manner in which CMEs evolve into magnetic clouds is not very well understood [@dryer96; @gopal98]. The main reason for this situation is the simple observational fact that while CMEs are best observed off the solar limb, magnetic clouds are measured near the Earth. This issue will hopefully be addressed in the near future by the next generation of space-borne instruments. The rest of the paper is organized as follows: We describe our methods of measuring the mass and position of a CME and of calculating the different forms of energy associated with it in § 2. § 3 presents the results of our measurements. We discuss caveats that accompany these results in § 4 and draw conclusions in § 5. Data Analysis ============= ![image](f1.ps){width="16cm"} ![ *Solid line:* Thomson scattering calculation of the angular dependence of the total brightness of a single electron at a heliocentric distance of 10 R$_\odot$. The curve is normalized to the brightness at $0^\circ$. 
*Dash-dot line:* Ratio of the observed relative to the actual mass of a simulated CME, centered on the plane of the sky, as a function of its angular width (see text). \[siml\]](f2.ps){width="6cm"} Mass calculations ----------------- White light coronagraphs detect the photospheric light scattered by the coronal electrons and therefore provide a means to measure coronal density. Transient phenomena, such as CMEs, appear as intensity (hence, density) enhancements in a sequence of coronagraph images. We compute the mass for a CME in a manner similar to that described by @poland81. After the coronagraph images are calibrated in units of solar brightness, a suitable pre-event image is subtracted from the frames containing the CME. The excess number of electrons is simply the ratio of the excess observed brightness, $B_{obs}$, over the brightness, $B_e(\theta)$, of a single electron at some angle, $\theta$, (usually assumed to be 0) from the plane of the sky. $B_e(\theta)$ is computed from the Thomson scattering function [@bill66]. The mass, $m$, is then calculated assuming that the ejected material comprises a mix of completely ionized hydrogen and 10% helium. Namely, $$m = {B_{obs}\over B_e(\theta)}\cdot 1.97\times10^{-24} {\rm gr}$$ After the mass image is obtained, we delineate the flux rope by visual inspection, as shown in Figure \[cartoon\]. We attempt to circumscribe the cross section of the helical flux rope as seen in the plane of the sky. The cavity seen in the white light/mass images is taken to be the interior of the flux rope, bounded by the helical magnetic field (Figure \[cartoon\]). The mass contained in the flux rope is computed by summing the masses in the pixels encompassed by the flux rope. The accuracy of the mass calculations depends on three factors: the CME depth and density distribution along the line of sight and the angular distance of the CME from the plane of the sky. 
All three factors are unknown since the white light observations represent only the projection of the CME on the plane of the sky. Some additional information can be obtained from pB measurements, but these are only occasionally available. Therefore, to convert the observed brightness to a mass measurement we have to make an assumption. Namely, we assume that all the mass in the CME is concentrated in the plane of the sky. Since CMEs are three-dimensional structures, our calculations will tend to underestimate the actual mass. To quantify the errors arising from our assumption, we performed two brightness calculations shown in Figure \[siml\]. The solid line shows the angular dependence of the quantity $B_e(\theta)$ in equation (1) normalized to its value at $0^\circ$. We see that our assumption that the ejected mass is always in the sky plane ($\theta=0^\circ$) underestimates the mass by about a factor of 2 at angles $\sim 50-60^\circ$. We expect that the CMEs in our sample are relatively close to the plane of the sky ($\theta<50^\circ$) since their flux rope morphology is clearly visible. Next, we investigate the effect of the finite width of a CME. We simulate a CME with constant density per angular bin along the line of sight, centered in the plane of the sky at a heliocentric distance of 10 R$_{\sun}$. Using equation (1) we calculate the observed mass, $m_{obs}$, for various widths and compare it to the actual mass, $m_{cme}$, for the same widths. The dash-dot line in Figure \[siml\] shows the dependence of this ratio, $m_{obs}/m_{cme}$, on the width of the CME. For angular widths similar to those of the CMEs in our sample ($\lesssim60^\circ$) the mass would be underestimated by about $\sim15\%$. Finally, we estimate the noise in the LASCO mass images from histograms of empty sky regions. The statistics in these areas show a gaussian distribution centered at zero, as expected.
We define the noise level as one standard deviation, or about $5\times10^8$ gr in the C2 frames and $3\times10^{10}$ gr in the C3 frames. The average C2 pixel signal in the measured CMEs is 10 times the noise, and the C2 pixel signal-to-noise ratio in the mass measurements is between 10 and 100. The CMEs get fainter as they propagate farther from the sun. Therefore, the pixel signal-to-noise ratio in the C3 images drops to about 3-4. These figures refer to single pixel statistics and demonstrate the quality of the LASCO coronagraphs. Our measurements are based on statistics of hundreds or thousands of pixels for each image. Therefore, the “mass” noise in our images is insignificant compared to the systematic errors involved in the calculation of a CME mass as discussed previously. In summary, these calculations suggest that the LASCO measurements tend to underestimate the CME mass by about 50%, for realistic widths and propagation angles. A more detailed analysis of CME mass calculations will appear elsewhere.

CME Energy calculations
-----------------------

In this analysis we consider only three forms of energy: potential, kinetic, and magnetic energy. These energies can be estimated from quantities measured directly in the LASCO images, like the CME area, mass and speed. Two of the many other forms of energy that can exist in the CME/corona system can be estimated based on some assumptions and educated guesses: the CME enthalpy $U$ and the thermal energy $E_T$. We will show in § 4 that the thermal energy $E_T$ is insignificant. There are several uncertainties involved in calculating the enthalpy of a CME. Firstly, the temperature structure of a CME is far from known. It is conceivable that it is composed of multithermal material. In situ measurements of magnetic clouds near the earth reveal a temperature range of $10^4-10^5$ K. Furthermore, it is not clear if the gas in the CMEs in the outer corona is in local thermodynamic equilibrium.
Nonetheless, if we assume the CME to be a perfect gas in local thermodynamic equilibrium with equal electron and ion temperatures, the enthalpy $U$ can be as large as $5E_T\,= 5nkT$. If we assume a temperature of a million degrees K and a mass of $10^{15}$ gr, this yields $U\approx 3\times10^{29}$ ergs. As will be seen later, even this upper limit for the enthalpy $U$ is lower than the kinetic and potential energies by at least one order of magnitude, except in the lower corona where it can be comparable to the kinetic energy. Furthermore, the enthalpy is directly proportional to the mass, which, as will be seen later, remains approximately constant as the CME propagates outwards. We therefore conclude that the enthalpy is a small, constant magnitude correction which can be safely neglected without affecting the overall conclusions regarding CME energetics. #### Potential Energy We define the potential energy of the flux rope as the amount of energy required to lift its mass from the solar surface. The gravitational potential energy is calculated using $$E_P = { \sum_{{\rm flux}\,{\rm rope}}} \, \int^{R}_{R_{\odot}} \, \frac{G\, M_{\odot} \, m_{i}}{r_{i}^{2}} \, dr_{i} \, ,$$ where $m_{i}$ and $r_{i}$ denote the mass and distance from sun-center respectively, of each pixel, $M_{\odot}$ is the mass of the sun, $R_{\odot}$ is the solar radius and $G$ is the gravitational constant. The summation is taken over the pixels comprising the flux rope (Figure \[cartoon\]). #### Kinetic Energy We use the center of mass of the flux rope to describe its movement. The location of the center of mass relative to the sun center is given by $$\vec{r}_{CM} = \frac{{\sum_{{\rm flux}\,{\rm rope}}}\,m_{i}\,\vec{r_{i}}}{{\sum_{{\rm flux}\,{\rm rope}}}\,m_{i}} \, ,$$ where $\vec{r}_{CM}$ is the radius vector of the center of mass and $\vec{r_{i}}$ is the radius vector for each pixel. The summation, as before, is taken over the pixels comprising the flux rope. 
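Equations (1)-(3) translate into a per-pixel computation: convert each pixel's excess brightness to a mass, then sum the gravitational terms and the mass-weighted positions over the flux-rope pixels. A minimal sketch in CGS units; the pixel masses and radii below are illustrative placeholders, not measured values:

```python
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
R_SUN = 6.96e10     # solar radius, cm

def pixel_mass(b_obs, b_e):
    """Equation (1): excess brightness over single-electron brightness,
    times 1.97e-24 g per electron (fully ionized H + 10% He mix)."""
    return b_obs / b_e * 1.97e-24

def potential_energy(masses, radii):
    """Equation (2): energy to lift each pixel mass from the solar
    surface to its observed heliocentric distance r_i."""
    return sum(G * M_SUN * m * (1.0 / R_SUN - 1.0 / r)
               for m, r in zip(masses, radii))

def center_of_mass(masses, radii):
    """Equation (3), radial component: mass-weighted mean distance."""
    return sum(m * r for m, r in zip(masses, radii)) / sum(masses)

# illustrative flux rope: 1e15 g total, spread over pixels near 10 R_sun
masses = [2.5e14, 2.5e14, 2.5e14, 2.5e14]
radii = [9.5 * R_SUN, 10.0 * R_SUN, 10.0 * R_SUN, 10.5 * R_SUN]
e_pot = potential_energy(masses, radii)   # ~1.7e30 erg
r_cm = center_of_mass(masses, radii)      # 10 R_sun
```

Because the integral in equation (2) closes analytically as $GM_{\odot}m_i(1/R_{\odot}-1/r_i)$, the potential energy needs no numerical quadrature.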
We calculate $\vec{r}_{CM}$ for each CME frame as it progresses through the LASCO field of view. In other words, we compile a table of center-of-mass locations versus time, ($\vec{r}_{CM}\, , t$). By fitting a second-degree polynomial to ($\vec{r}_{CM}\, , t$) we obtain the center-of-mass velocity, $\vec{v}_{CM}$, and acceleration, $\vec{a}_{CM}$. The calculation of the speed and acceleration as described above has the advantage of involving only the measurement of the CME center of mass. Once the flux rope is delineated, its mass, speed and energetics follow. The kinetic energy is simply $$E_K = \frac{1}{2}\, \sum_{{\rm flux}\,{\rm rope}} \, m_{i} \, v_{CM}^{2} \, .$$ Note that these measurements are based on the plane-of-the-sky location of the center of mass. The speed used in the calculations is therefore a projected quantity and not the true radial speed. It follows that the derived kinetic energies are lower limits. The same applies to all of our observed and derived quantities, which facilitates the comparison among the different events. #### Magnetic Energy The calculations of the potential and kinetic energies of flux rope CMEs are made directly from the mass images. On the other hand, the values we use for the magnetic energy of these CMEs are only estimates, because the magnetic field strength in a CME is unknown. In-situ measurements by spacecraft like WIND yield the magnetic field contained in magnetic clouds observed near the earth. As mentioned in § 1, helical flux-rope CMEs are thought to evolve into magnetic clouds similar to those observed at the earth. Therefore, measurements of the magnetic flux contained in such magnetic clouds are expected to be fairly representative of that carried by flux rope CMEs.
The magnetic energy carried by a flux rope CME is defined by $$E_M = \frac{1}{8\,\pi} \int_{{\rm flux}\,{\rm rope}} B^{2} dV\, ,$$ where $B$ is the magnetic field carried by the flux rope, and the integration is carried out over the volume of the flux rope. For a highly conducting medium such as the heliosphere, the magnetic flux, $\int B dA$, is frozen into the CME as it evolves to form a magnetic cloud. The magnetic flux measured in-situ is therefore taken to be the same as that contained in the CME as it passes through the LASCO field of view. We use this frozen-flux assumption since we feel that it is a simple, physically motivated one. Another assumption which gives very similar results is conservation of magnetic helicity [@kumar96]. The volume integral in equation (5) contains another unknown: the volume occupied by the flux rope. Assuming a cylindrical flux rope with constant magnetic field, equation (5) is approximated as $$E_M \sim \frac{1}{8\,\pi}\, \frac{l}{A} \, (B\cdot A)^2 ,$$ where $A$ is the area of the flux rope as measured in the LASCO images and $l$ is the length of the flux rope. The quantity $B\cdot A$ is the magnetic flux frozen into the flux rope and is conserved. For our purposes, we need, in equation (6), a representative value for the magnetic flux of a flux rope. We obtain such an estimate from model fits [@lep90] to several magnetic clouds observed by WIND between 1995–1998, available at <http://lepmfi.gsfc.nasa.gov/mfi/mag_cloud_pub1p.html>. We only consider clouds that occurred in the same time interval as the LASCO CMEs (1997-98). From this sample we get the average magnetic flux $\langle B\cdot A\rangle = (1.3\pm1.1) \times10^{21}$ G cm$^2$, which we insert in equation (6). The resulting magnetic energy uncertainty is then $(1.1/1.3)^2 \approx 70\%$. To calculate the magnetic energy, we also need the length $l$ of the rope along the line of sight.
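The order of magnitude implied by equation (6) can be sketched numerically; the cross-sectional area and length below are assumed illustrative values (only the average WIND flux is taken from the text):

```python
import math

# Order-of-magnitude sketch of the magnetic-energy estimate,
# E_M ~ (1/8*pi) * (l/A) * (B.A)^2, using the average WIND
# magnetic-cloud flux quoted in the text.  The area A and length l
# below are illustrative assumptions, not measurements.

RSUN = 6.96e10                 # cm
flux = 1.3e21                  # <B.A> in G cm^2 (WIND average, from text)
A = (5.0 * RSUN) ** 2          # assumed flux-rope cross-section, cm^2
l = 10.0 * RSUN                # assumed length ~ heliocentric height, cm

E_M = (1.0 / (8.0 * math.pi)) * (l / A) * flux ** 2
print(f"E_M ~ {E_M:.1e} erg")  # a few times 1e29 erg for these inputs

# The ~70% flux uncertainty enters through the square of the flux:
rel_err = (1.1 / 1.3) ** 2     # ~0.72, i.e. about 70%
```

For these assumed dimensions the estimate lands at a few $\times10^{29}$ ergs, comparable to the potential and kinetic energies measured for the events.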
Since the true length of the rope cannot be obtained observationally, we assume that the flux rope expands in a self-similar manner, with its length proportional to its heliocentric height; namely, $l\sim r_{CM}$.

\begin{tabular}{ccccc}
970223 & 02:55 & 89 & 63 & 920\\
970413 & 16:12 & 260 & 42 & 520\\
970430 & 04:50 & 83 & 70 & 330\\
970813 & 08:26 & 272 & 36 & 350\\
971019 & 04:42 & 90 & 77 & 263\\
971030 & 18:21 & 85 & 50 & 215\\
971031 & 09:30 & 260 & 54 & 476\\
971101 & 20:11 & 272 & 57 & 264\\
980204 & 17:02 & 284 & 43 & 420\\
980224 & 07:55 & 88 & 32 & 490\\
980602 & 09:37 & 246 & 47 & 600\\
\end{tabular}

Finally, we emphasize that the magnetic cloud data used here are only representative. They are not measurements from the same LASCO events we analyzed. Also, the magnetic flux in individual events can differ from the average value we adopted. Furthermore, the magnetic field values we use refer to the total (toroidal + poloidal) magnetic field contained in the flux rope. The definition of $B\cdot A$, however, refers only to the toroidal component of the magnetic field, which is normal to the cross-sectional area of the flux rope. For these reasons, it is difficult to ascribe errors to our magnetic energy calculations for individual events. Therefore, we decided to use the statistical uncertainty in the average flux to compute the error in the magnetic energy, which is about 70% as shown above. It is unfortunate that the magnetic energy measurements are so uncertain, and they will remain so until direct observations of the coronal magnetic field become available. Results ======= ![image](f3.ps){width="15cm"} ![image](f4.ps){width="15cm"} ![image](f5.ps){width="15cm"} ![image](f6.ps){width="15cm"} For our analysis, we searched the LASCO database for CMEs with clear flux rope morphologies.
We picked 11 events for which we compiled the evolution of the mass and velocity of the center of mass and the potential, kinetic and magnetic energies as the CME progressed through the LASCO C2 and C3 fields of view. For reference purposes we present a list of the events in Table 1. The information for the 1997 CMEs is taken from the LASCO CME list compiled by Chris St. Cyr (<http://lasco-www.nrl.navy.mil/cmelist.html>), except for the final speeds in the last column, which refer to the center of mass of the flux ropes and were calculated by us. Further information on source regions and associated photospheric/low corona emissions for some of these events can be found in the references noted in the table. Our measurements are shown in Figures \[res1\] – \[res4\]. The horizontal axis denotes heliocentric height in solar radii. Each row is a separate CME event, labeled by its date of observation by the LASCO/C2 coronagraph. The left panels show the evolution of the potential, kinetic, magnetic and total energy in the CME. The total energy is the sum of the potential, kinetic and magnetic energies. The right panels show the evolution of the flux rope mass and the center-of-mass speed. As discussed in § 2, a second-degree fit to ($\vec{r}_{CM}\, , t$) yields the acceleration of the center of mass, $\vec{a}_{CM}$. The radial component of $\vec{a}_{CM}$ is also shown in this panel. The dash-dot line, visible in some plots, marks the escape speed from the Sun as a function of height. An inspection of the plots leads to the following overall conclusions that hold for most of the events: - The total energy (curves marked with +) is relatively constant, to within a factor of 2, for the majority of the events despite the substantial variation seen in the individual energies.
This suggests that, for radii between approximately $3 R_{\odot}$ and $30 R_{\odot}$, the flux rope part of these CMEs can be considered as an isolated system; i.e., there is no additional “driving energy” other than the energies we have already taken into account (the potential and kinetic energies of the flux rope, and the magnetic energy associated with the magnetic field inside the flux rope). - We see that the kinetic and (to a lesser degree) potential energies increase at the expense of the magnetic energy, keeping the total energy fairly constant. The decrease in magnetic energy is a direct consequence of the expansion of the CME. It suggests that the untwisting of the flux rope might provide the necessary energy for the outward propagation of the CME in a steady-state situation. - The center of mass accelerates for most of the events, and the CMEs achieve escape velocity at heights of around 8-10 $R_{\sun}$, well within the LASCO/C3 field of view. - The mass in the flux rope remains fairly constant for some events (e.g., 97/08/13 or 97/10/30), while other events (e.g., 97/11/01 or 98/02/04) exhibit a significant mass increase at lower heights and tend to a constant value in the outer corona, above about $10-15$ R$_{\sun}$. This observation raises the question: why is pile-up of preexisting material observed only in some flux rope CMEs? We plan to investigate this effect further in the future. It would also be interesting to examine how the mass increase close to the Sun relates to interplanetary “snowplowing” observations [@webb96]. The only notable exception is the event of 98/06/02, which is also the most massive; its total energy increases with distance from sun center. This CME is associated with an exceptionally bright prominence which may affect the measurements. A detailed analysis of this event is presented in @plunk99.
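The kinematic procedure behind these plots (the second-degree fit to $(\vec{r}_{CM}, t)$ and the comparison with the local escape speed) can be sketched on a synthetic trajectory; the numbers below are illustrative and do not correspond to any event in Table 1:

```python
import numpy as np

# Sketch of the kinematics fit: a second-degree polynomial in time
# fitted to the center-of-mass height yields v_CM and a_CM, which can
# then be compared with the local escape speed.  The trajectory is
# synthetic (v0 = 300 km/s, a = 10 cm/s^2); not event data.

G = 6.674e-8     # cgs
MSUN = 1.989e33  # g
RSUN = 6.96e10   # cm

t = np.linspace(0.0, 4.0e4, 20)                    # s
r = 3.0 * RSUN + 3.0e7 * t + 0.5 * 10.0 * t ** 2   # cm

c2, c1, c0 = np.polyfit(t, r, 2)        # r(t) ~ c2 t^2 + c1 t + c0
v_cm = c1 + 2.0 * c2 * t[-1]            # speed at the last frame, cm/s
a_cm = 2.0 * c2                         # acceleration, cm/s^2

v_esc = np.sqrt(2.0 * G * MSUN / r[-1]) # escape speed at final height
print(v_cm / 1e5, a_cm, v_esc / 1e5)    # km/s, cm/s^2, km/s
```

For this toy trajectory the center of mass ends near $20\,R_{\sun}$ moving faster than the local escape speed, mimicking the behavior marked by the dash-dot lines in the figures.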
Discussion ========== The conclusions of the previous section are based on a set of broadband white-light coronagraph observations. The accuracy of the measurement of any structure (i.e., a CME) in such images is inherently restricted by three unknowns: the amount and distribution of the material, and the extent of the structure along the line of sight. We addressed the first two problems in § 2, where we showed that for the case of a uniformly filled CME extending $\pm 80$ degrees out of the plane of the sky, we will measure about 65% of its mass. Since the potential and kinetic energies are directly proportional to the mass, the true energies could be higher than our measurements in Figures \[res1\] - \[res4\] by as much as 35%. The spatial distribution of the material will also affect the visibility of the structures we are trying to measure. Because we delineate the area of the flux rope by visual inspection, we might not be following the same cross section as the structure evolves. This might account for some of the variability of the energy curves. However, we chose the CMEs based on their clear flux rope signatures. The measurements involve hundreds or even thousands of pixels per image, and therefore we do not expect the trends seen in the data to be affected by slight changes in the visibility of the structure. The widths along the line of sight of the observed CMEs are difficult to quantify. There is no way to measure this quantity with any instrumentation in existence today. Only the magnetic energy depends on the width of the flux rope. In § 2.2, we assumed that the length of the flux rope along the line of sight is equal to the height of its center of mass, which implies that its pre-eruption length is about a solar radius. Prominences and loop arcades of this length are not uncommon features on the solar surface. ![image](f7.ps){width="12cm"} As described in § 1, flux rope CMEs are expected to evolve into magnetic clouds near the earth.
This is the basis on which we use in-situ data to estimate the magnetic energy carried by the flux rope CMEs (§ 2). In § 2, we also estimated that the overall normalization of the magnetic energy curve is uncertain by about 70%. In summary, none of the above errors can affect the trends of the curves for a given event. Only the magnitudes of the various energies could change. Finally, some of the variability of the measured quantities could be attributed to the intrinsic variability of the corona and/or of the CME structure itself and cannot be removed without affecting the photometry. For this reason, it is rather difficult to associate an error estimate with individual measurements. Therefore, we decided not to include any error bars in our figures. The analysis of the CME dynamics in Figures \[res1\]-\[res4\] reveals an interesting trend; namely, the total energy remains constant. It appears that the flux rope part of a CME propagates as a self-contained system in which the magnetic energy decrease drives the dynamical evolution of the system. All the necessary energy for the propagation of the CME must be injected into the erupting structures during the initial stages of the event. The notions that these CMEs are indeed magnetically driven and that the thermal energy contribution can be ignored are further reinforced by the magnitude of the plasma $\beta$ parameter (Fig. \[beta\]). The calculations were performed with the assumption that the CME material is at a coronal temperature of $10^6$ K. We see that the CMEs have a very small $\beta$ (except for the events of 98/02/04 and 98/06/02), which increases slightly outwards and appears to tend towards a constant value. Such a behavior of the plasma $\beta$ parameter was predicted in the flux rope model of @kumar96. We also find that the potential energy is larger than the kinetic energy. These results confirm the conclusions from earlier Skylab measurements (see @rust80 for details).
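The plasma-$\beta$ estimate can be sketched as follows; only the $10^6$ K temperature assumption comes from the text, while the density and field strength are illustrative values chosen here:

```python
import math

# Sketch of the plasma-beta estimate: beta = gas pressure over
# magnetic pressure, (2 n k T) / (B^2 / 8 pi), for fully ionized
# hydrogen (electrons + protons) at the coronal temperature assumed
# in the text.  Density and field strength are assumed values only.

k_B = 1.381e-16   # erg/K
T = 1.0e6         # K, assumed CME temperature (as in the text)
n = 1.0e5         # cm^-3, assumed electron density in the flux rope
B = 0.1           # G, assumed flux-rope field strength

p_gas = 2.0 * n * k_B * T          # electron + proton pressure
p_mag = B ** 2 / (8.0 * math.pi)
beta = p_gas / p_mag
print(f"beta ~ {beta:.3f}")        # well below 1 for these values
```

For such outer-corona values $\beta \ll 1$, consistent with the magnetically dominated behavior shown in Fig. \[beta\].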
The relation between the helical structures seen in the coronagraph images and eruptive prominences is still unclear. In our sample, only half of the CMEs have clear associations with eruptive prominences (e.g., 97/02/23). No helical structures are visible in pre-eruption EIT 195 Å images, in agreement with past work [@dere99]. On the other hand, the flux rope of the event on 98/06/02 is very clearly located above the erupting prominence, and there is strong evidence that it was formed before the eruption [@plunk99]. It seems likely, therefore, that the process of formation of the flux rope is completed during the early stages of the eruption, at heights below the C2 field of view ($<2$ R$_{\sun}$). Such an investigation, however, is beyond the scope of this paper. ![image](f8.ps){width="12cm"} Finally, we turn our attention to the evolution of the flux rope shape as a function of height. We proceed by comparing the velocity of the CME front to its center-of-mass velocity. Because the visual identification of points along the front can be influenced by visibility changes as the CME evolves, it is susceptible to error. A better method is to use a statistical measure for the location of the front, such as the center of mass. Hence, the location of the front is defined as the center of mass of the pixels that lie within 0.1 $R_{\sun}$ of the front of the flux rope and within $\pm 25^\circ$ of the radial line that connects the sun center with the center of mass. The velocity of the front, $v_{f}$, is calculated in the same manner as $v_{CM}$ (§ 2.2). The comparison of the two velocity profiles for some representative events is shown in Figure \[front\]. Six of the eleven CMEs have profiles similar to 97/08/13 (self-similar expansion) or 97/10/30 (no expansion), while five show a progressive flattening, such as 97/04/13 or 97/11/01, similar to that found in @wood99.
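The front definition above (center of mass of pixels within 0.1 $R_{\sun}$ of the leading edge and $\pm 25^\circ$ of the radial through the flux-rope center of mass) can be sketched as a selection mask; arrays and thresholds are illustrative, not pipeline code:

```python
import numpy as np

# Sketch of the front definition: the front is the center of mass of
# the pixels within 0.1 R_sun of the leading edge and within +/-25
# degrees of the radial line through the flux-rope center of mass.
# Coordinates are plane-of-sky (x, y) from sun center, in cm.

RSUN = 6.96e10  # cm

def front_position(masses, xy, cm_angle_deg):
    """Mass-weighted position of the front-most pixel shell."""
    m = np.asarray(masses, dtype=float)
    xy = np.asarray(xy, dtype=float)
    r = np.hypot(xy[:, 0], xy[:, 1])
    ang = np.degrees(np.arctan2(xy[:, 1], xy[:, 0]))
    sel = (r > r.max() - 0.1 * RSUN) & (np.abs(ang - cm_angle_deg) < 25.0)
    w = m[sel]
    return (w[:, None] * xy[sel]).sum(axis=0) / w.sum()

# Toy sample: three pixels along the x-axis (cm_angle_deg = 0);
# only the two leading pixels fall inside the 0.1 R_sun shell.
front = front_position([1.0, 1.0, 1.0],
                       [[5.00 * RSUN, 0.0],
                        [5.05 * RSUN, 0.0],
                        [4.00 * RSUN, 0.0]], 0.0)
```

Tracking this statistic from frame to frame then yields $v_f$ by the same polynomial fit used for $v_{CM}$.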
Some theoretical flux rope models also predict flattening of the flux rope as it propagates outwards [@chen97; @wood99]. Conclusions =========== We have examined, for the first time, the energetics of 11 flux rope CMEs as they progress through the outer corona into the heliosphere. The kinetic and potential energies are computed directly from calibrated LASCO C2 and C3 images, while the magnetic energy is based on estimates from near-earth in-situ measurements of magnetic clouds. These results are expected to provide constraints on flux rope models of CMEs and shed light on the mechanisms that drive such CMEs. These measurements provide no information about the initial phases of the CME (at radii below $\sim 2 R_{\odot}$). All the measurements and conclusions hold for heights in the C2 and C3 fields of view, between 3 and $30 R_{\odot}$. The salient conclusions from an examination of 11 CMEs with a flux rope morphology are:

- For the relatively slow CMEs, which constitute the majority of events:

    - The potential energy is greater than the kinetic energy.

    - The magnetic energy advected by the flux rope is given up to the potential and kinetic energies, keeping the total energy roughly constant. In this sense, these events are magnetically driven.

- For the relatively fast CMEs with velocities $\geq$ 600 km/s (97/02/23, 98/06/02):

    - The kinetic energy exceeds the potential energy by the time they reach the outer corona (above $\sim 15 R_{\sun}$).

    - The magnetic energy carried by the flux rope is significantly below the potential and kinetic energies.

We thank D. Spicer for the initial discussions that led to this paper and the referee for his/her constructive comments. SOHO is an international collaboration between NASA and ESA and is part of the International Solar Terrestrial Physics Program.
LASCO was constructed by a consortium of institutions: the Naval Research Laboratory (Washington, DC, USA), the University of Birmingham (Birmingham, UK), the Max-Planck-Institut für Aeronomie (Katlenburg-Lindau, Germany) and the Laboratoire d’Astronomie Spatiale (Marseille, France). Andrews, M. D., & Howard, R. A. 1999, in AIP Conf. Proc. 471, Solar Wind Nine, eds. S.R. Habbal et al. (Woodbury: AIP), 629 Antiochos, S. K., Devore, C. R., & Klimchuk, J. A. 1999, , 510, 485 Billings, D. E. 1966, a Guide to the Solar Corona, (New York: Academic Press) Brueckner, G. E., et al. 1995, , 162, 291 Burlaga, L. F. 1988, , 93, 7217 Chen, J., & Garren, D. A. 1993, , 20, 2319 Chen, J. 1996, , 101, 27499 Chen, J. et al. 1997, , 490, L191 Dere, K. P. et al. 1999, , 516, 465 Domingo, V., Fleck, B., and Poland, A. I. 1995, , 162, 1 Dryer M. 1996, , 169, 421 Farrugia, C. J., Osherovich, V. A., and Burlaga, L. F., 1995, , 100, 12293 Gold, T. 1963, Proc. Pontificial Acad. of Sciences, Vatican 25, 431 Goldstein, H. 1983, in NASA Conf. Publ. 2280, Solar Wind Five, ed. M. Neugebauer, NASA CP-2280, 731 Gopalswamy, N. et al. 1998, , 25, 2485 Gosling, J. T. 1997, in AGU Geophys. Monograph 99, Coronal Mass Ejections, ed. N. Crooker, J. A. Joselyn & J. Feynman (Washington, D.C.: AGU), 9 Guo, W. P., Wu, S. T., & Tandberg-Hanssen, E., 1996, , 469, 944 Howard, R. A., Sheeley, N. R., Koomen, M., J., Michels, D. J. 1985, , 90, 8173 Hundhausen, A. G. 1997, in Cosmic Winds and the Heliosphere, eds. J. R. Jokipi, C. P. Sonett, & M. S. Giampapa (Tucson: Univ. of Arizona Press) Jackson, B. V., & Hildner, E. 1978, , 60, 155 Kahler, S. 1992, , 30, 113 Lepping, R. P., Jones, J. A., & Burlaga, L. F. 1990, , 95, 11957 Low, B. C. 1996, , 167, 217 Kumar, A., & Rust, D. M. 1996, , 101, 15667 Marubashi, K. 1997, in AGU Geophys. Monograph 99, Coronal Mass Ejections, ed. N. Crooker, J. A. Joselyn & J. Feynman (Washington, D.C.: AGU), 147 Poland, A. I. et al., 1981, , 69, 169 Plunkett, S. 
P., Vourlidas, A., & Simberova, S. 2000, , in print Rust, D. M. et al. 1980 in Solar Flares: A Monograph from Skylab Solar Workshop II, ed. P. A. Sturrock (Boulder: Colorado Univ. Press), 273 Rust, D. M., & Kumar, A. 1994, , 155, 69 Stewart, R. T. 1985 in Solar Radiophysics, eds. D. J. McLean & N. R. Labrum (Cambridge: Cambridge Univ. Press), 361 Tousey, R. 1973 in Space Research XIII, eds. M. J. Rycroft & S. K. Runcorn (Berlin: Akademie-Verlag), 713 Webb, D. F. et al. 1980 in Solar Flares: A Monograph from Skylab Solar Workshop II, ed. P. A. Sturrock (Boulder: Colorado Univ. Press), 471 Webb, D. F. 1992, in Eruptive Solar Flares, eds. Z. Svestka, B.V. Jackson & M. Machado (New York: Springer-Verlag), 234 Webb, D. F., Howard, R. A., & Jackson, B. V. 1996, in Proceedings of the Eighth Solar Wind Conf., ed. D. Winterhalter et al. (New York: AIP), 540 Wood, B. E. et al. 1999, , 512, 484 Wu, S. T., Guo, W. P., & Dryer, M., 1997, , 170, 265 Wu, S. T. et al. 1997, , 175, 719
--- abstract: | We derive a Lorentz invariant distribution of velocities for a relativistic gas. Our derivation is based on three pillars: the special theory of relativity, the central limit theorem and the Lobachevskian structure of the velocity space of the theory. The rapidity variable plays a crucial role in our results. For $v^2/c^2 \ll 1$ and $1/\beta=2 kT/m_0 c^2 \ll 1$ the distribution tends to the Maxwell-Boltzmann distribution. The mean $\langle v^2 \rangle$ evaluated with the Lorentz invariant distribution is always smaller than the Maxwell-Boltzmann mean and is bounded by $\langle v^2 \rangle/c^2=1$. This implies that for a given $\langle v^2 \rangle$ the temperature is larger than the temperature estimated using the Maxwell-Boltzmann distribution. For temperatures of the order of $T \sim {10^{12}}~ K$ and $T \sim {10^{8}}~ K$ the difference is of the order of $10 \%$ for particles with the hydrogen and the electron rest masses, respectively.\ PACS numbers: 05.20.Dd, 05.20.Jj author: - | Evaldo M. F. Curado$^{\mathrm{a,b}}$ and Ivano Damião Soares$^{\mathrm{a}}$\ *$^{\mathrm{a}}$ Centro Brasileiro de Pesquisas Fisicas*\ *$^{\mathrm{b}}$ Instituto Nacional de Ciência e Tecnologia - Sistemas Complexos*\ *Rua Xavier Sigaud 150, 22290-180 - Rio de Janeiro, RJ, Brazil* title: A Lorentz invariant velocity distribution for a relativistic gas --- In physics, it is difficult to overstate the importance of the Maxwell-Boltzmann (MB) distribution of velocities for gases, introduced by Maxwell in 1860 [@maxwell1860]. It was the first time a probability concept was introduced in a physical theory, as the existing theories at that time, such as Newtonian mechanics and wave theory, were purely deterministic.
Actually, the work of Maxwell was the starting point for Boltzmann to elaborate his research program on the evolution of a time-dependent distribution of velocities for gases, culminating in the articles of 1872 [@boltzmann1872] and 1877 [@boltzmann1877], among other important papers, which stand among the first fundamental cornerstones of the modern kinetic theory of gases and of statistical mechanics. With the implicit use of the atomic theory of matter (at that time a controversial theory), the new concept of entropy was established, having also as its starting point Maxwell's introduction of the concept of probability. Since then the MB distribution has played a fundamental role in the statistical description of gaseous systems with a large number of constituents. Indeed, in many cases it is considerably simpler, and often just as accurate, to use the MB distribution instead of the Bose-Einstein and Fermi-Dirac distributions [@balian]. However, a clear limitation of the MB distribution is its nonrelativistic character: it assigns nonzero probability to velocities larger than the velocity of light, in contradiction with the special theory of relativity. Our purpose in this paper is to derive a Lorentz invariant distribution of velocities for a relativistic gas which has the MB distribution as a limit for velocities much smaller than the velocity of light (relatively small temperatures). In our derivation we will make use of an important property connected to the Riemannian structure of the velocity spaces of Galilean relativity and of Einstein special relativity, namely, both spaces are maximally symmetric three-dimensional (3D) Riemannian spaces, differing only by the fact that in Galilean relativity the space is flat (a Euclidean space) while in Einstein special relativity the space has negative constant curvature ($R=-1/c^2$).
This will lead us (i) to use the additivity of velocities in the first case and of rapidities in the latter case, and (ii) to use Galilean transformations and Lorentz transformations as rigid translations which map, respectively, each space into itself. We will start by presenting a derivation of the MB distribution which differs from the one used by Maxwell but which is more appropriate for our purposes. Let us first consider the Galilean addition of velocities in a Euclidean space. We know that the velocities add according to the rule ${\bf v} = \sum_i {\bf v}_i $. Assuming that the velocities ${\bf v}_i$ are random variables with zero mean, and considering that the sum is over a very large number of particles, we have, by the central limit theorem [@central], that the probability distribution of velocities for the random variable ${\bf v}$ is given by $P({\bf v}) \propto \exp(-b{\bf v}^2)$, thus recovering the famous MB distribution. In the special theory of relativity let us consider a fixed inertial reference frame, say the laboratory frame. To simplify our discussion we initially assume one-dimensional (1D) motion only. Let the velocities of two particles be $v_1$ and $v_2$, with opposite signs, as measured in this inertial frame. According to the special theory of relativity the relative velocity of the two particles must be [$$\begin{aligned} \label{eq1} v=\frac{v_1+v_2}{1+v_1 v_2/c^2}\end{aligned}$$ ]{} which is invariant under Lorentz transformations. We note that the addition law (\[eq1\]) is not the usual arithmetic addition of Galilean relativity, which constitutes a basic ingredient in the central limit theorem, as shown above in our derivation of the MB distribution.
However, (\[eq1\]) can be rewritten as [$$\begin{aligned} \label{eq2} \frac{1-v/c}{1+v/c}=\Big( \frac{1-v_1/c}{1+v_1/c}\Big) \Big(\frac{1-v_2/c}{1+v_2/c}\Big),\end{aligned}$$ ]{} and can be extended to any number of particles, [$$\begin{aligned} \label{eq3} \frac{1-v/c}{1+v/c}=\prod_{i}\Big( \frac{1-v_i/c}{1+v_i/c}\Big).\end{aligned}$$ ]{} Therefore, if we take the logarithm on both sides of (\[eq3\]) and define [$$\begin{aligned} \label{si} \sigma_i \equiv \frac{1}{2} \ln \left(\frac{1+v_i/c}{1-v_i/c} \right) = \tanh^{-1}(v_i/c) \,\,,~~\sigma_i \in (-\infty,\infty)\end{aligned}$$ ]{} we can express Eq.(\[eq3\]) as [$$\begin{aligned} \label{ssoma} \sigma= \sum_i \sigma_i \, .\end{aligned}$$ ]{} Let us consider, then, a relativistic gas with a large number of particles. If we assume that the velocities $v_i$ are independent random variables with zero mean, the variables $\sigma_i$ are also independent and random with zero mean, and we have, in accordance with the central limit theorem, that the probability distribution of $\sigma$ in the interval between $\sigma$ and $\sigma+d\sigma$ is given by [$$\begin{aligned} \label{probs} P(\sigma)d\sigma= C_1 e^{-\beta \sigma^2} d\sigma \,.\end{aligned}$$]{} Using (\[si\]) and that $d \sigma = \gamma^2 dv$, where [$\gamma = 1/\sqrt{1-v^2/c^2}$]{}, we can write the probability distribution for the velocities of a one-dimensional relativistic gas as [$$\begin{aligned} \label{eq4} P(v)dv= C_1 e^{-\beta \Big( \tanh^{-1}(v/c) \Big)^2} \gamma^2 dv,\end{aligned}$$ ]{} with the normalization constant $C_1=\frac{1}{c}\sqrt{\beta/\pi}$. The factor [$\gamma^2 dv$]{} in (\[eq4\]) is the invariant line element, as shown later. As we will see, the variable $\sigma$ is in fact a particular case of the invariant distance measure in a 3D Lobachevsky space, which is the space of velocities in the special theory of relativity.
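The rapidity construction can be verified numerically: the Einstein addition law becomes ordinary addition in the variable $\sigma = \tanh^{-1}(v/c)$. A minimal check, with $c=1$ and arbitrary sample speeds:

```python
import math

# Check that the Einstein addition law (eq1) becomes ordinary
# addition in the rapidity sigma = atanh(v/c).  Units with c = 1;
# the sample speeds are arbitrary illustrative values.

def add_velocities(v1, v2):
    """1D relativistic velocity addition (c = 1)."""
    return (v1 + v2) / (1.0 + v1 * v2)

v1, v2 = 0.6, 0.7
v_direct = add_velocities(v1, v2)                        # Einstein addition
v_rapidity = math.tanh(math.atanh(v1) + math.atanh(v2))  # rapidities add
print(v_direct, v_rapidity)   # both ~0.91549, i.e. 1.3/1.42
```

Both routes give the same relative speed, which is what allows the central limit theorem to be applied to the $\sigma_i$.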
For the general case of 3D velocities we have that the relative velocity ${\bf v}$ of two particles with arbitrary velocities ${\bf v_1}$ and ${\bf v_2}$, with respect to a fixed inertial frame, is given by [$$\begin{aligned} \label{eq5} {\bf v}=\frac{{\bf v_1}-{\bf v_2}+(\gamma(v_2)-1)({\bf v_2}/v_2^2)[{\bf v_1}\cdot{\bf v_2}-v_2^2]}{\gamma(v_2)(1-{\bf v_1}\cdot {\bf v_2}/c^2)}.\end{aligned}$$ ]{} The above expression also holds with the interchange ${\bf v_1}\leftrightarrows{\bf v_2}$. The square of the modulus of the relative velocity is given by [$$\begin{aligned} \label{eq6} v^2=\frac{({\bf v_1}-{\bf v_2})^2 - (1/c^2)[{\bf v_1}\wedge{\bf v_2}]^2}{(1-{\bf v_1}\cdot {\bf v_2}/c^2)^2},\end{aligned}$$ ]{} which is symmetric with respect to [${\bf v_1}$]{} and [${\bf v_2}$]{}. The square of the relative velocity (\[eq6\]) is invariant under Lorentz transformations. Now, following Fock [@fock], let us consider the 3D space of velocities in the special theory of relativity and take two velocities infinitesimally close, namely, ${\bf v}$ and ${\bf v}+d{\bf v}$. Let $ds$ be the magnitude of the associated relative velocity divided by $c$. According to (\[eq6\]) we have [$$\begin{aligned} \label{eq7} d{s}^2=\frac{1}{c^2} \Big( \frac{d{\bf v} \cdot d{\bf v}}{1-v^2/c^2} + \frac{({\bf v} \cdot d{\bf v})^2}{c^2(1-v^2/c^2)^2} \Big).\end{aligned}$$ ]{} Defining ${\bf v}=(v^1,v^2,v^3)$ and introducing the spherical coordinate system in the velocity space, [$$\begin{aligned} \label{eq8} v^1= v \sin \theta \cos \phi,~~v^2= v \sin \theta \sin \phi,~~v^3= v \cos \theta,\end{aligned}$$ ]{} we may express (\[eq7\]) as [$$\begin{aligned} \label{eq9} d {s}^2=\frac{\gamma^4}{c^2} d v^2+ \frac{v^2 \gamma^2}{c^2} d \Omega^2\end{aligned}$$ ]{} where $d \Omega^2=d \theta^2 + \sin^2 \theta d \phi^2$.
The square root of the determinant of the metric in (\[eq9\]) is $\sqrt{g}=\gamma^4 v^2 \sin \theta/c^3$, so that the invariant element of volume of the velocity space in this coordinate system is given by [$$\begin{aligned} \label{eq10} dV=c^3 ~\sqrt{g}~dv d\theta d\phi=\gamma^4 v^2 \sin \theta ~dv d\theta d\phi.\end{aligned}$$ ]{} In particular, the invariant line element along an arbitrary direction $\theta=\phi={\rm const. }$  is  $dl=\gamma^2 ~dv$. The 3D Lobachevsky velocity space, with the metrics (\[eq7\]) or (\[eq9\]), is a maximally symmetric Riemannian space [@eisenhart] and therefore homogeneous, isotropic, with constant Gaussian curvature $R=-1/c^2$ at any point. Since the space is homogeneous and isotropic, any particular velocity may be chosen as the origin. Also, under Lorentz transformations, the space is mapped rigidly into itself. As a matter of fact, the velocity spaces of Galilean relativity and of Einstein special relativity are both 3D maximally symmetric spaces, differing by the fact that the first has constant curvature $R=0$ and the latter has constant curvature $R=-1/c^2$. This difference has some important implications for the relativistic velocity distribution to be derived. We mention that the remaining possible case of a 3D maximally symmetric space is the spherical space ($R=1/c^2$), which apparently has no physical realization [@leblond]. In general we can assert that, since the velocity space is a Lobachevsky space, the relativistic addition theorem for velocity coincides with the vector addition theorem in the Lobachevsky geometry. Since the space is homogeneous and isotropic, we can consider that our additive variable is the magnitude of the relative velocity ${\bf v}$, at an arbitrary point taken as the origin.
This length is evaluated with the Lobachevsky metric (11), yielding [$$\begin{aligned} \label{eq11} s=\pm \tanh^{-1} (|{\bf v}|/c),\end{aligned}$$ ]{} which turns out to be the 3D extension of the additive random variable $\sigma=(1/2)\ln \Big(\frac{1+v/c}{1-v/c}\Big)$ used in the derivation of (\[eq4\]), and which also has zero mean. This Lorentz invariant quantity denotes the rapidity of ${\bf v}$ with respect to the origin. The plus or minus sign in (\[eq11\]) defines the direction of the rapidity. ![Plot of the 1D Lorentz invariant distribution (\[eq4\]) for a fixed mass and increasing temperatures (successively dashed, dashdotted and continuous curves, cf. text). The distribution is zero for $v^2/c^2 \geq 1$, as expected.[]{data-label="figdist1"}](evaldo-1D-New.pdf){height="15cm" width="13cm"} ![Plot of the 3D Lorentz invariant distribution (\[eq14\]) for increasing temperatures (successively dashed, dotted, dashdotted and continuous curves, cf. text). As the temperature increases the probability piles up towards $v \sim c$ (due to the constraint $v/c \leq 1$), presenting a delta-type divergence at $v=c$ as $T \rightarrow \infty$.[]{data-label="figdist2"}](evaldo-3D-New.pdf){height="15cm" width="13cm"} With the above ingredients (basically the homogeneity, isotropy and invariant volume element of the velocity space), and using the rapidity (\[eq11\]) as the additive variable, the 1D law (\[eq4\]) is generalized to [$$\begin{aligned} \label{eq13} P({\bf v})dV= C_3 e^{-\beta \Big( \tanh^{-1}(|{\bf v}|/c) \Big)^2} \gamma^4 dv^1 dv^2 dv^3,\end{aligned}$$ ]{} which is the 3D Lorentz invariant distribution law for velocities. In the coordinate system of (\[eq9\]), with $dV$ given by (\[eq10\]), we obtain the expression [$$\begin{aligned} \label{eq14} P(v)dv= 4 \pi C_3 e^{-\beta \Big( \tanh^{-1}(v/c) \Big)^2} \gamma^4 v^2 dv,\\ \nonumber\end{aligned}$$ ]{} where the factor $4 \pi$ comes from the integration over angles.
The normalization constant $C_3$ is given by [$$\begin{aligned} \label{eq15} C_3= \frac{{\sqrt \beta}}{c^3 {\pi}^{3/2} (e^{1/\beta} -1)}.\end{aligned}$$]{} Physically, for one particle, the parameter $\beta$, which must be Lorentz invariant, is taken as the ratio [$$\begin{aligned} \label{eq16} \beta=\frac{m_0 c^2}{2 k T},\end{aligned}$$ ]{} where $m_0$ is the rest mass of the particles of the ensemble considered, $T$ the temperature and $k$ the Boltzmann constant. The nonrelativistic limit corresponds to $v^2/c^2 \ll 1$ and $kT/m_0 c^2 \ll 1$, resulting in [$$\begin{aligned} \label{eq16b} (\tanh^{-1}(v/c)~)^2 \simeq v^2/c^2, ~~ C_{3}\simeq \frac{1}{c^3}(\beta/\pi)^{3/2},\end{aligned}$$ ]{} which reproduces the MB distribution. Although the components of the two velocities ${\bf v_1}$ and ${\bf v_2}$ appear nonlinearly in the Einstein addition law, the relative velocity ${\bf v}$ appearing in (\[eq11\]) is an invariant and independent variable, in the sense that the Lobachevsky space of velocities is mapped onto itself by a Lorentz transformation, this map being a rigid translation without fixed points, so that no correlation is introduced by a Lorentz transformation. Of course, additivity in the sense of the Lobachevsky space leads to a distribution which is not separable with respect to the components of ${\bf v}$. ![The means of $v^2/c^2$ using the MB (dashed curve) and the Lorentz invariant (continuous curve) 3D distributions. The mean $\langle v^2/c^2\rangle_{\rm rel}$ is asymptotically limited by $1$.
The dashdotted curve is the mean $\langle v_{\rm eff}^{2}/c^2 \rangle_{\rm rel}$ of the effective relativistic velocity (\[eq18\]) using the Lorentz invariant distribution.[]{data-label="fig3"}](evaldo-VM.pdf){height="15cm" width="13cm"}

| $kT/m_0 c^2$ | $\langle v_{\rm eff}^{2} \rangle_{\rm rel}/c^2$ | $\langle v^{2} \rangle_{\rm rel}/c^2$ | ${\langle v^{2} \rangle}_{MB}/c^2$ | $\Delta=(\langle v_{\rm eff}^{2} \rangle_{\rm rel}-{\langle v^{2} \rangle}_{MB})/\langle v_{\rm eff}^{2} \rangle_{\rm rel}$ |
|--------------|--------------------------------------------------|----------------------------------------|------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
| $0.01$ | $0.03032$ | $0.02922$ | $0.03$ | $1.08~\%$ |
| $0.10$ | $0.33529$ | $0.23927$ | $0.30$ | $10.52~\%$ |
| $0.25$ | $1.00139$ | $0.46505$ | $0.75$ | $25.10~\%$ |
| $0.50$ | $2.77437$ | $0.68145$ | $1.50$ | $45.93~\%$ |
| $1.00$ | $11.83122$ | $0.87657$ | $3.00$ | $74.64~\%$ |

In Figs. \[figdist1\] and \[figdist2\] we illustrate the behavior of the 1D and 3D Lorentz invariant distributions. Contrary to the nonrelativistic distributions, which extend to infinite velocities, the Lorentz invariant distributions are bounded by $v^2/c^2 \leq 1$. As a consequence, in the relativistic case, as the temperature increases (or $\beta$ decreases) the probability piles up towards $v^2/c^2 \sim 1$, diverging at $v^2/c^2=1$ as $\beta \rightarrow 0$. In both figures we adopted $m_0$ as the hydrogen mass. The temperatures used in Fig. \[figdist1\] were $T=3 \times 10^{11},~5 \times 10^{12}~{\rm and}~1.2 \times 10^{13}~K$, while in Fig. \[figdist2\] they were $T=5 \times 10^{11},~1 \times 10^{12},~3.5 \times 10^{12}~{\rm and}~5 \times 10^{12}~K$.
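The closed form (\[eq15\]) for $C_3$ can be checked numerically: in the rapidity variable $x=\tanh^{-1}(v/c)$ the volume element gives $\gamma^4 v^2\,dv = c^3\sinh^2 x\,dx$, so the normalization reduces to a one-dimensional integral. A sketch of this check (our own illustration, in units $c=1$; the Simpson-rule helper and the value $\beta=2$ are arbitrary choices):

```python
import math

def simpson(f, lo, hi, n=4001):
    """Composite Simpson rule on n (odd) equally spaced points."""
    h = (hi - lo) / (n - 1)
    s = f(lo) + f(hi)
    for i in range(1, n - 1):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def C3(beta):
    # Eq. (eq15) with c = 1
    return math.sqrt(beta) / (math.pi ** 1.5 * (math.exp(1.0 / beta) - 1.0))

beta = 2.0
# P(v) dv in the rapidity variable: 4*pi*C3 * exp(-beta x^2) * sinh(x)^2 dx
norm = 4 * math.pi * C3(beta) * simpson(
    lambda x: math.exp(-beta * x * x) * math.sinh(x) ** 2, 0.0, 10.0)
assert abs(norm - 1.0) < 1e-6   # the distribution integrates to unity
```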
Using the Lorentz invariant distribution (\[eq14\]) we now evaluate the mean of $v^2/c^2$, [$$\begin{aligned} \label{eq17} \langle v^2 \rangle_{\rm rel}/c^2= 4 \pi c^3 C_3 \int^{\infty}_{0} \exp(-\beta x^2)\tanh^{2} x~ \sinh^{2} x~ dx.\end{aligned}$$ ]{} In Fig. \[fig3\] we compare $\langle v^2 \rangle_{\rm rel}/c^2$ with the MB mean [${\langle v^2 \rangle}_{MB}=3 k T/m_0$]{}. We see that $\langle v^2 \rangle_{\rm rel}/c^2$ is asymptotically limited by $1$, in contrast with the straight line corresponding to the MB mean. As a consequence, when the Lorentz invariant distribution is considered, a given $\langle v^2 \rangle_{\rm rel}$ corresponds to a temperature which is higher than the temperature obtained with the MB mean. We note that a discrepancy of the order of $10\%$ between the two velocity means corresponds to a temperature $T \sim {10^{12}}~K$ for particles with the hydrogen mass. For the relativistic kinetic energy [$E_k= E-m_0 c^2$]{}, which includes higher order contributions in the velocity, we define an effective velocity $v_{\rm eff}$ by [$$\begin{aligned} \label{eq18} v_{\rm eff}^{2} \equiv \frac{2 E_k}{m_0}= 2 c^2 \Big(\gamma-1 \Big)=v^2 \Big(1+ \frac{3}{4}~\frac{v^2}{c^2}+... \Big),\end{aligned}$$ ]{} whose mean in the Lorentz invariant distribution is [$$\begin{aligned} \label{eq19} {\langle v_{\rm eff}^{2} \rangle_{\rm rel}}/c^2 = 2\Big(\frac{e^{1/4\beta} \sinh (1/\beta)}{1-e^{-1/\beta}}-1 \Big).\end{aligned}$$ ]{} The mean (\[eq19\]) has the MB mean as a lower bound, as illustrated in Fig. \[fig3\]. Actually (\[eq19\]) yields the relativistic extension for the mean kinetic energy, [$$\begin{aligned} \label{eq20} \langle E_k \rangle_{\rm rel}=\frac{1}{2} m_0 \langle v_{\rm eff}^{2} \rangle_{\rm rel} = m_0 c^2\Big(\frac{e^{1/4\beta} \sinh (1/\beta)}{1-e^{-1/\beta}}-1 \Big).\end{aligned}$$ ]{} For large $\beta$, the right hand side of (\[eq19\]) yields the MB limit $\langle v_{\rm eff}^{2} \rangle \simeq 3 k T/m_0$, as expected.
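As a consistency check of (\[eq19\]) and of Table I, the closed form can be compared with a direct numerical average of $2(\gamma-1)$ over the distribution in rapidity form (our own sketch, in units $c=1$; the row $kT/m_0c^2=0.25$ of Table I is used as the test value):

```python
import math

def simpson(f, lo, hi, n=4001):
    """Composite Simpson rule on n (odd) equally spaced points."""
    h = (hi - lo) / (n - 1)
    s = f(lo) + f(hi)
    for i in range(1, n - 1):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def veff2_closed(beta):
    # Eq. (eq19), units c = 1
    return 2.0 * (math.exp(0.25 / beta) * math.sinh(1.0 / beta)
                  / (1.0 - math.exp(-1.0 / beta)) - 1.0)

kT = 0.25                 # kT / (m0 c^2), third row of Table I
beta = 0.5 / kT           # Eq. (eq16)
C3 = math.sqrt(beta) / (math.pi ** 1.5 * (math.exp(1.0 / beta) - 1.0))

# <v_eff^2>/c^2 = 2(<gamma> - 1), with <gamma> averaged in the rapidity variable
mean_gamma = 4 * math.pi * C3 * simpson(
    lambda x: math.cosh(x) * math.exp(-beta * x * x) * math.sinh(x) ** 2,
    0.0, 10.0)
veff2_num = 2.0 * (mean_gamma - 1.0)

assert abs(veff2_closed(beta) - 1.00139) < 1e-4   # matches Table I
assert abs(veff2_num - veff2_closed(beta)) < 1e-6  # integral agrees with (eq19)
```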
The exact equation (\[eq20\]) corrects the mean relativistic energy expression given in Ref. [@tolman], where the MB distribution was used. Some comments are in order now. As mentioned before, a given mean $\langle v^{2} \rangle_{\rm rel}$ is associated with a higher temperature than the MB mean would indicate, therefore leading to an even higher relativistic kinetic energy than the one obtained with the Maxwell-Boltzmann distribution. This has an important consequence for virialized gravitational systems since, for a fixed temperature and gravitational radius, the mass of the virialized system turns out to be larger for the Lorentz invariant distribution than for the MB distribution, as illustrated in Table I. For a difference $\Delta$ of the order of $10\%$, which might lead to substantial observable effects, we obtain temperatures $T \simeq 1.062 \times 10^{12}~K$ and $T \simeq 5.93 \times 10^{8}~K$, respectively for hydrogen and electrons in thermal equilibrium. Such high temperatures can possibly occur in astrophysical scenarios. Temperatures presently observed in astrophysical systems, say $T \sim {10^{8}}~K$, yield discrepancies $\Delta \simeq 0.001\%$ and $\Delta \simeq 1.81\%$, respectively for hydrogen and electrons. A distinct scenario where relativistic effects could be dominant is the quark-gluon plasma formed in ultrarelativistic nucleus-nucleus collisions, under the assumption of local thermal equilibrium. The temperatures involved in these systems are of the order of $10^{15}~K$[@bjorken; @adcox]. Finally, a corollary following from our derivation (\[eq14\]) is the behavior of the temperature under a Lorentz transformation. Since the parameter $\beta$ in (\[eq16\]) must be invariant under a Lorentz transformation, and since we assume that the Boltzmann constant is a relativistic invariant, two outcomes are possible.
Either (i) we take $m_0^2 c^2 \equiv E^2/c^2 -p^2$, the invariant squared norm of the four-momentum, implying that the temperature is also invariant, or (ii) we identify the numerator of (\[eq16\]) with the frame-dependent energy $\gamma m_0 c^2$, implying that $T \rightarrow T^{\prime}=\gamma T$. We favor option (i), in accord with the considerations of Landsberg [@landsberg]. We are grateful to Prof. Constantino Tsallis for the stimulating discussions and suggestions, which were fundamental to the development of this paper. We also acknowledge the Brazilian scientific agencies CNPq, FAPERJ and CAPES for financial support. [99]{} J. C. Maxwell, Philosophical Magazine XIX (1860) 19-32 and XX (1860) 21-37. L. Boltzmann, Wiener Berichte 66 (1872) 275-370. L. Boltzmann, Wiener Berichte 76 (1877) 373-435. R. Balian, [*From Microphysics to Macrophysics: Methods and Applications of Statistical Mechanics*]{}, Vol. I, Springer Verlag (Berlin, 1991). L. E. Reichl, [*A Modern Course in Statistical Mechanics*]{}, John Wiley & Sons (New York, 1998). V. Fock, [*The Theory of Space, Time and Gravitation*]{}, Pergamon Press (Oxford, 1964), Chapter I, Sec. 17. L. P. Eisenhart, [*Continuous Groups of Transformations*]{} (Dover, New York, 1961), Chapter V, Sec. 53. J. M. Lévy-Leblond and J. P. Provost, Amer. J. Phys. 47, 1 (1979). R. C. Tolman, [*The Principles of Statistical Mechanics*]{} (Dover, New York, 1979). P. T. Landsberg, [*Thermodynamics and Statistical Mechanics*]{} (Dover, New York, 1990). L. D. Landau, Izv. Akad. Nauk SSSR, Ser. Fiz. 17, 51 (1953); J. D. Bjorken, Phys. Rev. D 27, 140 (1983). K. Adcox [*et al.*]{} (PHENIX Collaboration), Phys. Rev. Lett. 88, 022301 (2001); C. Adler [*et al.*]{} (STAR Collaboration), Phys. Rev. Lett. 89, 202301 (2002).
[**Current conservation in two-dimensional\ AC-transport**]{} Jian Wang and Qingrong Zheng [*Department of Physics,\ The University of Hong Kong,\ Pokfulam Road, Hong Kong.* ]{} Hong Guo [*Centre for the Physics of Materials,\ Department of Physics, McGill University,\ Montreal, Quebec, Canada H3A 2T8.* ]{} The electric current conservation in a two-dimensional quantum wire under a time dependent field is investigated. Current conservation is obtained when the global density of states contribution to the emittance is balanced by the contribution due to the internal charge response inside the sample. However, when the global partial density of states is computed approximately from the scattering matrix alone, correction terms are needed to obtain precise current conservation. We derive these corrections analytically using a specific two-dimensional system. We find that when the incident energy $E$ is within the first subband, our result reduces to the one-dimensional result. As $E$ approaches the $n$-th subband with $n>1$, the correction term diverges. This explains the systematic deviation from precise current conservation observed in a previous numerical calculation. [PACS number: 72.10.Bg, 73.20.Dx, 73.40.Gk, 73.40.Lq]{} Introduction ============ The dynamic conductance of a quantum coherent mesoscopic system under a time dependent external field is the subject of recent interest[@but1; @bruder; @pieper; @chen; @wang1]. In contrast to dc-transport, where the internal potential distribution inside the sample does not appear explicitly, the AC-response depends sensitively on the internal potential distribution. This internal potential is due to the charge distribution generated by the applied AC-field at the leads, and it has to be determined self-consistently[@but1]. So far there are two approaches to the coherent AC-transport problem. One is to derive a formal linear response to a given potential distribution in the sample[@baranger].
The difficulty with such an approach is that the potential distribution is not known a priori. Another approach is to investigate the AC-response to an external perturbation which prescribes the potentials in the reservoirs only[@pastawski; @but1]. The external potentials effectively determine the chemical potentials of the reservoirs, and the potential distribution in the conductor must be considered a part of the response, to be calculated self-consistently. In this approach, Büttiker and his coworkers[@but1; @but3] have formulated a current conserving formalism for the low frequency admittance of mesoscopic conductors. In the theory of Büttiker, Prêtre and Thomas[@but1], it is necessary to consider the Coulomb interactions between the many charges inside the sample in order to preserve current conservation. For a multi-probe conductor the low frequency admittance is found to have the form[@but3; @but4] $G_{\alpha\beta}(\omega)=G_{\alpha\beta}(0)-i\omega E_{\alpha\beta} +O(\omega^2)$, where $G_{\alpha \beta}(0)$ is the dc-conductance, $E_{\alpha \beta}$ is the emittance[@but3], and $\alpha$ (or $\beta$) labels the probe. The emittance $E_{\alpha \beta}$ describes, to leading order in the frequency $\omega$, the current response at probe $\alpha$ due to a variation of the electro-chemical potential at probe $\beta$. It can be written as[@but3; @wang1] $E_{\alpha \beta} = dN_{\alpha \beta}/dE - D_{\alpha \beta}$, where the term $dN_{\alpha \beta}/dE$ is the global partial density of states (GPDOS)[@gas1], which is related to the scattering matrix. It describes the density of states of carriers injected at probe $\beta$ and reaching probe $\alpha$, and is a purely kinetic term. The term $D_{\alpha \beta}$ is due to the Coulomb interaction of electrons inside the sample and is of capacitive nature. $D_{\alpha\beta}$ can be computed from the local density of states[@but1; @but3], which is related to the electron dwell times.
Electric current conservation, namely $\sum_{\alpha}G_{\alpha\beta}(\omega)=0$, means that $\sum_{\alpha}E_{\alpha\beta}=0$, or equivalently[@but1; @ian] $$\frac{dN_{\beta}}{dE}\equiv \sum_{\alpha} \frac{dN_{\alpha \beta}}{dE} = \sum_{\alpha} D_{\alpha\beta}=\frac{\tau_{d,\beta}}{h} \label{eq1}$$ where $dN_{\beta}/dE$ is the DOS and $\tau_{d,\beta}$ is the dwell time for particles coming from probe $\beta$. Current conservation is thus established once one realizes that $\sum_{\alpha}dN_{\alpha\beta}/dE$ is the physical quantity called the injectance, which is identical[@but3] to $\sum_{\alpha}D_{\alpha\beta}$. Applying the above formalism to mesoscopic conductors, one needs to compute various physical quantities[@wang1] such as the partial density of states. These quantities have vivid physical meaning[@wang1] but are not easy to obtain exactly. For a large system, the GPDOS can be expressed approximately in terms of the energy derivative of the scattering matrix elements[@avishai]: $$\frac{dN_{\alpha\beta}}{dE} = \frac{1}{4\pi i} \left( s_{\alpha\beta}^{\dagger}\frac{ds_{\alpha \beta}}{dE} - \frac{ds_{\alpha\beta}^{\dagger}}{dE}s_{\alpha\beta}\right)\ \ . \label{eq2}$$ Because the scattering matrix can often be obtained for a given system, Eq.(\[eq2\]) provides a practical means of computing the GPDOS. On the other hand, in order to obtain current conservation [*precisely*]{}, a correction should be added to Eq. (\[eq2\]); this correction can be neglected for large systems and large energies[@gas1; @gas2]. For one-dimensional systems, such a correction has been derived by Gasparian [*et al.*]{}[@gas2]; it contains the reflection amplitude divided by the energy, $$\frac{dN_{\alpha}}{dE} = \frac{d\bar{N}_{\alpha}}{dE} + Im \{\frac{s_{\alpha \alpha}}{4\pi E} \}\ \ , \label{eq3}$$ where $d\bar{N}_{\alpha}/dE\equiv \sum_{\beta}dN_{\alpha\beta}/dE$ is computed from Eq. (\[eq2\]).
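The role of the 1D correction in (\[eq3\]) can be illustrated on the simplest example, a $\delta$-barrier of strength $\gamma$ centered in a scattering region of length $L$ (our own sketch; units $\hbar=1$, $m=1/2$, so $h=2\pi$ and $E=k^2$; the parameter values are illustrative). The approximate DOS (\[eq2\]) is evaluated by a numerical energy derivative and compared with the exact dwell-time DOS $\tau_d/h$:

```python
import math, cmath

# 1D delta barrier gamma*delta(x), scattering region [-L/2, L/2]
gam, L = -1.0, 1.0

def smat(E):
    k = math.sqrt(E)
    b = gam / (2j * k - gam)             # reflection amplitude
    ph = cmath.exp(1j * k * L)
    return b * ph, (1.0 + b) * ph        # s_11, s_12

E = 2.0
s11, s12 = smat(E)
assert abs(abs(s11) ** 2 + abs(s12) ** 2 - 1.0) < 1e-12   # unitarity

# approximate DOS, Eq. (eq2), via a central-difference energy derivative
dE = 1e-5
dbarN = 0.0
for i in range(2):
    s = smat(E)[i]
    ds = (smat(E + dE)[i] - smat(E - dE)[i]) / (2 * dE)
    dbarN += ((s.conjugate() * ds - ds.conjugate() * s) / (4j * math.pi)).real

# exact DOS = dwell time / h, with incident flux 2k for unit amplitude
k = math.sqrt(E)
b = gam / (2j * k - gam)
tau = L / (2 * k) + (b * (cmath.exp(1j * k * L) - 1) / (2j * k ** 2)).real
dN_exact = tau / (2 * math.pi)

# Eq. (eq3): the correction term restores precise current conservation
assert abs(dbarN + (s11 / (4 * math.pi * E)).imag - dN_exact) < 1e-8
```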
We have recently applied the above current conserving formalism to a [*two-dimensional*]{} mesoscopic conductor in the shape of a T-junction[@wang1]. To the best of our knowledge, this was the first first-principles 2D calculation of this kind. Among other things, an interesting and, we believe, useful discovery was that Eq.(\[eq3\]) turned out to be inaccurate in 2D. First of all, the energy $E$ in the second term on the right hand side of Eq.(\[eq3\]) has to be replaced by the longitudinal part of the incident energy. Even with this change, there were small but systematic deviations from precise current conservation as the energy approached the second subband. In fact it was found that the DOS $d\bar{N}_{\alpha}/dE$ as defined above diverges near the onset of the second subband, and this led to the observed systematic deviations[@wang1]. We are not aware of any 2D theory accounting for the correction term which should appear in Eq. (\[eq3\]). The purpose of this paper is to investigate such correction terms in two dimensions. This not only provides further theoretical insight into the problem of AC-transport, but is also very helpful from a practical point of view. From our own experience, numerical AC-transport calculations can be quite tricky, and being able to obtain precise electric current conservation often serves as a very stringent check on numerical results. For this purpose, we have considered the simplest two-dimensional model, a $\delta$-potential inside a quasi-1D ballistic conductor[@bag]. Since quantum scattering in this system leads to mode mixing, the basic feature of a two-dimensional system, it provides answers to our 2D problem. The advantage of this system is that it can be solved exactly. We have thus derived the correction term analytically. In particular we found that when the incident energy $E$ is within the first subband, our result essentially reduces to the one-dimensional result Eq. (\[eq3\]).
As $E$ is increased to approach the $n$-th subband edge with $n>1$, the correction term diverges. This explains the systematic deviation observed in our previous numerical calculation[@wang1]. The paper is organized as follows. In the next section we present the solution of the 2D scattering problem and derive the correction term. Section III contains our numerical tests of the analytical formula. The last section serves as a summary. Model and results ================= Figure 1 shows the system: a $\delta$-potential confined inside a quasi-1D wire of width $a$. We assume, for simplicity of the calculation, that the boundaries of the ballistic conductor are hard walls, [*i.e.*]{} the potential $V=\infty$. Inside the conductor the potential is zero, except that a $\delta$ function potential $V(x,y) = \gamma \delta(x) \delta(y-y_0)$ is placed at $\vec{r}=(0,y_0)$. The scattering region $x_1 < x < x_2$ is assumed to be symmetric, with $x_2 = -x_1 =L/2$. From now on we set $\hbar= 1$ and $m=1/2$ to fix our units. To compute the transmission and reflection amplitudes, and thus the scattering matrix, a mode matching method[@schult] is employed. The electron wave functions are written as follows. For region I (see figure 1): $$\Psi_I = \sum_n \chi_n(y) \left(a_n e^{i k_n x} + b_n e^{-i k_n x} \right) \ \ ,$$ where $\chi_n(y)$ is the wave function of the $n$-th subband along the $y$-direction; $a_n$ is the incoming wave amplitude, taken as an input parameter; $b_n$ is the reflection amplitude; and $k_n$, with $k_n^2 = E - (n\pi/a)^2$, is the longitudinal momentum of the $n$-th mode. Note that for an electron traveling in the first subband, $k_n$ with $n>1$ is purely imaginary. Similarly for region II: $$\Psi_{II} = \sum_n \chi_n(y) \left(c_n e^{i k_n x} + d_n e^{-i k_n x} \right) \ \ ,$$ where $c_n$ is the transmission amplitude and $d_n$ is set to zero in our calculation.
After matching the boundary conditions at $x=0$, we obtain $$a_n+b_n = c_n$$ and $$i k_n c_n - i k_n (a_n - b_n) = \sum_m \Gamma_{nm} (a_m +b_m) \ \ ,$$ where $\Gamma_{nm} = \gamma \chi^*_n(y_0) \chi_m(y_0)$. Eliminating $c_n$, we have $$\vec{e} = P \vec{b} \ \ ,$$ where $e_n = -\sum_m \Gamma_{nm} a_m$ and $P_{nm} = \Gamma_{nm} -2 i k_n \delta_{nm}$. To find $\vec{b}$ we need to compute $P^{-1}$. We introduce a new matrix $\tilde{P} \equiv I + M$, with $M_{nm} = i \Gamma_{nm}/(2k_m)$, so that $\tilde{P}_{nm} (-2ik_m) = P_{nm}$. Expanding $\tilde{P}^{-1}$ in powers of $M$, we have $$\tilde{P}^{-1} = \frac{1}{I+M} = I - M + M^2 - M^3 ...$$ Since $\Gamma_{nm} \Gamma_{ml} = \Gamma_{nl} \Gamma_{mm}$, we find that $M^2 = (\alpha -1) M $ where $\alpha = 1+i \sum_n \Gamma_{nn}/(2k_n)$, from which we have $\tilde{P}^{-1} = I-M/\alpha$. Finally, we obtain the matrix elements $$(P^{-1})_{nm} = \frac{i}{2k_n} (\delta_{nm} - \frac{i \Gamma_{nm}} {2 k_m \alpha} )\ \ . \label{eqp}$$ We now specialize to an incident electron in the first subband: $a_n = \delta_{n1}$. Using Eq.(\[eqp\]), the reflection and transmission amplitudes are $$b_n = \sum_m (P^{-1})_{nm} e_m = \frac{-i \Gamma_{n1}}{2k_n \alpha}\ \ ,$$ $$c_n = \delta_{n1} +b_n\ \ .$$ For our system the scattering matrix elements $s_{\alpha \beta}$ are given by $s_{11} = b_1 \exp(ik_1 L)$ and $s_{12} = c_1 \exp(i k_1 L)$. The approximate DOS becomes, using Eq. (\[eq2\]), $$\begin{aligned} \frac{d\bar{N}_{\alpha}}{dE} & = & \frac{1}{4\pi i} \sum_{\beta}\left(s_{\alpha \beta}^{\dagger} \frac{ds_{\alpha \beta}}{dE} - \frac{ds_{\alpha\beta}^{\dagger}}{dE} s_{\alpha\beta} \right) \nonumber \\ & = &\frac{L}{4\pi k_1} - Im\left(\frac{b_1}{4\pi k_1^2}\right) - \frac{1}{4\pi} \sum_{n=2} \frac{|b_n|^2}{i k_1 k_n} \ \ , \label{dosbad}\end{aligned}$$ where the sum runs over the evanescent modes. To derive this expression we have used the relation $2b_1^* +1 = \alpha /\alpha^*$, which follows directly from the unitarity of the scattering matrix.
Next we compute the dwell time and hence the precise DOS (as opposed to the approximate DOS of Eq. (\[dosbad\])): $$\begin{aligned} \tau_{d,1} &=& \frac{1}{2k_1}\left(\int_I |\Psi_I|^2 dx dy + \int_{II} |\Psi_{II}|^2 dx dy\right) \nonumber \\ & = & \frac{L}{2k_1} + Re (b_1 \frac{e^{i k_1 L}-1}{2ik_1^2}) + \sum_{n=2} |b_n|^2 \frac{e^{ik_n L}-1}{2ik_1 k_n}\ \ , \label{dosgood}\end{aligned}$$ where $2k_1$ is the incident flux for unit incident amplitude, and the sum again runs over the evanescent modes. From Eqs.(\[eq1\]),(\[dosbad\]) and (\[dosgood\]), we arrive at the following central result of this work, $$\frac{dN_{\alpha}}{dE} = \frac{d\bar{N}_{\alpha}}{dE} + Im \{\frac{s_{\alpha \alpha}}{4\pi k_1^2}\} + \frac{1}{4\pi} \sum_{n=2} \frac{|b_n|^2}{i k_1 k_n} e^{i k_n L}\ \ . \label{result}$$ Hence we find that for this 2D system there are two correction terms to the DOS. Clearly the first correction term, [*i.e.*]{} the 2nd term on the right hand side of Eq. (\[result\]), is generic, as it can be written in terms of the scattering matrix element. This term is similar to the corresponding term in Eq. (\[eq3\]) of the 1D case, except that the total energy $E$ in Eq. (\[eq3\]) is now replaced by the transport energy $k_1^2$. In fact this term was guessed in our earlier work[@wang1]. There is a second correction term (the 3rd term of Eq.(\[result\])) which arises solely from the mode mixing in our 2D system, and understandably it does not exist in the 1D case[@gas2]. For small incident energies, [*i.e.*]{} as $k_1$ goes to zero, $|b_n|^2 \sim k_1^2$ for $n>1$. Therefore the second correction term of (\[result\]) is actually negligible at small energies. Indeed, this is the case in our earlier numerical calculations[@wang1], where current conservation was very well satisfied at low energies using Eq.(\[eq3\]). However, as the energy approaches the $n$-th subband edge ($k_n \rightarrow 0$ with $n>1$), $|b_n|^2$ remains finite. Hence, according to Eq. (\[result\]), the second correction term diverges at these higher subband edges.
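Equation (\[result\]) can be verified numerically by solving the model of Fig. 1 directly (our own sketch; the mode cutoff $N$ is needed to regularize the 2D $\delta$-potential, and the values of $N$ and $E$ are illustrative choices; units $\hbar=1$, $m=1/2$, so $h=2\pi$):

```python
import math, cmath

a, y0, gam, L = 1.0, 0.3, -1.0, 1.0
N = 30        # evanescent-mode cutoff (assumed truncation)
E = 20.0      # within the first subband: pi^2 < E < (2 pi)^2

def k(n, En):
    return cmath.sqrt(En - (n * math.pi / a) ** 2)   # +i*kappa_n if evanescent

def chi(n):
    return math.sqrt(2.0 / a) * math.sin(n * math.pi * y0 / a)  # chi_n(y0)

def amplitudes(En):
    """Reflection amplitudes b_n = -i Gamma_{n1} / (2 k_n alpha)."""
    alpha = 1.0 + 1j * sum(gam * chi(n) ** 2 / (2.0 * k(n, En))
                           for n in range(1, N + 1))
    return [-1j * gam * chi(n) * chi(1) / (2.0 * k(n, En) * alpha)
            for n in range(1, N + 1)]

def smat(En):
    b1 = amplitudes(En)[0]
    ph = cmath.exp(1j * k(1, En) * L)
    return b1 * ph, (1.0 + b1) * ph                  # s_11, s_12

s11, s12 = smat(E)
assert abs(abs(s11) ** 2 + abs(s12) ** 2 - 1.0) < 1e-12   # unitarity

# approximate DOS, Eq. (eq2), via a central-difference energy derivative
dE = 1e-4
dbarN = 0.0
for i in range(2):
    s = smat(E)[i]
    ds = (smat(E + dE)[i] - smat(E - dE)[i]) / (2 * dE)
    dbarN += ((s.conjugate() * ds - ds.conjugate() * s) / (4j * math.pi)).real

# exact DOS from the dwell time, Eq. (dosgood): dN/dE = tau_d / h
b, k1 = amplitudes(E), k(1, E)
tau = (L / (2 * k1)).real \
    + (b[0] * (cmath.exp(1j * k1 * L) - 1) / (2j * k1 ** 2)).real
for n in range(2, N + 1):
    kn = k(n, E)
    tau += (abs(b[n - 1]) ** 2 * (cmath.exp(1j * kn * L) - 1)
            / (2j * k1 * kn)).real
dN_exact = tau / (2 * math.pi)

# the two correction terms of Eq. (result)
corr = (s11 / (4 * math.pi * k1 ** 2)).imag
for n in range(2, N + 1):
    kn = k(n, E)
    corr += (abs(b[n - 1]) ** 2 * cmath.exp(1j * kn * L)
             / (1j * k1 * kn)).real / (4 * math.pi)

assert abs(dbarN - dN_exact) > 1e-4          # Eq. (eq2) alone is not conserved
assert abs(dbarN + corr - dN_exact) < 1e-6   # corrected DOS matches tau_d / h
```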
This explains the observation in our calculation[@wang1] that systematic numerical errors exist in current conservation near the 2nd subband edge. For energies within the first subband, as mentioned above, $k_n$ is purely imaginary for all $n>1$. Hence for a large system size $L$, the factor $\exp(i k_n L)$ is very small as long as $k_n\neq 0$. However, we emphasize that the second correction term becomes dominant very near each subband edge, and thus must be included in order to obtain precise current conservation. Finally we note that Eq.(\[result\]) is not coordinate independent, so care must be taken when using it. For instance, if we choose $x_1$ as the origin in figure 1, the factor $\exp(i k_n L)$ in the last term of Eq.(\[result\]) will be canceled by the coordinate shift, while the second term of Eq.(\[result\]) remains the same. In this sense, the new correction term is not generic and must be computed case by case for 2D systems. Numerical test ============== To gain further intuitive impression of the AC-transport, and in particular to check our analytical formula Eq.(\[result\]), we first present direct numerical calculations of the admittance for the quantum wire system studied in the last section (Fig.(1)). Obviously, since this problem was solved exactly above, agreement with Eq.(\[result\]) is guaranteed. We then study the validity of Eq.(\[result\]) using another, more complicated 2D conductor in the shape of a T-junction (see below). Indeed, although Eq.(\[result\]) was derived using the specific example of Fig.(1), it dramatically improves the current conservation near the second subband edge for the T-junction as well. In order to compute the admittance, we have to know $D_{\alpha \beta}$, which is related to the dwell time[@but1; @wang1]. For a metallic conductor, it is appropriate to use the Thomas-Fermi approximation.
Under such an approximation $D_{\alpha \beta}$ is given by[@but1; @but3] $$D_{\alpha, \beta} = \int d^3r \frac{(dn(\alpha, \vec{r})/dE ) (dn(\vec{r}, \beta)/dE)} {dn(\vec{r})/dE} \ \ , \label{d11}$$ where the local density of states $dn(\vec{r},\beta)/dE$ is the injectivity, which measures the additional local charge density brought into the sample at point $\vec{r}$ by the oscillating chemical potential at probe $\beta$. The injectivity can be expressed as[@but1] $$\frac{dn(\vec{r}, \beta)}{dE} = \sum_n \frac{|\Psi_{\beta n}(\vec{r})|^2} {2\pi v_{\beta n}} \ \ , \label{dn}$$ where $v_{\beta n}$ is the velocity of carriers at the Fermi energy in mode $n$ of probe $\beta$. $dn(\alpha, \vec{r})/dE$ is called the emissivity; it describes the local density of states of carriers at point $\vec{r}$ which are emitted by the conductor at probe $\alpha$. It is defined as $$\frac{dn(\alpha, \vec{r})}{dE} = -\frac{1}{4\pi i} \sum_{\beta} Tr \left[ s_{\alpha \beta}^{\dagger} \frac{\delta s_{\alpha \beta}}{e \delta U(\vec{r})} - \frac{\delta s_{\alpha \beta}^{\dagger}}{e \delta U(\vec{r})} s_{\alpha \beta} \right] \ \ .$$ It has been shown[@but4] that in the absence of a magnetic field the injectivity is equal to the emissivity. Using Eqs.(\[dosbad\]), (\[d11\]) and (\[dn\]), we can calculate the emittance. Specifically, for the system of Fig.(1) we consider an incident electron coming from probe 1 and set $a=L=1$, $y_0=0.3$, and $\gamma=-1$. In Fig.(2), we plot the global DOS together with the transmission coefficient $T$. As expected, the transmission coefficient $T(E)$ (solid line) has large values for almost all energies $E$, except at a special energy $E_r$ where we have complete reflection (reflection coefficient $R(E_r)=1$) due to a resonant state. This can also be seen from the behavior of the global partial DOS for reflection, $dN_{11}/dE$ (dotted line), which peaks when $T(E=E_r)=0$.
On the other hand, $dN_{21}/dE$ (dashed line), which is the global partial DOS for transmission, takes its minimum value at $E=E_r$. This behavior is consistent with that of a 1D system made of a symmetric scatterer[@gas1], where one has $dN_{11}/dE \sim R dN/dE$ and $dN_{21}/dE\sim T dN/dE$. In Fig.(3), the quantities $D_{11}$ (solid line) and $D_{12}$ (dotted line) are shown. Both curves reach maximum values near the resonant point $E_r$, which is expected since $D_{\alpha \beta}$ are proportional to the dwell time or the DOS. The emittance $E_{\alpha \beta}$ is plotted in Fig.(4). Both $E_{11}$ (solid line) and $E_{12}$ (dotted line) reach extremal values at the resonant point. The system responds differently at different energies: capacitively when $E_{11}=-E_{12} >0$, and inductively otherwise. From Fig.(4), we observe that near the resonance $E_{11}$ and $E_{12}$ respond capacitively, while $E_{12}$ is inductive away from this resonance energy. This behavior, namely being capacitive at the $T\approx 0$ resonance, is the same as that observed in the 2D T-junction[@wang1]. On the other hand, for a 1D tunneling system[@but1] the response is inductive at its resonance; but in that case the resonance is marked by a transmission coefficient near unity. Finally, to confirm electric current conservation, the two curves of Fig.(4) must essentially add to zero. As the figure shows, these curves do not cancel each other, precisely because of the approximate nature of the partial density of states obtained using Eq. (\[eq2\]). After including the two corrections to the DOS as derived in Eq. (\[result\]), however, we did obtain perfect current conservation over the whole energy range. This is not surprising since, after all, (\[result\]) is an exact result for this quantum system. Our main result Eq.(\[result\]) is derived using the specific simple example shown in Fig.(1).
There seems to be no special reason for Eq.(\[result\]) to apply to other 2D systems, since the form of the new correction term is given by the amplitudes of the non-propagating modes inside the scattering junction (as opposed to the more general scattering matrix elements), and these evanescent amplitudes probably depend on the scatterer in some fashion. In this sense it is unfortunate that a more general form was not obtained. However, since the new correction term does explain, qualitatively, the observed[@wang1] discrepancy of using Eq. (\[eq3\]) as discussed above, it is tempting to test it on the more complicated 2D system of the T-junction studied previously[@wang1]. As the T-junction has been reviewed and studied by many authors[@sols; @wang1] in various contexts, here we shall not present the details of its calculation. For this purpose, we have checked the current conservation of the T-junction using Eq.(\[eq3\]) and compared with the result obtained using Eq.(\[result\]). In Fig.(5), we have plotted the DOS $d\bar{N}_1/dE$ given by Eq.(\[eq3\]) (dotted line) and by[@foot1] Eq. (\[result\]) (solid line), and the dwell time $\tau_{d,1}/2\pi$ (dashed line). Although the result Eq. (\[result\]) is model dependent, we observe that the agreement is clearly better. This suggests that the new correction term does capture the essential ingredient of the correction, although it is not completely universal: the evanescent amplitudes depend on the peculiarities of a 2D system in some weak way, leading to the small remaining difference. Summary ======= In summary, we have investigated electric current conservation in a two-dimensional ballistic conductor under a time dependent field. Similar to the 1D case, we found that in order to obtain precise current conservation, certain corrections to the density of states obtained approximately from the scattering matrix must be included.
We have derived these corrections analytically for a specific two-dimensional system and found that there are two correction terms. One of the correction terms has the same form as in the 1D case, while the second is purely due to the mode mixing characteristic of 2D quantum scattering. In particular, when the incident energy $E$ is within the first subband, our result essentially reduces to the one-dimensional result if $E$ is not too high. On the other hand, as $E$ approaches the $n$-th subband with $n>1$, the correction term diverges at the subband edges. Hence in 2D the mode mixing leads to important changes in the global density of states and must be included if precise electric current conservation is desired. Finally, the new correction term found here provides a qualitative explanation for the small but systematic deviation from precise current conservation observed in our previous numerical calculations[@wang1] on a 2D quantum wire in the shape of a T-junction. Indeed, our numerical test produced better agreement when the new formula derived here is used. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Prof. M. Büttiker for helpful communications and discussions. We gratefully acknowledge support by a RGC grant from the Government of Hong Kong under grant number HKU 261/95P, a research grant from the Croucher Foundation, the Natural Sciences and Engineering Research Council of Canada and le Fonds pour la Formation de Chercheurs et l’Aide à la Recherche de la Province du Québec. We thank the Computer Center of the University of Hong Kong for computational facilities. [00]{} M. Büttiker, A. Prêtre and H. Thomas, Phys. Rev. Lett. [**70**]{}, 4114 (1993); M. Büttiker and H. Thomas, in [*Quantum-Effect Physics, Electronics and Applications*]{}, Eds. K. Ismail [*et al.*]{}, (Institute of Physics Conference Series Number 127, Bristol, 1992), pp. 19; M. Büttiker, H. Thomas and A. Prêtre, Z. Phys. B.
[**94**]{}, 133 (1994). C. Bruder and H. Schoeller, Phys. Rev. Lett. [**72**]{}, 1076 (1994). J. B. Pieper and J. C. Price, Phys. Rev. Lett. [**72**]{}, 3586 (1994). W. Chen, T. P. Smith, M. Büttiker, and M. Shayegan, Phys. Rev. Lett. [**73**]{}, 146 (1994). Jian Wang and Hong Guo, cond-mat/9608158. H. U. Baranger and A. D. Stone, Phys. Rev. B [**40**]{}, 8169 (1989); D. S. Fisher and P. A. Lee, Phys. Rev. B [**23**]{}, 6851 (1981); J. Cohen and Y. Avishai, J. Phys. Condens. Matter [**7**]{}, L121 (1995). H. Pastawski, Phys. Rev. B [**46**]{}, 4053 (1992). M. Büttiker, J. Phys. Condens. Matter [**5**]{}, 9361 (1993). M. Büttiker and T. Christen, cond-mat/9601075. V. Gasparian, T. Christen and M. Büttiker, cond-mat/9603135. G. Iannaccone, Phys. Rev. B [**51**]{} 4727 (1995). Y. Avishai and Y. B. Band, Phys. Rev. B [**32**]{}, 2674 (1985). V. Gasparian [*et. al.*]{} Phys. Rev. B [**51**]{}, 6743 (1995). P. F. Bagwell, Phys. Rev. B [**41**]{}, 10354 (1990). R. L. Schult, D. G. Ravenhall, and H. W. Wyld, Phys. Rev. B [**39**]{}, 5476 (1989). See, for example, F. Sols, M. Macucci, U. Ravaioli and K. Hess, Appl. Phys. Lett. [**54**]{}, 350 (1989); Jian Wang, Yongjiang Wang and Hong Guo, Appl. Phys. Lett. [**65**]{}, 1793 (1994). To apply Eq. (\[result\]) for the T-junction, we have substituted the appropriate amplitudes $b_n$ for the T-junction into (\[result\]). Figure Captions {#figure-captions .unnumbered} =============== - Schematic plot of the quantum wire system: a $\delta$ potential $\gamma\delta(\vec{r}-\vec{r_0})$ is confined inside a quasi-1D quantum wire, with $\vec{r_0} = (0,y_0)$. The wire width is $a$. The scattering region is between $x_1$ and $x_2$, where $x_2=-x_1=L/2$. In our calculations, the parameters are set to $L=a=1$, $y_0=0.3$, and $\gamma=-1.0$. - The global partial density of states and the transmission coefficient as functions of electron energy $E$. 
Solid line: transmission coefficient $T$; dotted line: $dN_{11}/dE$; dashed line: $dN_{21}/dE$. The unit of energy is $\hbar^2/2ma^2$. - The current response to the internal potential, $D_{\alpha\beta}$, as a function of energy $E$. Solid line: $D_{11}$; dotted line: $D_{21}$. - The dynamic part of the admittance, $E_{\alpha\beta}\equiv dN_{\alpha\beta}/dE-D_{\alpha\beta}$, as a function of energy. - A numerical check of the electric current conservation, Eq. (\[eq1\]), for the T-junction studied in Ref. [@wang1]. Solid line: $dN_1/dE$ as obtained by Eq. (\[result\]); dotted line: $\tau_d/h\ =\ \sum_{\alpha}D_{\alpha 1}/h$. Agreement of the two curves indicates the conservation. The remaining small differences between the two curves at the high end of the energy range indicate that the new correction term in Eq. (\[result\]) has a weak non-universal dependence on the 2D system shape.
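The density-of-states quantities compared above are obtained from energy derivatives of the scattering matrix. As a minimal, self-contained 1D sketch of this procedure — not the 2D wire of the text — the script below builds the analytic S-matrix of a $\delta$-barrier (units $\hbar = 2m = 1$ and the barrier strength are illustrative assumptions) and checks that the trace formula $(1/2\pi i)\,{\rm Tr}[S^{\dagger}\, dS/dE]$ reproduces the analytic eigenphase derivative:

```python
import math

def s_matrix(E, gamma):
    """Analytic 2x2 S-matrix of a 1D delta barrier (units hbar = 2m = 1)."""
    k = math.sqrt(E)
    t = 1.0 / (1.0 + 1j * gamma / (2.0 * k))   # transmission amplitude
    r = t - 1.0                                # reflection amplitude
    return [[r, t], [t, r]]

def dos_from_smatrix(E, gamma, dE=1e-6):
    """(1/2pi i) Tr[S^dagger dS/dE], with dS/dE by central finite differences."""
    Sp, Sm, S = s_matrix(E + dE, gamma), s_matrix(E - dE, gamma), s_matrix(E, gamma)
    total = 0.0 + 0.0j
    for i in range(2):
        for j in range(2):
            dS = (Sp[j][i] - Sm[j][i]) / (2.0 * dE)
            total += S[j][i].conjugate() * dS
    return (total / (2j * math.pi)).real

def dos_analytic(E, gamma):
    """Only the eigenphase of r + t = (1 - ia)/(1 + ia), a = gamma/2k, varies with E."""
    a = gamma / (2.0 * math.sqrt(E))
    da_dE = -gamma / (4.0 * E ** 1.5)
    return -(1.0 / math.pi) * da_dE / (1.0 + a * a)
```

For this simple scatterer the S-matrix estimate is exact; in 2D the evanescent-mode corrections discussed in the text would add to it.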
--- author: - | \ Jefferson Lab, 12000 Jefferson Avenue, Newport News, Virginia 23606, USA\ E-mail: title: Transversity Parton Distribution --- Introduction ============ The distribution of partons in a polarized spin-1/2 hadron can be completely described by three collinear Parton Distribution Functions (PDFs): the unpolarised parton distribution $f_1$, the helicity distribution $g_1$, and the transversity distribution $h_1$. These standard collinear PDFs are defined through collinear factorization theorems and obey DGLAP evolution equations [@Altarelli:1977zs; @Dokshitzer:1977sg; @Lipatov:1974qm; @Barone:1997fh; @Vogelsang:1997ak; @Hayashigaki:1997dn]. The transversity distribution [@Ralston:1979ys] describes transversely polarized quarks in a transversely polarized nucleon. Formally such a distribution can be expressed by $$h_1^q(x) = \int \frac{d\xi^-}{2\pi} e^{-ix P^+ \xi^-}\langle P, S_{T}| {\bar{\psi}}_q(0^+,\xi^-,0_\perp) \frac{\gamma^+ \gamma^j \gamma_5}{2}\psi_q(0^+,0^-,0_\perp)|P,S_{T}\rangle$$ The corresponding charge is called the “tensor charge” $$\delta q = \int_0^1 d x (h_1^q(x) - h_1^{\bar q}(x))$$ Transversity obeys the so-called Soffer bound [@Soffer:1995ww] $$|h_1(x,Q^2)| \le \frac{1}{2}\left(f_1(x,Q^2) + g_1(x,Q^2) \right)\,$$ This bound was shown to be preserved at LO accuracy in Ref. [@Barone:1997fh] and at NLO accuracy in Ref. [@Vogelsang:1997ak]. Transversity is the least known of the three collinear distributions; the reason is that, due to its chiral-odd nature, it cannot be measured in inclusive Deep Inelastic Scattering (DIS). It must couple to another chiral-odd quantity (a chiral-odd fragmentation function or a chiral-odd distribution function, for example transversity itself). The best channel to measure transversity remains the polarized Drell-Yan process (preferably proton-antiproton), in which one could measure the product of transversity distributions directly, see Ref. [@Barone:2005pu]. QCD evolution of the collinear transversity distribution is well known, see Refs.
[@Barone:1997fh; @Vogelsang:1997ak; @Hayashigaki:1997dn]. It does not couple to gluons and thus exhibits non-singlet $Q^2$ evolution; a gluon transversity distribution does not exist either. As a consequence, transversity is suppressed at low $x$, which makes it a natural object to study in the high-$x$ region at Jefferson Lab 12 [@Dudek:2012vr]. Currently the knowledge of transversity comes from Semi Inclusive Deep Inelastic Scattering (SIDIS) single spin asymmetries measured at HERMES [@Airapetian:2004tw; @Airapetian:2010ds], COMPASS [@Ageev:2006da; @Martin:2013eja] and JLab 6 [@Qian:2011py], where transversity couples to the so-called Collins fragmentation function [@Collins:1992kk]. Information on the convolution of two chiral-odd fragmentation functions is obtained from $e^+e^- \to h_1 \, h_2 \, X$ processes [@Abe:2005zx; @Seidl:2008xc; @Seidl:2012er]. One usually measures final hadrons at low transverse momentum, and thus Transverse Momentum Dependent factorization applies. The transversity in this case depends also on the intrinsic transverse motion of quarks $\bf k_\perp$, and one speaks of Transverse Momentum Dependent (TMD) transversity. One can also study transversity coupled to the so-called di-hadron fragmentation function [@Collins:1993kq; @Jaffe:1997hf; @Radici:2001na]. The $u$ and $d$ quark transversity distributions, together with the Collins fragmentation functions, were extracted for the first time in Refs. [@Anselmino:2007fs; @Anselmino:2008jk], from a combined analysis of SIDIS and $e^+e^-$ data. The most recent extraction is presented in Ref. [@Anselmino:2013vqa]. The di-hadron method was implemented in the analysis of Ref. [@Bacchetta:2012ty], and the results on the extraction of transversity from Refs. [@Anselmino:2007fs; @Anselmino:2008jk; @Anselmino:2013vqa] and Ref. [@Bacchetta:2012ty] agree with each other quite well. QCD evolution of Transverse Momentum Dependent transversity was recently obtained in Ref. [@Bacchetta:2013pqa].
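The Soffer bound and the tensor charge defined above are easy to illustrate numerically. The sketch below uses toy valence-like parametrizations — the functional forms, the 0.6 helicity fraction and the 80% bound saturation are purely illustrative assumptions, not fitted distributions — and checks the bound while integrating the ansatz into a tensor charge (antiquarks neglected):

```python
def f1(x):
    """Toy unpolarised valence-like distribution (illustrative only)."""
    return x ** -0.5 * (1.0 - x) ** 3

def g1(x):
    """Toy helicity distribution (illustrative only)."""
    return 0.6 * f1(x)

def h1(x):
    """Transversity ansatz: 80% saturation of the Soffer bound."""
    return 0.8 * 0.5 * (f1(x) + g1(x))

def soffer_ok(n=1000):
    """Check |h1(x)| <= (f1(x) + g1(x))/2 on a grid of midpoints in (0, 1)."""
    xs = [(i + 0.5) / n for i in range(n)]
    return all(abs(h1(x)) <= 0.5 * (f1(x) + g1(x)) for x in xs)

def tensor_charge(n=200000):
    """delta q = int_0^1 dx h1(x), midpoint rule (antiquark term dropped)."""
    dx = 1.0 / n
    return sum(h1((i + 0.5) * dx) for i in range(n)) * dx
```

For this ansatz the integral is known analytically, $\delta q = 0.64\,B(1/2,4) \approx 0.585$, which the midpoint sum reproduces at the per-mille level.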
Phenomenology ============= The result of the extraction of transversity is presented in Fig. \[fig:newh1-collins-A12\].[^1] One can see that the $u$ quark transversity is positive and the $d$ quark transversity is negative. This result comes from a global analysis of SIDIS HERMES [@Airapetian:2004tw; @Airapetian:2010ds], COMPASS [@Ageev:2006da; @Martin:2013eja] and $e^+e^-$ BELLE [@Abe:2005zx; @Seidl:2008xc; @Seidl:2012er] data. Experimentally, the so-called Collins asymmetry in SIDIS with unpolarised beams ($U$) and a transversely polarized target ($T$) is measured; it is proportional to the convolution of the transversity and Collins fragmentation functions $$A_{UT}^{\sin(\phi_h+\phi_S)} \propto \sum_q h_1^q\otimes H_{1q}^\perp$$ where $\phi_h$ and $\phi_S$ are the azimuthal angles of the produced pion and of the polarization vector, the experimentally observed modulation is proportional to $\sin(\phi_h+\phi_S)$, and the sign $\otimes$ denotes the usual TMD convolution [@Collins:2011zzd]. One can see that knowledge of the Collins fragmentation function ($H_{1q}^\perp$) is needed in order to extract transversity; fortunately, in the $e^+e^-$ process one observes an asymmetry which is related to the convolution of two Collins functions $\sum_q H_{1q}^\perp \otimes H_{1\bar q}^\perp$. This allows us to perform a global analysis [@Anselmino:2007fs; @Anselmino:2008jk; @Anselmino:2013vqa] of SIDIS and $e^+e^-$ data. We also present the results on the tensor charge at $Q^2=$ 0.8 GeV$^2$ in Fig. \[fig:tensorcharge\].[^2] ![\[fig:tensorcharge\] The tensor charge for $u$ (left) and $d$ (right) quarks, computed using the transversity distributions obtained in Ref. [@Anselmino:2013vqa]. The gray areas correspond to the statistical uncertainty bands of the extraction. The results are compared with those given in Ref. [@Anselmino:2008jk] (number 2), with the model calculations of Refs.
[@Cloet:2007em; @Wakamatsu:2007nc; @Gockeler:2005cj; @He:1994gz; @Pasquini:2006iv; @Gamberg:2001qc; @Hecht:2001ry] (numbers 3-9), and with the extraction of Ref. [@Bacchetta:2012ty] (number 10).](tensor-charge.pdf "fig:"){width="90.00000%"} Future measurements at Jefferson Lab 12 are going to be very important for the extraction of the transversity and the tensor charge. We estimate that the corresponding improvement of the statistical error of the extraction will be approximately a factor of $5$ [@Dudek:2012vr]. This means that instead of the current uncertainty of almost 50% we will be able to extract the tensor charge with a 10% uncertainty and compare it more meaningfully to model predictions. The tensor charge is sensitive to dynamical effects of new heavy Beyond Standard Model degrees of freedom [@Bhattacharya:2011qm]; in order to constrain the possible parameters of those models one needs precise knowledge of the tensor charge. The future Electron Ion Collider [@Boer:2011fh] will also allow us to study carefully the $Q^2$ evolution of transversity and explore the low-$x$ region. Conclusions =========== The interested reader is referred to several reviews that describe transversity in greater detail, see Refs. [@Barone:2001sp; @Barone:2010zz; @D'Alesio:2007jt]. I have not discussed all possible ways to access transversity, for instance $\Lambda$ electroproduction in SIDIS. In this case one needs to know the chiral-odd fragmentation function of $\Lambda$ production, and it can be accessed via $e^+e^- \rightarrow \bar\Lambda\Lambda X$. One could also study transversity in proton-proton scattering by utilizing $ p p^\uparrow \rightarrow \pi \,{\rm jet}\, X$ [@D'Alesio:2010am]. In the future we will have data from the BABAR Collaboration, which has performed an independent new analysis of $e^+ e^- \to h_1 \, h_2 \, X$ data [@Garzia:2012za].
Jefferson Lab 12 will provide precision data in the high-$x$ region [@Dudek:2012vr] and thus complement results obtained in SIDIS at HERMES [@Airapetian:2004tw; @Airapetian:2010ds], COMPASS [@Ageev:2006da; @Martin:2013eja] and JLab 6 [@Qian:2011py]. Generally, proton-proton scattering can be described by so-called twist-3 factorization, in which one studies multi-parton correlations and the corresponding Efremov-Teryaev-Qiu-Sterman functions [@Efremov:1981sh; @Efremov:1984ip; @Qiu:1991pp; @Qiu:1998ia; @Koike:2009ge; @Kang:2010zzb]. These functions are related to TMD functions; more globally, the twist-3 and TMD formalisms are closely related to each other and have been shown to be equivalent in the overlap region where both apply [@Ji:2006ub; @Koike:2007dg; @Bacchetta:2008xw]. Once a comprehensive global analysis of the data from SIDIS (TMD) and proton-proton scattering (twist-3) is done (see preliminary results in Refs. [@Kang:2012xf; @Gamberg:2013kla]), we will obtain a complete description of asymmetries and of the corresponding parton distributions, including the transversity parton distribution. [**Acknowledgements**]{} The author would like to thank his colleagues Mauro Anselmino, Elena Boglione, Umberto D’Alesio, Stefano Melis, Francesco Murgia, Leonard Gamberg, and Zhong-Bo Kang. The main results presented in this article are obtained in collaboration with them. [99]{} G. Altarelli and G. Parisi, [*Nucl. Phys.*]{} [**B126**]{} (1977) 298. Y. L. Dokshitzer, [*Sov. Phys. JETP*]{} [**46**]{} (1977) 641–653. L. Lipatov, [*Sov.J.Nucl.Phys.*]{} [**20**]{} (1975) 94–102. V. Barone, [*Phys.Lett.*]{} [**B409**]{} (1997) 499–502 \[[[hep-ph/9703343]{}](http://arXiv.org/abs/hep-ph/9703343)\]. W. Vogelsang, [*Phys.Rev.*]{} [**D57**]{} (1998) 1886–1894 \[[[hep-ph/9706511]{}](http://arXiv.org/abs/hep-ph/9706511)\]. A. Hayashigaki, Y. Kanazawa and Y. Koike, [*Phys. Rev.*]{} [**D56**]{} (1997) 7350–7360 \[[[hep-ph/9707208]{}](http://arXiv.org/abs/hep-ph/9707208)\].
J. P. Ralston and D. E. Soper, [*Nucl.Phys.*]{} [**B152**]{} (1979) 109. J. Soffer, [*Phys. Rev. Lett.*]{} [**74**]{} (1995) 1292–1294 \[[[ http://arXiv.org/abs/hep-ph/9409254]{}](http://arXiv.org/abs/http://arXiv.org/abs/hep-ph/9409254)\]. Collaboration, V. Barone [*et. al.*]{}, [[hep-ex/0505054]{}](http://arXiv.org/abs/hep-ex/0505054). J. Dudek, R. Ent, R. Essig, K. Kumar, C. Meyer [*et. al.*]{}, [*Eur.Phys.J.*]{} [**A48**]{} (2012) 187 \[[[ 1208.1244]{}](http://arXiv.org/abs/1208.1244)\]. Collaboration, A. Airapetian [*et. al.*]{}, [*Phys. Rev. Lett.*]{} [**94**]{} (2005) 012002 \[[[hep-ex/0408013]{}](http://arXiv.org/abs/hep-ex/0408013)\]. Collaboration, A. Airapetian [*et. al.*]{}, [*Phys. Lett.*]{} [**B693**]{} (2010) 11–16 \[[[ 1006.4221]{}](http://arXiv.org/abs/1006.4221)\]. Collaboration, E. S. Ageev [*et. al.*]{}, [*Nucl. Phys.*]{} [**B765**]{} (2007) 31–70 \[[[hep-ex/0610068]{}](http://arXiv.org/abs/hep-ex/0610068)\]. Collaboration, A. Martin, [[1303.2076]{}](http://arXiv.org/abs/1303.2076). Collaboration, X. Qian [*et. al.*]{}, [ *Phys.Rev.Lett.*]{} [**107**]{} (2011) 072003 \[[[1106.0363]{}](http://arXiv.org/abs/1106.0363)\]. J. C. Collins, [*Nucl. Phys.*]{} [**B396**]{} (1993) 161–182. Collaboration, R. Seidl [*et. al.*]{}, [*Phys. Rev. Lett.*]{} [**96**]{} (2006) 232002. Collaboration, R. Seidl [*et. al.*]{}, [*Phys. Rev.*]{} [ **D78**]{} (2008) 032011 \[[[0805.2975]{}](http://arXiv.org/abs/0805.2975)\]. Collaboration, R. Seidl [*et. al.*]{}, [*Phys. Rev.*]{} [**D86**]{} (2012) 032011(E) \[[[0805.2975]{}](http://arXiv.org/abs/0805.2975)\]. J. C. Collins, S. F. Heppelmann and G. A. Ladinsky, [*Nucl. Phys.*]{} [**B420**]{} (1994) 565–582 \[[[hep-ph/9305309]{}](http://arXiv.org/abs/hep-ph/9305309)\]. R. Jaffe, X.-m. Jin and J. Tang, [*Phys. Rev. Lett.*]{} [**80**]{} (1998) 1166–1169 \[[[ hep-ph/9709322]{}](http://arXiv.org/abs/hep-ph/9709322)\]. M. Radici, R. Jakob and A. Bianconi, [*Phys. 
Rev.*]{} [**D65**]{} (2002) 074031 \[[[hep-ph/0110252]{}](http://arXiv.org/abs/hep-ph/0110252)\]. M. Anselmino, M. Boglione, U. D’Alesio, A. Kotzinian, F. Murgia and A. Prokudin, [*Phys. Rev.*]{} [**D75**]{} (2007) 054032 \[[[hep-ph/0701006]{}](http://arXiv.org/abs/hep-ph/0701006)\]. M. Anselmino, M. Boglione, U. D’Alesio, A. Kotzinian, S. Melis, F. Murgia and A. Prokudin, [*Nucl. Phys. Proc. Suppl.*]{} [**191**]{} (2009) 98–107 \[[[0812.4366]{}](http://arXiv.org/abs/0812.4366)\]. M. Anselmino, M. Boglione, U. D’Alesio, S. Melis, F. Murgia [*et. al.*]{}, [[1303.3822]{}](http://arXiv.org/abs/1303.3822). A. Bacchetta, A. Courtoy and M. Radici, [[1212.3568]{}](http://arXiv.org/abs/1212.3568). A. Bacchetta and A. Prokudin, [[1303.2129]{}](http://arXiv.org/abs/1303.2129). J. Collins, [*Foundations of Perturbative QCD*]{}. Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology. Cambridge University Press, 2011. I. C. Cloet, W. Bentz and A. W. Thomas, [*Phys. Lett.*]{} [**B659**]{} (2008) 214–220 \[[[0708.3246]{}](http://arXiv.org/abs/0708.3246)\]. M. Wakamatsu, [*Phys. Lett.*]{} [**B653**]{} (2007) 398–403 \[[[ 0705.2917]{}](http://arXiv.org/abs/0705.2917)\]. Collaboration, M. Gockeler [ *et. al.*]{}, [*Phys. Lett.*]{} [**B627**]{} (2005) 113–123 \[[[hep-lat/0507001]{}](http://arXiv.org/abs/hep-lat/0507001)\]. H.-x. He and X.-D. Ji, [*Phys. Rev.*]{} [**D52**]{} (1995) 2960–2963 \[[[ hep-ph/9412235]{}](http://arXiv.org/abs/hep-ph/9412235)\]. B. Pasquini, M. Pincetti and S. Boffi, [*Phys. Rev.*]{} [**D76**]{} (2007) 034020 \[[[hep-ph/0612094]{}](http://arXiv.org/abs/hep-ph/0612094)\]. L. P. Gamberg and G. R. Goldstein, [*Phys. Rev. Lett.*]{} [**87**]{} (2001) 242001 \[[[hep-ph/0107176]{}](http://arXiv.org/abs/hep-ph/0107176)\]. M. Hecht, C. D. Roberts and S. Schmidt, [*Phys. Rev.*]{} [**C64**]{} (2001) 025204 \[[[nucl-th/0101058]{}](http://arXiv.org/abs/nucl-th/0101058)\]. T. Bhattacharya, V. Cirigliano, S. D. Cohen, A. Filipuzzi, M. Gonzalez-Alonso [*et. 
al.*]{}, [*Phys.Rev.*]{} [**D85**]{} (2012) 054512 \[[[1110.6448]{}](http://arXiv.org/abs/1110.6448)\]. D. Boer, M. Diehl, R. Milner, R. Venugopalan, W. Vogelsang [*et. al.*]{}, [[1108.1713]{}](http://arXiv.org/abs/1108.1713). V. Barone, A. Drago and P. G. Ratcliffe, [*Phys. Rept.*]{} [**359**]{} (2002) 1–168 \[[[ http://arXiv.org/abs/hep-ph/0104283]{}](http://arXiv.org/abs/http://arXiv.org/abs/hep-ph/0104283)\]. V. Barone, F. Bradamante and A. Martin, [*Prog. Part. Nucl. Phys.*]{} [**65**]{} (2010) 267–333. U. D’Alesio and F. Murgia, [*Prog.Part.Nucl.Phys.*]{} [**61**]{} (2008) 394–454 \[[[0712.4328]{}](http://arXiv.org/abs/0712.4328)\]. Invited review paper to be published in Prog.Part.Nucl.Phys. U. D’Alesio, F. Murgia and C. Pisano, [*Phys.Rev.*]{} [ **D83**]{} (2011) 034021 \[[[1011.2692]{}](http://arXiv.org/abs/1011.2692)\]. Collaboration, I. Garzia, [[ 1211.5293]{}](http://arXiv.org/abs/1211.5293). A. V. Efremov and O. V. Teryaev, [*Sov. J. Nucl. Phys.*]{} [**36**]{} (1982) 140. A. V. Efremov and O. V. Teryaev, [*Phys. Lett.*]{} [**B150**]{} (1985) 383. J. Qiu and G. Sterman, [*Phys. Rev. Lett.*]{} [**67**]{} (1991) 2264–2267. J.-W. Qiu and G. Sterman, [*Phys. Rev.*]{} [**D59**]{} (1999) 014004 \[[[hep-ph/9806356]{}](http://arXiv.org/abs/hep-ph/9806356)\]. Y. Koike and T. Tomita, [*Phys.Lett.*]{} [ **B675**]{} (2009) 181–189 \[[[ 0903.1923]{}](http://arXiv.org/abs/0903.1923)\]. Z.-B. Kang, F. Yuan and J. Zhou, [*Phys. Lett.*]{} [**B691**]{} (2010) 243–248 \[[[ 1002.0399]{}](http://arXiv.org/abs/1002.0399)\]. X. Ji, J.-W. Qiu, W. Vogelsang and F. Yuan, [*Phys. Rev. Lett.*]{} [ **97**]{} (2006) 082002 \[[[ hep-ph/0602239]{}](http://arXiv.org/abs/hep-ph/0602239)\]. Y. Koike, W. Vogelsang and F. Yuan, [*Phys.Lett.*]{} [**B659**]{} (2008) 878–884 \[[[0711.0636]{}](http://arXiv.org/abs/0711.0636)\]. A. Bacchetta, D. Boer, M. Diehl and P. J. Mulders, [*JHEP*]{} [**08**]{} (2008) 023 \[[[0803.0227]{}](http://arXiv.org/abs/0803.0227)\]. Z.-B. Kang and A. 
Prokudin, [*Phys.Rev.*]{} [**D85**]{} (2012) 074008 \[[[1201.5427]{}](http://arXiv.org/abs/1201.5427)\]. L. Gamberg, Z.-B. Kang and A. Prokudin, [[1302.3218]{}](http://arXiv.org/abs/1302.3218). [^1]: The plot is from Ref. [@Anselmino:2013vqa] [^2]: The plot is from Ref. [@Anselmino:2013vqa]
--- abstract: 'Higher dimensional supersymmetry has been analyzed in terms of quaternion variables, and the theory of the quaternion harmonic oscillator has been developed. Supersymmetrization of the quaternion Dirac equation has been carried out for the massless, massive and interacting cases, including the generalized electromagnetic fields of dyons. Accordingly, higher dimensional supersymmetric gauge theories of dyons are analyzed.' author: - 'O.P.S. Negi [^1]' title: HIGHER DIMENSIONAL SUPERSYMMETRY --- Department of Physics\ Kumaun University\ S. S. J. Campus\ Almora- 263601, U.A., India\ E-mail:- ops\_negi@yahoo.co.in Introduction ============ Quaternions were the very first example of hypercomplex numbers and have had a significant impact on mathematics and physics [@key-1]. Because of their beautiful and unique properties, quaternions have attracted many to study the laws of nature over the field of these numbers. Quaternions are already used in the context of special relativity [@key-2], electrodynamics [@key-3; @key-4], Maxwell’s equations [@key-5], quantum mechanics [@key-6; @key-7], the quaternion oscillator [@key-8], gauge theories [@key-9; @key-10], supersymmetry [@key-11] and many other branches of physics [@key-12] and mathematics [@key-13]. On the other hand, supersymmetry (SUSY) is described as the symmetry between bosons and fermions [@key-14; @key-15; @key-16]. The gauge hierarchy problem not only suggests that SUSY exists but also puts an upper limit on the masses of the superpartners [@key-17; @key-18]. Exact SUSY implies exact fermion-boson mass degeneracy, which has not been observed so far. Hence it is believed that supersymmetry is an approximate symmetry and it must be broken [@key-19; @key-20]. We have considered the following motivations to study higher dimensional supersymmetric quantum mechanics [@key-21] over the field of quaternions. 1\.
Supersymmetric field theory can provide us with realistic models of particle physics which do not suffer from the gauge hierarchy problem, and the use of quaternions will accordingly make the calculations simpler and more compact. 2\. Quaternion supersymmetric quantum mechanics can give us a new window on the behavior of supersymmetric partners, the mechanism of supersymmetry breaking, etc. 3\. Quaternions are capable of dealing with higher dimensional structures and thus include the theory of monopoles and dyons. Keeping these facts in mind, and to observe the role of quaternions in supersymmetry, the theory of the quaternion harmonic oscillator has been analyzed for systems of bosons and fermions in terms of commutation and anticommutation relations respectively. Eigenvalues of the particle Hamiltonian and number operators are calculated by imposing restrictions on the components of the quaternion variables. Accordingly, the supercharges are calculated, and it is shown that the Hamiltonian operator commutes with the supercharges representing the conversion of a fermionic state to a bosonic state and vice versa. The quaternion reformulation of N = 1, 2 and 4 supersymmetry has been investigated in terms of supercharges and superpartner potentials, and quaternion mechanics has also been analyzed in terms of complex and quaternion quantum mechanics for N = 2 and N = 4 SUSY respectively. It has been shown that the elegant framework of quaternion quantum mechanics includes a non-Abelian gauge structure, in contrast to the complex quantum mechanics of supersymmetry, corresponding to the N = 2 complex and N = 4 real dimensions of supersymmetry.
Definition ========== A quaternion $\phi$ is expressed as $$\begin{aligned} \phi & = & e_{\text{0}}\phi_{0}+e_{1}\phi_{1}+e_{2}\phi_{2}+e_{3}\phi_{3}\label{eq:1}\end{aligned}$$ where $\phi_{0},\phi_{1},\phi_{2},\phi_{3}$ are the real quartets of the quaternion and $e_{0},e_{1},e_{2},e_{3}$ are called quaternion units, which satisfy the following relations, $$\begin{aligned} e_{0}^{2} & = & e_{0}=1\nonumber \\ e_{0}e_{i} & = & e_{i}e_{0}=e_{i}(i=1,2,3)\nonumber \\ e_{i}e_{j} & = & -\delta_{ij}+\varepsilon_{ijk}e_{k}(i,j,k=1,2,3)\label{eq:2}\end{aligned}$$ The quaternion conjugate $\bar{\phi}$ is then defined as $$\begin{aligned} \bar{\phi} & = & e_{\text{0}}\phi_{0}-e_{1}\phi_{1}-e_{2}\phi_{2}-e_{3}\phi_{3}\label{eq:3}\end{aligned}$$ Here $\phi_{0}$ is the real part of the quaternion, defined as $$\begin{aligned} \phi_{0} & =Re\,\,\phi=\frac{1}{2}(\bar{\phi}+\phi)\label{eq:4}\end{aligned}$$ If $Re\,\,\phi=\phi_{0}=0$, then $\phi=-\bar{\phi}$, and such a purely imaginary $\phi$ is called a pure quaternion and is written as $$\begin{aligned} Im\,\,\phi & = & e_{1}\phi_{1}+e_{2}\phi_{2}+e_{3}\phi_{3}\label{eq:5}\end{aligned}$$ The norm of a quaternion is expressed as $$\begin{aligned} N(\phi) & = & \bar{\phi}\phi=\phi\bar{\phi}=\phi_{0}^{2}+\phi_{1}^{2}+\phi_{2}^{2}+\phi_{3}^{2}\geq0\label{eq:6}\end{aligned}$$ and the inverse of a quaternion is given by $$\begin{aligned} \phi^{-1} & = & \frac{\bar{\phi}}{N(\phi)}\label{eq:7}\end{aligned}$$ The quaternion conjugation satisfies the following property $$\begin{aligned} \overline{(\phi_{1}\phi_{2})} & = & \bar{\phi_{2}}\bar{\phi}_{1}\label{eq:8}\end{aligned}$$ The norm of the quaternion (\[eq:6\]) is positive definite and enjoys the composition law $$\begin{aligned} N(\phi_{1}\phi_{2}) & = & N(\phi_{1})N(\phi_{2})\label{eq:9}\end{aligned}$$ Quaternion (\[eq:1\]) is also written as $\phi=(\phi_{0},\overrightarrow{\phi})$, where $\overrightarrow{\phi}=e_{1}\phi_{1}+e_{2}\phi_{2}+e_{3}\phi_{3}$ is its vector part and $\phi_{0}$ is
its scalar part. The sum and product of two quaternions are $$\begin{aligned} (\alpha_{0},\overrightarrow{\alpha})+(\beta_{0},\overrightarrow{\beta}) & = & (\alpha_{0}+\beta_{0},\overrightarrow{\alpha}+\overrightarrow{\beta})\nonumber \\ (\alpha_{0},\overrightarrow{\alpha})\,(\beta_{0},\overrightarrow{\beta}) & = & (\alpha_{0}\beta_{0}-\overrightarrow{\alpha}.\overrightarrow{\beta},\,\alpha_{0}\overrightarrow{\beta}+\beta_{0}\overrightarrow{\alpha}+\overrightarrow{\alpha}\times\overrightarrow{\beta})\label{eq:10}\end{aligned}$$ Quaternion multiplication is non-Abelian in nature, and the quaternions form a division ring. Field Associated with Dyons =========================== Let us define the generalized charge of a dyon as [@key-22; @key-23], $$\begin{aligned} q & = & e\,\,-\,\, i\,\, g\,\,\,(i=\sqrt{-1})\label{eq:11}\end{aligned}$$ where $e$ and $g$ are respectively the electric and magnetic charges. The generalized four-potential $\left\{ V_{\mu}\right\} =\left(\varphi,\overrightarrow{V}\right)$ associated with dyons is defined as, $$\begin{aligned} \left\{ V_{\mu}\right\} & = & \left\{ A_{\mu}\right\} -\, i\,\,\left\{ B_{\mu}\right\} =(\varphi,\overrightarrow{V)}\label{eq:12}\end{aligned}$$ where $\left\{ A_{\mu}\right\} =\left(\varphi_{e},\overrightarrow{A}\right)$ and $\left\{ B_{\mu}\right\} =\left(\varphi_{g},\overrightarrow{B}\right)$ are respectively the electric and magnetic four-potentials. We use natural units $c=\hslash=1$ throughout.
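The multiplication rule (\[eq:10\]) and the properties (\[eq:8\]) and (\[eq:9\]) are easy to verify numerically. The following sketch implements quaternions as (scalar, 3-vector) pairs; all names and numerical values are illustrative:

```python
def q_mul(p, q):
    """Quaternion product per eq. (10): (p0, pv)(q0, qv)."""
    p0, pv = p
    q0, qv = q
    s = p0 * q0 - sum(a * b for a, b in zip(pv, qv))      # scalar part
    cross = [pv[1] * qv[2] - pv[2] * qv[1],               # vector cross product
             pv[2] * qv[0] - pv[0] * qv[2],
             pv[0] * qv[1] - pv[1] * qv[0]]
    v = [p0 * qv[i] + q0 * pv[i] + cross[i] for i in range(3)]
    return (s, v)

def q_conj(p):
    """Quaternion conjugate, eq. (3): negate the vector part."""
    p0, pv = p
    return (p0, [-c for c in pv])

def q_norm(p):
    """Norm N(phi) of eq. (6): sum of squared components."""
    p0, pv = p
    return p0 * p0 + sum(c * c for c in pv)
```

The cross-product term makes the non-commutativity explicit: for the pure units, $e_1 e_2 = e_3$ while $e_2 e_1 = -e_3$, as required by (\[eq:2\]).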
Electric and magnetic fields of dyons are defined in terms of the components of the electric and magnetic potentials as, $$\begin{aligned} \overrightarrow{E} & = & -\frac{\partial\overrightarrow{A}}{\partial t}-\overrightarrow{\nabla}\varphi_{e}-\overrightarrow{\nabla}\times\overrightarrow{B}\nonumber \\ \overrightarrow{H} & = & -\frac{\partial\overrightarrow{B}}{\partial t}-\overrightarrow{\nabla}\varphi_{g}+\overrightarrow{\nabla}\times\overrightarrow{A}\label{eq:13}\end{aligned}$$ These electric and magnetic fields of dyons are invariant under the generalized duality transformation i.e. $$\begin{aligned} \left\{ A_{\mu}\right\} & \Rightarrow & \left\{ A_{\mu}\right\} cos\theta+\left\{ B_{\mu}\right\} sin\theta\nonumber \\ \left\{ B_{\mu}\right\} & \Rightarrow & -\left\{ A_{\mu}\right\} sin\theta+\left\{ B_{\mu}\right\} cos\theta\label{eq:14}\end{aligned}$$ The expressions for the generalized electric and magnetic fields given by equation (\[eq:13\]) are symmetrical, and both the electric and magnetic fields of dyons may be written in terms of longitudinal and transverse components. The generalized vector electromagnetic field associated with dyons is defined as $$\begin{aligned} \overrightarrow{\psi} & = & \overrightarrow{E}-i\,\,\overrightarrow{H}\label{eq:15}\end{aligned}$$ As such, we get the following differential form of the generalized Maxwell’s equations for dyons i.e.
$$\begin{aligned} \overrightarrow{\nabla}\cdot\overrightarrow{\psi} & = & J_{0}\nonumber \\ \overrightarrow{\nabla}\times\overrightarrow{\psi} & = & -i\,\,\overrightarrow{J}-i\,\,\frac{\partial\overrightarrow{\psi}}{\partial t}\label{eq:16}\end{aligned}$$ where $J_{0}$ and $\overrightarrow{J}$ are the generalized charge and current source densities of dyons, given by; $$\begin{aligned} \left\{ J_{\mu}\right\} & = & \left\{ j_{\mu}\right\} -\,\, i\,\,\left\{ k_{\mu}\right\} =\left(J_{0},\overrightarrow{J}\right)\label{eq:17}\end{aligned}$$ Here the electric and magnetic four-current densities are $\left\{ j_{\mu}\right\} =(\rho_{e},\,\overrightarrow{j}\,)$ and $\left\{ k_{\mu}\right\} =(\rho_{g},\,\overrightarrow{k}\,)$. Substituting relation (\[eq:13\]) into equation (\[eq:15\]) and using equation (\[eq:12\]), we obtain the following relation between the generalized vector field and the potential of dyons i.e. $$\begin{aligned} \overrightarrow{\psi} & = & -\frac{\partial\overrightarrow{V}}{\partial t}-\overrightarrow{\nabla}\varphi-i\,\,\overrightarrow{\nabla}\times\overrightarrow{V}\label{eq:18}\end{aligned}$$ Thus we write the following tensor forms of the generalized Maxwell’s-Dirac equations of dyons i.e.
$$\begin{aligned} F_{\mu\nu,\,\nu} & = & \partial^{\nu}F_{\mu\nu}=j_{\mu}\nonumber \\ \tilde{F}_{\mu\nu,\,\nu} & = & \partial^{\nu}\tilde{F}_{\mu\nu}=k_{\mu}\label{eq:19}\end{aligned}$$ where $$\begin{aligned} F_{\mu\nu} & = & E_{\mu\nu}-\tilde{H}_{\mu\nu}\nonumber \\ \tilde{F}_{\mu\nu} & = & H_{\mu\nu}+\tilde{E}_{\mu\nu}\label{eq:20}\end{aligned}$$ and $$\begin{aligned} E_{\mu\nu} & = & A_{\mu,\nu}-A_{\nu,\mu}\nonumber \\ H_{\mu\nu} & = & B_{\mu,\nu}-B_{\nu,\mu}\nonumber \\ \tilde{E}_{\mu\nu} & = & \frac{1}{2}\varepsilon_{\mu\nu\sigma\lambda}E^{\sigma\lambda}\nonumber \\ \tilde{H}_{\mu\nu} & = & \frac{1}{2}\varepsilon_{\mu\nu\sigma\lambda}H^{\sigma\lambda}\label{eq:21}\end{aligned}$$ The tilde denotes the dual part, while $\varepsilon_{\mu\nu\sigma\lambda}$ is the four-index Levi-Civita symbol. The generalized fields of dyons given by equation (\[eq:13\]) may directly be obtained from the field tensors $F_{\mu\nu}$ and $\tilde{F}_{\mu\nu}$ as, $$\begin{aligned} F_{0i} & = & E^{\, i\,}\nonumber \\ F_{ij} & = & \varepsilon_{ijk}H^{\, k}\nonumber \\ \tilde{F}_{0i} & = & -H^{\, i}\nonumber \\ \tilde{F}_{ij} & = & -\varepsilon_{ijk}E^{^{k}}\label{eq:22}\end{aligned}$$ A new vector parameter $\overrightarrow{S}$ (say) may directly be obtained [@key-10; @key-12] from equation (\[eq:16\]), i.e. $$\begin{aligned} \overrightarrow{S} & = & \square\overrightarrow{\psi}=-\frac{\partial\overrightarrow{J}}{\partial t}-\overrightarrow{\nabla}J_{0}-i\,\,\overrightarrow{\nabla}\times\overrightarrow{J}\label{eq:23}\end{aligned}$$ where $\square$ represents the D’Alembertian operator i.e. $$\begin{aligned} \square & = & \frac{\partial^{2}}{\partial t\,^{2}}-\frac{\partial^{2}}{\partial x\,^{2}}-\frac{\partial^{2}}{\partial y^{\,2}}-\frac{\partial^{2}}{\partial z\,^{2}}\label{eq:24}\end{aligned}$$ Defining the generalized field tensor as $$\begin{aligned} G_{\mu\nu} & = & F_{\mu\nu}-i\,\tilde{F}_{\mu\nu}\label{eq:25}\end{aligned}$$ we may directly obtain the following generalized field equation of dyons i.e.
$$\begin{aligned} G_{\mu\nu,\,\nu} & = & \partial^{\nu}G_{\mu\nu}=J_{\mu}\label{eq:26}\end{aligned}$$ where $G_{\mu\nu}=V_{\mu,\nu}-V_{\nu,\mu}$ is called the generalized electromagnetic field tensor of dyons. Equation (\[eq:26\]) may also be written in the form of a second-order Klein-Gordon-like equation for dyonic fields; $$\begin{aligned} \square V_{\mu} & = & J_{\mu}\label{eq:27}\end{aligned}$$ where we have imposed the Lorentz gauge condition on both potentials and consequently on the generalized potential. Equations (\[eq:19\]) and (\[eq:26\]) are also invariant under the duality transformations; $$\begin{aligned} (F,\tilde{F}) & = & (F\, cos\,\theta+\tilde{F}sin\,\theta;-F\, sin\,\theta+\,\tilde{F}\, cos\,\theta)\label{eq:28}\\ (j_{\mu},k_{\mu}) & = & (j_{\mu}cos\,\theta+k_{\mu}sin\,\theta;-j_{\mu}sin\,\theta+k_{\mu}cos\,\theta)\label{eq:29}\end{aligned}$$ where $$\begin{aligned} \frac{g}{e} & = & \frac{B_{\mu}}{A_{\mu}}=\frac{k_{\mu}}{j_{\mu}}=-tan\,\theta=Constant\label{eq:30}\end{aligned}$$ and consequently the generalized charge of a dyon may be written as $$\begin{aligned} q & = & \left|q\right|exp\,(-i\,\theta)\label{eq:31}\end{aligned}$$ A suitable Lagrangian density, which yields the field equation (\[eq:26\]) under the variation of the field parameters (i.e. the potential only) without changing the trajectory of the particle, may be written as follows; $$\begin{aligned} \mathit{\mathsf{\mathbf{L}}} & = & -\frac{1}{4}G_{\mu\nu}^{*}G_{\mu\nu}+V_{\mu}^{*}J_{\mu}\label{eq:32}\end{aligned}$$ where [\*]{} denotes the complex conjugate. The Lagrangian density given by equation (\[eq:32\]) directly leads to the following expression for the Lorentz force equation of motion for dyons i.e. $$\begin{aligned} f_{\mu} & = & Re\,(q^{*}G_{\mu\nu})u^{\nu}\label{eq:33}\end{aligned}$$ where Re denotes the real part and $\left\{ u^{\nu}\right\} $ is the four-velocity of the particle.
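The duality rotations (\[eq:28\])-(\[eq:29\]) act on the generalized field $\overrightarrow{\psi}=\overrightarrow{E}-i\overrightarrow{H}$ of equation (\[eq:15\]) as an overall phase, $\overrightarrow{\psi}\rightarrow e^{i\theta}\overrightarrow{\psi}$, which is why the field equations keep their form. A quick numerical check of this phase property (the field values below are arbitrary illustrative numbers):

```python
import cmath
import math

def duality_rotate(E, H, theta):
    """Rotate (E, H) per eq. (14)/(28): E -> E cos + H sin, H -> -E sin + H cos."""
    c, s = math.cos(theta), math.sin(theta)
    En = [c * e + s * h for e, h in zip(E, H)]
    Hn = [-s * e + c * h for e, h in zip(E, H)]
    return En, Hn

def psi(E, H):
    """Generalized field vector psi = E - iH of eq. (15), component-wise."""
    return [e - 1j * h for e, h in zip(E, H)]

def check_phase(E, H, theta):
    """Verify psi' = exp(i theta) * psi under the duality rotation."""
    En, Hn = duality_rotate(E, H, theta)
    phase = cmath.exp(1j * theta)
    return all(abs(pn - phase * p) < 1e-12
               for pn, p in zip(psi(En, Hn), psi(E, H)))
```

Since $|e^{i\theta}|=1$, the energy-like combination $|\overrightarrow{E}|^{2}+|\overrightarrow{H}|^{2}=|\overrightarrow{\psi}|^{2}$ is left unchanged by the rotation.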
Quaternion SUSY Harmonic Oscillator =================================== Let us define the bosonic quaternion oscillator as the extension of the complex oscillator having the decomposition [@key-8] $$\begin{aligned} \hat{a} & = & \frac{1}{\sqrt{6}}\left[\widehat{a}_{0}+e_{1}\widehat{a}_{1}+e_{2}\widehat{a}_{2}+e_{3}\widehat{a}_{3}\right]\label{eq:34}\end{aligned}$$ where $\widehat{a}_{0},\widehat{a}_{1},\widehat{a}_{2},\widehat{a}_{3}$ are real operators. Let us define the conjugate of equation (\[eq:34\]) as $$\begin{aligned} \hat{a}^{\dagger} & = & \frac{1}{\sqrt{6}}\left[\widehat{a}_{0}-e_{1}\widehat{a}_{1}-e_{2}\widehat{a}_{2}-e_{3}\widehat{a}_{3}\right]\label{eq:35}\end{aligned}$$ As for the ordinary oscillator, we start with the following fundamental boson commutation relations, i.e. $$\begin{aligned} \left[\hat{a}\,\,,\,\,\hat{a}^{\dagger}\right] & = & 1\nonumber \\ \left[\hat{a}\,\,,\,\,\hat{a}\right] & = & 0\nonumber \\ \left[\hat{a}^{\dagger},\,\hat{a}^{\dagger}\right] & = & 0\label{eq:36}\end{aligned}$$ Then, to maintain the above relations for the bosonic oscillator, we get the following commutation relations between the components of the bosonic oscillator in terms of imaginary quaternion units, i.e.
$$\begin{aligned} \left[\hat{a}_{0}\,\,,\,\,\hat{a_{1}}\right] & = & e_{1}\nonumber \\ \left[\hat{a}_{0}\,\,,\,\,\hat{a_{2}}\right] & = & e_{2}\nonumber \\ \left[\hat{a}_{0}\,\,,\,\,\hat{a}_{3}\right] & = & e_{3}\label{eq:37}\end{aligned}$$ or $$\begin{aligned} \left[\hat{a}_{0}\,\,,\,\,\hat{a_{A}}\right] & = & e_{A}\quad(A=1,2,3)\nonumber \\ \left[\hat{a}_{\mu}\,\,,\,\,\hat{a}_{\nu}\right] & = & \,\,0\quad\forall\,\,\,\mu\neq\nu\label{eq:38}\end{aligned}$$ Let us describe the Hamiltonian for the bosonic harmonic oscillator as $$\begin{aligned} \hat{H_{B}} & = & \frac{p^{2}}{2m}+\frac{1}{2}m\omega^{2}q^{2}\label{eq:39}\end{aligned}$$ which can be written in terms of these operators as $$\begin{aligned} \hat{H_{B}} & = & \frac{1}{2}\hbar\omega(\hat{a}\,\hat{a}^{\dagger}+\hat{a}^{\dagger}\hat{a})=\hbar\omega(\hat{a}^{\dagger}\hat{a}+\frac{1}{2})\label{eq:40}\end{aligned}$$ where $$\begin{aligned} \hat{a} & = & \frac{1}{\sqrt{2m\hbar\omega}}\left(m\omega\hat{q}-e_{1}\hat{p}\right)\nonumber \\ \hat{a}^{\dagger} & = & \frac{1}{\sqrt{2m\hbar\omega}}\left(m\omega\hat{q}+e_{1}\hat{p}\right)\label{eq:41}\end{aligned}$$ It is then necessary to recover the ordinary commutation relation $\left[\hat{q},\hat{p}\right]=i\hbar$ between $\hat{q}$ and $\hat{p}$, so we get $$\begin{aligned} \hat{p} & = & \sqrt{\frac{m\hbar\omega}{3}}\left(-\widehat{a_{1}}+e_{3}\widehat{a_{2}}-e_{2}\widehat{a_{3}}\right)\nonumber \\ \widehat{q} & = & \widehat{a_{0}}\sqrt{\frac{\hbar}{3m\omega}}\label{eq:42}\end{aligned}$$ Now we can define the number operator as $$\begin{aligned} \widehat{N_{B}} & = & \widehat{a}\,^{\dagger}\widehat{a}\label{eq:43}\end{aligned}$$ which commutes with the Hamiltonian $\hat{H_{B}}$, i.e.
$$\begin{aligned} \left[\widehat{N_{B}}\,\,,\,\,\widehat{H_{B}}\right] & = & 0\label{eq:44}\end{aligned}$$ The bosonic number operator satisfies the following relations: $$\begin{aligned} \left[\widehat{N_{B}}\,\,,\,\,\widehat{a}\,^{\dagger}\right] & = & \widehat{a}^{\dagger}\nonumber \\ \left[\widehat{N_{B}}\,\,,\,\,\widehat{a}\,\right] & = & -\widehat{a}\label{eq:45}\end{aligned}$$ which shows that $\hat{a}$ and $\hat{a}^{\dagger}$ may be regarded as annihilation and creation operators. We also have $$\begin{aligned} \hat{a}{\displaystyle \left|0\right\rangle } & = & 0\nonumber \\ \hat{a}^{\dagger}\left|n\right\rangle & = & \sqrt{n+1}\left|n+1\right\rangle \nonumber \\ \hat{a}{\displaystyle \left|n\right\rangle } & = & \sqrt{n}\left|n-1\right\rangle \nonumber \\ \widehat{N_{B}}\left|n\right\rangle & = & n\left|n\right\rangle \label{eq:46}\end{aligned}$$ and the Hilbert space is then spanned by the state vectors $$\begin{aligned} \left|n\right\rangle & = & \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^{n}\left|0\right\rangle \label{eq:47}\end{aligned}$$ where $\left|0\right\rangle $ is the ground or vacuum state, normalized so that $$\begin{aligned} \left\langle 0\mid0\right\rangle & = & 1\label{eq:48}\end{aligned}$$ This gives rise to the familiar results $$\begin{aligned} \widehat{H_{B}}\,\left|n\right\rangle & = & E_{n}\left|n\right\rangle =(n+\frac{1}{2})\hbar\omega\left|n\right\rangle ;\,\,\,\, E_{n}=(n+\frac{1}{2})\hbar\omega\label{eq:49}\end{aligned}$$ Similarly, we can write the following anticommutation relations for the fermionic harmonic oscillator: $$\begin{aligned} \left\{ \widehat{b\,},\widehat{b}^{\dagger}\right\} & = & 1\nonumber \\ \left\{ \widehat{b\,},\widehat{b}\right\} & = & 0\nonumber \\ \left\{ \widehat{b\,}^{\dagger},\widehat{b}^{\dagger}\right\} & = & 0\label{eq:50}\end{aligned}$$ where $\widehat{b}$ is a fermionic quaternion operator and may be decomposed as $$\begin{aligned} \hat{b} & = &
\frac{1}{\sqrt{6}}\left[\widehat{b}_{0}+e_{1}\widehat{b}_{1}+e_{2}\widehat{b}_{2}+e_{3}\widehat{b}_{3}\right]\nonumber \\ \hat{b}^{\dagger} & = & \frac{1}{\sqrt{6}}\left[\widehat{b}_{0}-e_{1}\widehat{b}_{1}-e_{2}\widehat{b}_{2}-e_{3}\widehat{b}_{3}\right]\label{eq:51}\end{aligned}$$ For the fermion operators given by equation (\[eq:51\]) to satisfy the relations (\[eq:50\]), it is necessary to impose the following restrictions on the various components of the operators, i.e. $$\begin{aligned} b_{0}^{2}+b_{1}^{2}+b_{2}^{2}+b_{3}^{2} & = & 0\nonumber \\ b_{0}^{2}-b_{1}^{2}-b_{2}^{2}-b_{3}^{2} & = & 3\nonumber \\ \left[\widehat{b}_{j}\,,\,\widehat{b}_{k}\right] & = & \varepsilon_{jkl}\widehat{b}_{l}\nonumber \\ \left[\widehat{b}\,,\,\widehat{b}_{j}\right] & = & 0\quad\forall\,\, j,k,l=1,2,3\label{eq:52}\end{aligned}$$ Throughout the text we define $\left[\,,\,\right]$ as the commutator, $\left\{ \,,\,\right\} $ as the anticommutator, and $\varepsilon_{jkl}$ as the three-index Levi-Civita symbol. In analogy with the bosonic harmonic oscillator, let us define the fermionic Hamiltonian as $$\begin{aligned} \hat{H_{F}} & = & \frac{1}{2}\hbar\omega(\hat{b}^{\dagger}\hat{b}-\hat{b}\,\hat{b}^{\dagger})=\hbar\omega(\hat{b}^{\dagger}\hat{b}-\frac{1}{2})=\hbar\omega(\widehat{N_{F}}\,\,-\frac{1}{2})\label{eq:53}\end{aligned}$$ where $\widehat{N_{F}}\,=\hat{b}^{\dagger}\hat{b}$, and $\widehat{N_{F}}$ can take the eigenvalues $n_{f}=0,1$. The Hilbert space with basis vector $\left|n\right\rangle $ is now constructed in the following manner, so that $$\begin{aligned} \widehat{N_{F}}\, & \left|n\right\rangle & =n_{f}\left|n\right\rangle \,\,\,\,\,\,(n_{f}=0,1)\nonumber \\ \widehat{b} & \left|1\right\rangle & =\left|0\right\rangle \nonumber \\ \widehat{b}^{\dagger} & \left|0\right\rangle & =\left|1\right\rangle \nonumber \\ \widehat{b} & \left|0\right\rangle & =0=\widehat{b}^{\dagger}\left|1\right\rangle \label{eq:54}\end{aligned}$$ The energy eigenspectrum of the fermionic oscillator has only two levels, for the eigen
states $\left|0\right\rangle $ and $\left|1\right\rangle $, namely $E_{0}=-\frac{1}{2}\hbar\omega$ and $E_{1}=+\frac{1}{2}\hbar\omega$, showing that the ground-state energy of this oscillator is negative. Let us now construct a simple supersymmetric quantum mechanical system that includes an oscillator with both bosonic and fermionic degrees of freedom. We call it the supersymmetric harmonic oscillator, viewed in the framework of quaternion variables. The supersymmetry is thus obtained by simultaneously annihilating one bosonic quantum and creating one fermionic quantum, or vice versa. We illustrate the annihilating (supersymmetric) charges (generators) as $$\begin{aligned} \widehat{Q}= & \sqrt{\hbar\omega} & (\widehat{a}^{\dagger}\widehat{b})\nonumber \\ \widehat{Q}^{\dagger}= & \sqrt{\hbar\omega} & (\widehat{b}^{\dagger}\widehat{a})\label{eq:55}\end{aligned}$$ So the SUSY Hamiltonian becomes $$\begin{aligned} \widehat{H} & =\left\{ \widehat{Q}^{\dagger}\,,\,\widehat{Q}\right\} & =\hbar\omega\left\{ \widehat{a}^{\dagger}\,\widehat{a}\,+\widehat{b}^{\dagger}\widehat{b}\right\} =\hbar\omega(\widehat{N}_{B}+\widehat{N}_{F})\label{eq:56}\end{aligned}$$ and $$\begin{aligned} \left[\widehat{H}\,\,,\,\,\widehat{Q}\right] & = & 0=\left[\widehat{H}\,\,,\,\,\widehat{Q}^{\dagger}\right]\label{eq:57}\end{aligned}$$ where $\mathbf{\widehat{N}_{B}}$ and $\widehat{N}_{F}$ are respectively the bosonic and fermionic number operators.
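The supercharge algebra of equations (\[eq:55\])-(\[eq:57\]) can be checked numerically by representing $\hat{a}$ on a truncated Fock space and $\hat{b}$ as a $2\times2$ matrix. This is a sketch in ordinary complex matrix quantum mechanics (not the quaternionic decomposition itself), and the truncation level $d$ is an arbitrary choice; the anticommutator $\{\widehat{Q},\widehat{Q}^{\dagger}\}$ then reproduces $\hbar\omega(\widehat{N}_{B}+\widehat{N}_{F})$ away from the truncation edge:

```python
import numpy as np

hbar_omega = 1.0
d = 8  # boson Fock-space truncation (arbitrary)

# Truncated bosonic ladder operators: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
adag = a.T

# Fermionic operators: b|1> = |0>, b^2 = 0.
b = np.array([[0.0, 1.0], [0.0, 0.0]])
bdag = b.T

I_b, I_f = np.eye(d), np.eye(2)

# Supercharges of Eq. (55): Q = sqrt(hw) a^+ b, Q^+ = sqrt(hw) b^+ a.
Q = np.sqrt(hbar_omega) * np.kron(adag, b)
Qdag = np.sqrt(hbar_omega) * np.kron(a, bdag)

# Q is nilpotent, and {Q, Q^+} gives H = hw (N_B + N_F) of Eq. (56),
# up to an edge effect at the truncated top boson level.
assert np.allclose(Q @ Q, 0)
H = Qdag @ Q + Q @ Qdag
N = hbar_omega * (np.kron(adag @ a, I_f) + np.kron(I_b, bdag @ b))
k = 2 * (d - 1)  # exclude the top (truncated) boson level
assert np.allclose(H[:k, :k], N[:k, :k])

# Eq. (59): Q |n, 1> = sqrt(n+1) |n+1, 0>  (basis index = 2*n + n_f).
n = 3
state = np.zeros(2 * d); state[2 * n + 1] = 1.0
expected = np.zeros(2 * d); expected[2 * (n + 1)] = np.sqrt(n + 1)
assert np.allclose(Q @ state, expected)
```

The edge effect arises only because $\hat{a}\hat{a}^{\dagger}\neq\hat{a}^{\dagger}\hat{a}+1$ at the highest retained boson level; on the full Fock space the identification is exact.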
The eigenstates are described as $\left|n_{B}\,,\, n_{F}\right\rangle $ and the ground state as $\left|0\,,\,0\right\rangle $, so that $$\begin{aligned} \widehat{H} & \left|n_{B}\,,\, n_{F}\right\rangle & =E_{n_{B}\,,\, n_{F}}\left|n_{B}\,,\, n_{F}\right\rangle ;\,\, n_{B}=0,1,2,3,\ldots;\; n_{F}=0,1.\label{eq:58}\end{aligned}$$ and we also have $$\begin{aligned} \widehat{Q}\left|n\,,\,1\right\rangle & = & \sqrt{n+1}\left|n+1\,,\,0\right\rangle \nonumber \\ \widehat{Q}^{\dagger}\left|n+1\,,\,0\right\rangle & = & \sqrt{n+1}\left|n\,,\,1\right\rangle \label{eq:59}\end{aligned}$$ These supercharges represent the conversion of a fermionic state into a bosonic state and of a bosonic state into a fermionic state, i.e. $$\begin{aligned} \widehat{Q}^{\dagger}\left|BOSON\right\rangle & = & \left|FERMION\right\rangle \nonumber \\ \widehat{Q}\left|FERMION\right\rangle & = & \left|BOSON\right\rangle \label{eq:60}\end{aligned}$$ Equations (\[eq:56\]) and (\[eq:57\]) are analogous to the following equations of supersymmetry, $$\begin{aligned} \left\{ \widehat{Q_{\alpha}}^{\dagger},\widehat{Q}_{\beta}\right\} & = & P^{\mu}(\sigma_{\mu})_{\alpha\beta}\label{eq:61}\\ \left[\widehat{H}\,\,,\widehat{Q}_{\alpha}\right] & = & 0\label{eq:62}\end{aligned}$$ for $\mu=0$ and $\alpha=\beta=1$, namely one-dimensional SUSY. The supercharges always commute with the usual Hamiltonian. Thus the anticommuting charges in the quaternion formalism combine to form the generator of time translations, namely the Hamiltonian $H$. The ground state of this system is the state $\left|0\right\rangle _{osc}.\left|0\right\rangle _{spin}$ or $\left|0\right\rangle _{boson}.\left|0\right\rangle _{fermion}=\left|0\,\,,\,\,0\right\rangle $, where both the bosonic and fermionic degrees of freedom are in the lowest energy state. As such, we have analyzed the theory of the supersymmetric harmonic oscillator for one-dimensional supersymmetric quantum mechanics, imposing the restrictions accordingly.
Otherwise one has to extend the dimensions and lose the hermiticity of the Hamiltonian of the supersymmetric system. Let us now illustrate the supersymmetry of the Dirac equation in terms of quaternion variables. Supersymmetric Quaternion Dirac Equation ========================================= The quaternion formulation of the free-particle Dirac equation [@key-6; @key-24; @key-25] is described as $$\begin{aligned} i\,\,\gamma_{\mu\,\,}\partial_{\mu}\psi(x,t) & = & m\,\psi(x,t)\,\,(\mu=0,1,2,3)\label{eq:63}\end{aligned}$$ Let us discuss the supersymmetrization in terms of the following cases. Case I: For a massless free particle, i.e. $m=0$ and external potential $\Phi=0$, equation (\[eq:63\]) becomes $$\begin{aligned} i\,\,\gamma_{\mu\,\,}\partial_{\mu}\psi(x,t) & = & 0\label{eq:64}\end{aligned}$$ Let us consider the following solutions of this equation, $$\begin{aligned} \psi(x,t) & = & \psi(x)\, e^{i\,(\overrightarrow{p}\cdot\overrightarrow{x}-Et)}\label{eq:65}\end{aligned}$$ where we have taken natural units $c=\hbar=1$, and as such we get the following form of equation (\[eq:64\]), i.e. $$\begin{aligned} (\gamma_{0}E\,-\gamma_{1}p_{1}-\gamma_{2}p_{2}-\gamma_{3}p_{3}) & \psi(x)= & 0\label{eq:66}\end{aligned}$$ We define the following representation of the gamma matrices in terms of quaternion units, i.e.
$$\begin{aligned} \gamma_{0} & = & \left[\begin{array}{cc} 1 & 0\\ 0 & -1\end{array}\right]\nonumber \\ \gamma_{l} & = & e_{l}\,\left[\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right]\,\,\,\,(l=1,2,3)\label{eq:67}\end{aligned}$$ Thus equation (\[eq:66\]) takes the form $$\begin{aligned} \left\{ \left[\begin{array}{cc} 1 & 0\\ 0 & -1\end{array}\right]E-e_{l}\left[\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right]p_{l}\right\} \left\{ \begin{array}{c} \psi_{a}\\ \psi_{b}\end{array}\right\} & = & 0\label{eq:68}\end{aligned}$$ where $\psi_{a}=\psi_{0}+i\,\psi_{1}$ and $\psi_{b}=\psi_{2}-i\,\psi_{3}$. We thus obtain the coupled equations $$\begin{aligned} \widehat{A}\psi_{a}(x) & = & E\,\psi_{b}(x)\nonumber \\ \widehat{A}^{\dagger}\psi_{b}(x) & = & E\,\psi_{a}(x)\label{eq:69}\end{aligned}$$ where $\widehat{A}\,=-e_{l}\,\widehat{p}_{l}$ and $\widehat{A}^{\dagger}=e_{l}\,\widehat{p}_{l}$. We can now decouple equation (\[eq:69\]) as $$\begin{aligned} \widehat{A}\,\widehat{A}^{\dagger}\psi_{b}(x) & = & E^{2}\psi_{b}(x)\nonumber \\ \widehat{A}^{\dagger}\widehat{A}\,\psi_{a}(x) & = & E^{2}\psi_{a}(x)\nonumber \\ P_{l}^{2}\psi_{b}(x) & = & E^{2}\psi_{b}(x)\nonumber \\ P_{l}^{2}\psi_{a}(x) & = & E^{2}\psi_{a}(x)\label{eq:70}\end{aligned}$$ where $\psi_{a}(x)$ and $\psi_{b}(x)$ are eigenfunctions of the partner Hamiltonians $H_{-}=\widehat{A}^{\dagger}\widehat{A}$ and $H_{+}=\widehat{A}\,\widehat{A}^{\dagger}$. The supersymmetric Hamiltonian is thus described as $$\begin{aligned} \widehat{H} & = & \left[\begin{array}{cc} H_{+} & 0\\ 0 & H_{-}\end{array}\right]=\left[\begin{array}{cc} P_{l}^{2} & 0\\ 0 & P_{l}^{2}\end{array}\right]\label{eq:71}\end{aligned}$$ Restricting the propagation along the x-axis to discuss the quantum mechanics in two-dimensional space-time, we have $$\begin{aligned} \widehat{p}_{l} & = & -i\,\frac{d}{dx}\nonumber \\ e_{l}\,\widehat{p}_{l} & = & \frac{d}{dx}\nonumber \\ \widehat{H} & = & \left[\begin{array}{cc} -\frac{d^{2}}{dx^{2}} & 0\\ 0 &
-\frac{d^{2}}{dx^{2}}\end{array}\right]\label{eq:72}\end{aligned}$$ or $$\begin{aligned} \widehat{H} & = & \left[\begin{array}{cc} \widehat{Q}\widehat{Q}^{\dagger} & 0\\ 0 & \widehat{Q}^{\dagger}\widehat{Q}\end{array}\right]\label{eq:73}\end{aligned}$$ where the supercharges are described in terms of quaternion units, i.e. $$\begin{aligned} \widehat{Q} & = & \left[\begin{array}{cc} 0 & -e_{2}^{\dagger}\frac{d}{dx}\\ 0 & 0\end{array}\right]\nonumber \\ \widehat{Q}^{\dagger} & = & \left[\begin{array}{cc} 0 & 0\\ -e_{2}^{\dagger}\frac{d}{dx} & 0\end{array}\right]\label{eq:74}\end{aligned}$$ As such we may obtain the supersymmetry algebra $$\begin{aligned} \left[\widehat{Q\,},\widehat{H}\right] & = & \left[\widehat{Q\,}^{\dagger},\widehat{H}\right]=0\nonumber \\ \left\{ \widehat{Q\,},\widehat{Q\,}\right\} & = & \left\{ \widehat{Q\,}^{\dagger},\widehat{Q\,}^{\dagger}\right\} =0\nonumber \\ \left\{ \widehat{Q\,},\widehat{Q\,}^{\dagger}\right\} & = & \widehat{H}\label{eq:75}\end{aligned}$$ Here $\widehat{Q}$ converts the lower-component spinor $\left\{ \begin{array}{c} 0\\ \psi_{b}\end{array}\right\} $ into an upper one $\left\{ \begin{array}{c} \psi_{a}\\ 0\end{array}\right\} $, while $\widehat{Q}^{\dagger}$ converts the upper-component spinor into a lower one. If $\psi$ is an eigenstate of $H_{+}$ ($H_{-}$), then $\widehat{Q}^{\dagger}\psi$ ($\widehat{Q}\psi$) is an eigenstate of $H_{-}$ ($H_{+}$) with the same energy. Case II: $m\neq0$ but potential $\Phi=0$. The corresponding Dirac equation (\[eq:63\]) with the solution (\[eq:65\]) is described as $$\begin{aligned} (\gamma_{0}E\,-\gamma_{1}p_{1}-\gamma_{2}p_{2}-\gamma_{3}p_{3}-\, m) & \psi(x)= & 0\label{eq:76}\end{aligned}$$ which may be written as follows in terms of quaternion units, i.e.
$$\begin{aligned} \left\{ \left[\begin{array}{cc} 1 & 0\\ 0 & -1\end{array}\right]E-e_{l}\left[\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right]p_{l}-m\left[\begin{array}{cc} 1 & 0\\ 0 & 1\end{array}\right]\right\} \left\{ \begin{array}{c} \psi_{a}\\ \psi_{b}\end{array}\right\} & = & 0\label{eq:77}\end{aligned}$$ Accordingly, we have the following set of equations: $$\begin{aligned} \widehat{A}^{\dagger}\psi_{b} & = & (E-m)\psi_{a}\nonumber \\ \widehat{A}\,\psi_{a} & = & (E+m)\psi_{b}\nonumber \\ \widehat{A}^{\dagger}\widehat{A}\,\psi_{a} & = & (E^{2}-m^{2}\,)\psi_{a}\nonumber \\ \widehat{A}\,\widehat{A}^{\dagger}\psi_{b} & = & (E^{2}-m^{2}\,)\psi_{b}\nonumber \\ \widehat{P_{l}}^{2}\psi_{a,b} & = & (E^{2}-m^{2}\,)\psi_{a,b}\label{eq:78}\end{aligned}$$ which are the Schrödinger equations for a free particle. The SUSY Hamiltonian is now described as $$\begin{aligned} \widehat{H} & = & \left[\begin{array}{cc} \widehat{Q}\widehat{Q}^{\dagger} & 0\\ 0 & \widehat{Q}^{\dagger}\widehat{Q}\end{array}\right]=\left[\begin{array}{cc} \widehat{P_{l}}^{2}+m^{2} & 0\\ 0 & \widehat{P_{l}}^{2}+m^{2}\end{array}\right]\label{eq:79}\end{aligned}$$ In a similar way, restricting to a two-dimensional structure of space and time, we get the following relations, i.e.
$$\begin{aligned} \widehat{H} & = & \left[\begin{array}{cc} \widehat{Q}\widehat{Q}^{\dagger} & 0\\ 0 & \widehat{Q}^{\dagger}\widehat{Q}\end{array}\right]=\left[\begin{array}{cc} -\frac{d^{2}}{dx^{2}}+m^{2} & 0\\ 0 & -\frac{d^{2}}{dx^{2}}+m^{2}\end{array}\right]\label{eq:80}\end{aligned}$$ where $$\begin{aligned} \widehat{Q} & = & \left[\begin{array}{cc} 0 & -e_{2}^{\dagger}\frac{d}{dx}+m\\ 0 & 0\end{array}\right]\nonumber \\ \widehat{Q}^{\dagger} & = & \left[\begin{array}{cc} 0 & 0\\ -e_{2}^{\dagger}\frac{d}{dx}+m & 0\end{array}\right]\label{eq:81}\end{aligned}$$ Hence we restore the structure of SUSY quantum mechanics and obtain the same commutation and anticommutation relations as in equation (\[eq:75\]) for the free-particle Dirac equation with mass as well. Case III: We now discuss and verify the SUSY quantum mechanics relations for $m\neq0$ with a scalar potential $\Phi=V$. We extend the present theory in the same manner and express the Dirac Hamiltonian in the following form: $$\begin{aligned} \widehat{H_{D}} & = & \left[\begin{array}{cc} 0 & e_{l}p_{l}\\ -e_{l}p_{l} & 0\end{array}\right]+\left[\begin{array}{cc} 0 & -i\,(m+V)\\ i\,(m+V) & 0\end{array}\right]\label{eq:82}\end{aligned}$$ or $$\begin{aligned} \widehat{H_{D}} & = & \left[\begin{array}{cc} 0 & e_{l}p_{l}-i\,(m+V)\\ -e_{l}p_{l}+i\,(m+V) & 0\end{array}\right]\label{eq:83}\end{aligned}$$ or $$\begin{aligned} \widehat{H} & = & \left[\begin{array}{cc} \widehat{Q}\widehat{Q}^{\dagger} & 0\\ 0 & \widehat{Q}^{\dagger}\widehat{Q}\end{array}\right]=\left[\begin{array}{cc} -\frac{d^{2}}{dx^{2}}+(m+V)^{2} & 0\\ 0 & -\frac{d^{2}}{dx^{2}}+(m+V)^{2}\end{array}\right]\label{eq:84}\end{aligned}$$ where $$\begin{aligned} \widehat{Q} & = & \left[\begin{array}{cc} 0 & -e_{2}^{\dagger}\frac{d}{dx}+i\,(m+V)\\ 0 & 0\end{array}\right]\nonumber \\ \widehat{Q}^{\dagger} & = & \left[\begin{array}{cc} 0 & 0\\ -e_{2}^{\dagger}\frac{d}{dx}+i\,(m+V) & 0\end{array}\right]\label{eq:85}\end{aligned}$$ These operators satisfy the
supersymmetric quantum mechanical relations given by equation (\[eq:75\]), and as such the supersymmetry is verified even for the interacting case with a scalar potential. Case IV: Dirac equation in an electromagnetic field. Before writing the quaternion Dirac equation in the generalized electromagnetic fields of dyons, let us start with the quaternion gauge transformations. A $Q$-field (\[eq:1\]) is described in terms of the following SO(4) local gauge transformations [@key-6; @key-9]: $$\begin{aligned} \phi & \rightarrow\phi' & =U\,\phi\,\bar{V\,}\,\,\,\,\,\, U,V\in Q\,\,,\,\,\, U\,\bar{U\,}=V\,\overline{V}=1\label{eq:86}\end{aligned}$$ The covariant derivative for this is then written in terms of two gauge potentials as $$\begin{aligned} D_{\mu}\phi & = & \partial_{\mu}\phi+A_{\mu}\phi-\phi B_{\mu}\label{eq:87}\end{aligned}$$ where the potentials transform as $$\begin{aligned} A_{\mu}^{'} & = & U\, A_{\mu}\bar{U}\,+U\,\partial_{\mu}\bar{U}\,\end{aligned}$$ $$\begin{aligned} B_{\mu}^{'} & = & V\, B_{\mu}\bar{V}\,+V\,\partial_{\mu}\bar{V}\,\label{eq:88}\end{aligned}$$ and $$\begin{aligned} \bar{\phi'}\phi' & =\overline{(U\phi\bar{V})}\,(U\phi\bar{V}) & =\bar{\phi}\phi=\phi_{0}^{2}+\left|\vec{\phi}\right|^{^{2}}\label{eq:89}\end{aligned}$$ Here we identify the non-Abelian gauge fields $A_{\mu}$ and $B_{\mu}$ as the gauge potentials respectively for the electric and magnetic charges of dyons described earlier in section 3. The corresponding field momentum of equation (\[eq:87\]) may also be written as $$\begin{aligned} P_{\mu}\phi & = & p_{\mu}\phi+A_{\mu}\phi-\phi B_{\mu}\label{eq:90}\end{aligned}$$ where the gauge group $SO(4)=SU(2)_{e}\times SU(2)_{g}$ is constructed in terms of quaternion units of the electric and magnetic gauges. Accordingly, the covariant derivative thus describes two different gauge field strengths, i.e.
$$\begin{aligned} \left[D_{\mu},D_{\upsilon}\right]\phi & = & f_{\mu\nu}\phi-\phi h_{\mu\nu}\nonumber \\ f_{\mu\nu} & = & A_{\mu,\nu}-A_{\nu,\mu}+\left[A_{\mu},A_{\nu}\right]\nonumber \\ h_{\mu\nu} & = & B_{\mu,\nu}-B_{\nu,\mu}+\left[B_{\mu},B_{\nu}\right]\label{eq:91}\end{aligned}$$ where $f_{\mu\nu}$ and $h_{\mu\nu}$ are the gauge field strengths associated with the electric and magnetic charges of dyons respectively. We may now write the Dirac equation as $$\begin{aligned} i\,\,\gamma_{\mu\,\,}D_{\mu}\psi(x,t) & = & m\,\psi(x,t)\,\end{aligned}$$ and accordingly, with some restrictions and using the properties of quaternions, we may write the Dirac equation as $$\begin{aligned} \left[\begin{array}{cc} m & e_{\mu}(p_{\mu}+A_{\mu}-B_{\mu})\\ -e_{\mu}(p_{\mu}+A_{\mu}-B_{\mu}) & -m\end{array}\right]\left[\begin{array}{c} \psi_{a}\\ \psi_{b}\end{array}\right] & = & E\left[\begin{array}{c} \psi_{a}\\ \psi_{b}\end{array}\right]\label{eq:92}\end{aligned}$$ where $\psi_{a}=\psi_{0}+i\psi_{1}$ and $\psi_{b}=\psi_{2}-i\psi_{3}$, and as such we obtain the following set of equations: $$\begin{aligned} E\psi_{a} & = & m\psi_{a}+e_{\mu}(p_{\mu}\psi_{b}+A_{\mu}\psi_{b}-\psi_{b}B_{\mu})\nonumber \\ E\psi_{b} & = & -m\psi_{b}-e_{\mu}(p_{\mu}\psi_{a}+A_{\mu}\psi_{a}-\psi_{a}B_{\mu})\nonumber \\ e_{\mu}(p_{\mu}\psi_{b}+A_{\mu}\psi_{b}-\psi_{b}B_{\mu}) & = & (E-m)\psi_{a}\nonumber \\ -e_{\mu}(p_{\mu}\psi_{a}+A_{\mu}\psi_{a}-\psi_{a}B_{\mu}) & = & (E+m)\psi_{b}\end{aligned}$$ $$\begin{aligned} \widehat{A}^{\dagger}\psi_{b} & = & (E-m)\psi_{a}=e_{\mu}(p_{\mu}\psi_{b}+A_{\mu}\psi_{b}-\psi_{b}B_{\mu})\nonumber \\ \widehat{A}\,\psi_{a} & = & (E+m)\psi_{b}=-e_{\mu}(p_{\mu}\psi_{a}+A_{\mu}\psi_{a}-\psi_{a}B_{\mu})\nonumber \\ \widehat{A}^{\dagger}\widehat{A}\,\psi_{a} & = & (E^{2}-m^{2}\,)\psi_{a}\nonumber \\ \widehat{A}\,\widehat{A}^{\dagger}\psi_{b} & = & (E^{2}-m^{2}\,)\psi_{b}\label{eq:93}\end{aligned}$$ where we have restricted ourselves to the case of two-dimensional supersymmetry by imposing the conditions
$A_{1}^{\dagger}=-A_{1},\, A_{2}^{\dagger}=-A_{2},\, A_{3}^{\dagger}=-A_{3},\, B_{1}^{\dagger}=-B_{1},\, B_{2}^{\dagger}=-B_{2},\, B_{3}^{\dagger}=-B_{3}$ to restore the supersymmetry. As such, it is possible to supersymmetrize the Dirac equation for the generalized electromagnetic fields of dyons, and we obtain the commutation and anticommutation relations given by equation (\[eq:75\]), verifying the supersymmetric quantum mechanics in this case as well. Higher dimensional Supersymmetry ================================ The quaternion differential operator is defined as $$\begin{aligned} \partial & = & -i\,\partial_{t}+e_{1}\partial_{1}+e_{2}\partial_{2}+e_{3}\partial_{3}\nonumber \\ \overline{\partial} & = & -i\,\partial_{t}-e_{1}\partial_{1}-e_{2}\partial_{2}-e_{3}\partial_{3}\label{eq:94}\end{aligned}$$ which gives $$\begin{aligned} \partial\bar{\partial} & = & \partial_{t}^{2}+\partial_{1}^{2}+\partial_{2}^{2}+\partial_{3}^{2}\label{eq:95}\end{aligned}$$ and can be decomposed into a two-dimensional form as $$\begin{aligned} \partial\partial^{\dagger}=-\nabla^{2} & = & (\frac{\partial}{\partial x}+e_{1}\frac{\partial}{\partial y})(-\frac{\partial}{\partial x}+e_{1}\frac{\partial}{\partial y})\label{eq:96}\end{aligned}$$ where ($\dagger$) conjugates only the single quaternion unit involved, which plays the role of the complex unit (as in the $C(1,i)$ case), and thus is equivalent to $$\begin{aligned} -\nabla^{2} & = & (\frac{\partial}{\partial x}+i\frac{\partial}{\partial y})(-\frac{\partial}{\partial x}+i\frac{\partial}{\partial y})\label{eq:97}\end{aligned}$$ Now, defining $q=(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y})$ and $q^{\dagger}=(-\frac{\partial}{\partial x}+i\frac{\partial}{\partial y})$, we have the negative of the Laplacian, $-\nabla^{2}=q\, q^{\dagger}=q^{\dagger}q$, which thus describes two-dimensional free-particle supersymmetric quantum mechanics.
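The factorization $-\nabla^{2}=q\,q^{\dagger}=q^{\dagger}q$ of equation (\[eq:97\]) can be checked on a periodic grid, with the complex unit standing in for $e_{1}$ and antisymmetric central-difference matrices as the partial derivatives; the grid size and spacing below are arbitrary choices:

```python
import numpy as np

# Periodic central-difference derivative on n grid points (real, antisymmetric).
n, h = 12, 0.5
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * h)
    D[i, (i - 1) % n] = -1.0 / (2 * h)

I = np.eye(n)
Dx, Dy = np.kron(D, I), np.kron(I, D)  # commuting partial derivatives on the 2D grid

# q and q^+ as in Eq. (97), with 1j playing the role of e_1.
q = Dx + 1j * Dy
qdag = -Dx + 1j * Dy   # equals the conjugate transpose of q, since D^T = -D
lap = Dx @ Dx + Dy @ Dy

# Both orderings factorize the (negative) Laplacian: the SUSY partner Hamiltonians agree.
assert np.allclose(q @ qdag, -lap)
assert np.allclose(qdag @ q, -lap)
```

The two orderings coincide here only because the free partial derivatives commute; once the super potential $\overrightarrow{W}$ of equation (\[eq:98\]) is switched on, $qq^{\dagger}$ and $q^{\dagger}q$ become genuinely distinct partner Hamiltonians.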
Following Das et al. [@key-21], we may now construct a two-dimensional supersymmetric theory in the following manner: $$\begin{aligned} q & = & \overrightarrow{a}\cdot\overrightarrow{(\nabla}+\overrightarrow{W}\,)\label{eq:98}\end{aligned}$$ where $\overrightarrow{W}$ is described as the super potential, $\overrightarrow{a}=a_{x}+ia_{y}$, and $\overrightarrow{\nabla}$ represents the two-dimensional gradient. We may now write the super partner Hamiltonians described in the previous sections as [@key-21] $$\begin{aligned} H_{1} & =q^{\dagger}q & =\sum_{j=1}^{2}(-\nabla_{j}+W_{j})(\nabla_{j}+W_{j})-i\,\epsilon^{jk}\left\{ \nabla_{j},\nabla_{k}\right\} \nonumber \\ H_{2} & =qq^{\dagger} & =\sum_{j=1}^{2}(\nabla_{j}+W_{j})(-\nabla_{j}+W_{j})-i\,\epsilon^{jk}\left\{ \nabla_{j},\nabla_{k}\right\} \label{eq:99}\end{aligned}$$ As described earlier, here also the curly brackets represent anticommutators, and $\epsilon^{jk}$ is antisymmetric with $\epsilon^{12}=1$ and $\epsilon^{21}=-1$. It is to be noted that the vector super potential naturally generates a gauge field interaction structure resulting from the supersymmetry algebra. Thus it is possible to regard quaternion supersymmetry as N = 4 real supersymmetry because, just as complex quantities generate two-dimensional real representations, quaternions generate four-dimensional real representations. Thus we need to define $q$ in such a manner that $-\nabla^{2}=q\, q^{\dagger}=q^{\dagger}q$, and the 4-dimensional supersymmetry algebra can be built accordingly in terms of three non-commuting but associative quantities, such as the three quaternion units $e_{j}$. Let us assume that $q$ in free space can be written as a linear superposition in terms of pure quaternion units carrying a non-Abelian gauge structure, obtained from $q$ with $q=\frac{1}{2}(q-\overline{q})$, i.e.
$$\begin{aligned} q & = & {\scriptstyle {\displaystyle \sum_{j=1}^{3}e_{j}\nabla_{j}}}\end{aligned}$$ and we may write equation (\[eq:99\]) as $$\begin{aligned} H_{1} & = & q^{\dagger}q=\sum_{j=1}^{3}e_{j}e_{j}^{\dagger}(-\nabla_{j}^{2})-\sum_{j<k}^{3}(e_{j}e_{k}^{\dagger}+e_{k}e_{j}^{\dagger})\nabla_{j}\nabla_{k}\nonumber \\ H_{2} & = & qq^{\dagger}=\sum_{j=1}^{3}e_{j}^{\dagger}e_{j}(-\nabla_{j}^{2})-\sum_{j<k}^{3}(e_{j}^{\dagger}e_{k}+e_{k}^{\dagger}e_{j})\nabla_{j}\nabla_{k}\label{eq:100}\end{aligned}$$ where we may write $q=\overrightarrow{a}\cdot\overrightarrow{\nabla}$ with ${\scriptstyle {\displaystyle \overrightarrow{a}=\sum_{j=1}^{3}e_{j}a_{j}}}$. The vector super potential depends on the position as $$\begin{aligned} q=\overrightarrow{a}\cdot\overrightarrow{(\nabla} & + & \overrightarrow{W)}=\sum_{j=1}^{3}e_{j}(\nabla_{j}+W_{j})\end{aligned}$$ and accordingly we may obtain the super partner Hamiltonians $qq^{\dagger}$ and $q^{\dagger}q$ in terms of an interacting super potential [@key-21], with the assumption that the gauge interaction structure naturally arises from the requirement of supersymmetry in terms of quaternions, and the gauge theory of dyons may be dealt with in this manner. The generalization of this theory to octonions is not possible because of the non-associative nature of octonions. Secondly, octonions cannot be written directly in terms of an eight-dimensional matrix representation of real numbers, the way quaternions are written in terms of a four-dimensional representation of real numbers. The difference between quaternions and octonions is that quaternions satisfy all the properties of matrices while octonions do not, and the alternativity property of split octonions is still not sufficient to resolve these inconsistencies associated with octonions.
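The statement that quaternions generate four-dimensional real representations can be made concrete: left and right multiplication by the units $e_{i}$ give two mutually commuting sets of real antisymmetric $4\times4$ matrices which, with a suitable sign convention (an assumption in this sketch), reproduce the $\alpha$, $\beta$ algebra of equation (\[eq:101\]):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a = a0 + a1 e1 + a2 e2 + a3 e3."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

E = np.eye(4)  # basis quaternions 1, e1, e2, e3 as 4-vectors

# Left- and right-multiplication by e_i as real 4x4 matrices (columns = images of basis).
L = [np.column_stack([qmul(E[i + 1], E[j]) for j in range(4)]) for i in range(3)]
R = [np.column_stack([qmul(E[j], E[i + 1]) for j in range(4)]) for i in range(3)]

# One sign convention reproducing the algebra of Eq. (101).
alpha = R
beta = [-M for M in L]
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    assert np.allclose(alpha[i].T, -alpha[i])  # real and antisymmetric
    for j in range(3):
        assert np.allclose(alpha[i] @ alpha[j] + alpha[j] @ alpha[i], -2*(i == j)*np.eye(4))
        assert np.allclose(beta[i] @ beta[j] + beta[j] @ beta[i], -2*(i == j)*np.eye(4))
        assert np.allclose(alpha[i] @ beta[j] - beta[j] @ alpha[i], 0)
        assert np.allclose(alpha[i] @ alpha[j] - alpha[j] @ alpha[i],
                           sum(-2*eps[i, j, k]*alpha[k] for k in range(3)))
        assert np.allclose(beta[i] @ beta[j] - beta[j] @ beta[i],
                           sum(-2*eps[i, j, k]*beta[k] for k in range(3)))
```

The two sets commute because left and right multiplication commute by associativity, which is precisely what fails for octonions.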
So it is hard to write the supersymmetric extension, and one has to proceed in another way to visualize supersymmetry with octonions and the octonion gauge theory of dyons. Thus we may conclude that many of the properties of quaternion quantum mechanics lead to the properties of supersymmetric quantum mechanics because in both cases the energy is positive semi-definite, a gauge interaction arises automatically when one defines the quaternion units in terms of the Pauli spin matrices, and the supersymmetric charges are defined accordingly. If we describe the well-known N = 4 supersymmetry, the 4-dimensional real antisymmetric matrices $\alpha_{j}$ and $\beta_{j}$ for all $j=1,2,3$ associated with it [@key-26] satisfy the algebra given by $$\begin{aligned} \left\{ \alpha^{i},\,\alpha^{j}\right\} & = & \left\{ \beta^{i},\,\beta^{j}\right\} =-2\,\delta^{ij}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left[\alpha^{i}\,,\,\beta^{j}\right]=0\nonumber \\ \left[\alpha^{i},\,\alpha^{j}\right] & = & -2\,\varepsilon^{ijk}\alpha^{k}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left[\beta^{i},\,\beta^{j}\right]=-2\,\varepsilon^{ijk}\beta^{k}\label{eq:101}\end{aligned}$$ which is the algebra of quaternions. The matrices $\alpha_{j}$ and $\beta_{j}$ thus give the quaternion analogue, a $4\times4$ real matrix representation of the three quaternion units, since the properties of $\alpha_{j}$ and $\beta_{j}$ are the same as those of the three non-Abelian quaternion units. Thus we conclude that N=4 real supersymmetry can be visualized as N=1 quaternion and N=2 complex supersymmetry, and the theories of monopoles and dyons may thus be better understood in terms of hypercomplex number systems. [10]{} W. R. Hamilton, "Elements of Quaternions", Chelsea Publishing Co., NY, (1969). L. Silberstein, Phil. Mag., 63 (1912) 790. P. G. Tait, "An Elementary Treatise on Quaternions", Oxford Univ. Press, (1875). B. S. Rajput, S. R. Kumar and O. P. S. Negi, Lett. Nuovo Cimento, 34 (1982) 180; 36 (1983) 75.
V. Majernik, Physica, 39 (2000) 9-24 and references therein. S. L. Adler, "Quaternion Quantum Mechanics and Quantum Fields", Oxford Univ. Press, NY, (1995). D. Finkelstein, J. M. Jauch, S. Schiminovich and D. Speiser, J. Math. Phys., 4 (1963) 788. D. V. Duc and V. T. Cuong, Communications in Physics, 8 (1998) 197. K. Morita, Prog. Theor. Phys., 67 (1982) 1860; 65 (1981) 2071. P. S. Bisht, O. P. S. Negi and B. S. Rajput, Int. J. Theor. Phys., 32 (1993) 2099. A. J. Davies, Phys. Rev., A49 (1994) 714. Shalini Bisht, P. S. Bisht and O. P. S. Negi, Il Nuovo Cimento, B113 (1998) 1449. A. Waser, AW-Verlag, www.aw-verlag.ch (2000); V. V. Kravchenko, "Applied Quaternion Analysis", Heldermann Verlag, Germany (2003). M. F. Sohnius, Phys. Rep., 128 (1985) 53. P. Fayet and S. Ferrara, Phys. Rep., 32 (1977) 247. F. Cooper, A. Khare and U. Sukhatme, Phys. Rep., 251 (1995) 267. V. A. Kostelecky and D. K. Campbell, "Supersymmetry in Physics", North Holland, (1985). A. Bilal, "Introduction to Supersymmetry", hep-th/0101055 v1. E. Witten, Nucl. Phys., B202 (1982) 213. C. M. Hull, "The Geometry of Supersymmetric Quantum Mechanics", hep-th/9910028. A. Das, S. Okubo and S. A. Pernice, Mod. Phys. Lett., A12 (1997) 581; A. Das and S. A. Pernice, "Higher Dimensional SUSY Quantum Mechanics", hep-th/9612125. B. S. Rajput and D. C. Joshi, Had. J., 4 (1981) 1805. B. S. Rajput and Om Prakash, Indian J. Phys., A53 (1979) 274. A. J. Davies, Phys. Rev., D41 (1990) 2628; A. Govorkov, Theor. Math. Phys., 68 (1987) 893. P. Rotelli, Mod. Phys. Lett., A4 (1989) 933; A4 (1989) 1763. H. Osborn, Phys. Lett., B83 (1979) 321; P. Di Vecchia, "Duality in Supersymmetric Gauge Theories", hep-th/9608090 and references therein. [^1]: Talk presented at the conference on "FUNCTION THEORIES IN HIGHER DIMENSIONS", Tampere University of Technology, Tampere, Finland, June 12-16, 2006.\ Address from July 01 to August 31, 2006: Universität Konstanz, Fachbereich Physik, Prof. Dr. H. Dehnen, Postfach M 677, D-78457 Konstanz, Germany
--- abstract: 'We derive explicit forms of Markovian transition probability densities for the velocity space and phase-space Brownian motion of a charged particle in a constant magnetic field.' author: - | Rados[ł]{}aw Czopnik\ Institute of Theoretical Physics, University of Wroc[ł]{}aw,\ PL-50 205 Wroc[ł]{}aw, Poland\ and\ Piotr Garbaczewski\ Institute of Physics, Pedagogical University,\ PL-65 069 Zielona Góra, Poland title: Charged Brownian particle in a magnetic field --- Motivation ========== An old-fashioned problem of the Brownian motion of a charged particle in a constant magnetic field has originated from studies of the diffusion of plasma across a magnetic field [@Tay], [@Kur] and nowadays, together with a free Brownian motion example, stands for a textbook illustration of how transport and auto-correlation functions should be computed in generic situations governed by the Langevin equation cf. [@Bal] but also [@Sch], [@vKa]. To our knowledge, except for the paper [@Kur] no attempt was made in the literature to give a complete characterization of the pertinent stochastic process. However a striking peculiarity of Ref. [@Kur] is that the Brownian motion in a magnetic field is there described in terms of *operator-valued (matrix-valued functions) probability distributions that involve fractional powers of matrices. In consequence, we have no clean relationship with the standard formalism of Kramers-Smoluchowski equations, nor ways to stay in conformity with the standard wisdom about probabilistic procedures valid in case of the free Brownian motion (Ornstein-Uhlenbeck process), cf. [@Step], [@Cha], [@Nel]. 
Therefore, we address the issue of the Brownian motion of a charged particle in a magnetic field anew, to unravel its features as a fully-fledged stochastic diffusion process. Velocity-space diffusion process ================================ The standard analysis of the Brownian motion of a free particle employs the Langevin equation $\frac{d\overrightarrow{u}}{dt}=-\beta \overrightarrow{u}+ \overrightarrow{A}\left( t\right)$ where $\overrightarrow{u}$ denotes the velocity of the particle and the influence of the surrounding medium on the motion (random acceleration) of the particle is modeled by means of two independent contributions. A systematic part $-\beta \overrightarrow{u}$ represents a dynamical friction. The remaining fluctuating part $\overrightarrow{A}\left( t\right) $ is supposed to display the statistics of the familiar white noise: (i) $\overrightarrow{A}\left( t\right) $ is independent of $\overrightarrow{u}$, (ii) $\left\langle A_{i}\left( s\right) \right\rangle = 0$ and $\left\langle A_{i}\left( s\right) A_{j}\left( s^{\shortmid }\right) \right\rangle =2q\delta _{ij}\delta \left( s-s^{\shortmid }\right) $ for $i,j=1,2,3$, where $q=\frac{k_{B}T}{m}\beta $ is a physical parameter. The Ornstein-Uhlenbeck stochastic process comes out on that conceptual basis. The linear friction model can be adapted to the case of diffusion of charged particles in the presence of a constant magnetic field which acts upon particles via the Lorentz force. The Langevin equation for that motion reads: $$\frac{d\overrightarrow{u}}{dt}=-\beta \overrightarrow{u}+\frac{q_{e}}{mc}\overrightarrow{u}\times \overrightarrow{B}+\overrightarrow{A}\left( t\right) \label{Langevin}$$ where $q_{e}$ denotes the electric charge of the particle of mass $m$. Let us assume for simplicity that the constant magnetic field $\overrightarrow{B}$ is directed along the z-axis of a Cartesian reference frame: $ \overrightarrow{B}=\left( 0,0,B\right) $ and $B=const$. In this case Eq.
(\[Langevin\]) takes the form $$\frac{d\overrightarrow{u}}{dt}=-\Lambda \overrightarrow{u}+ \overrightarrow{A}\left( t\right) \label{LanII}$$ where $$\Lambda =\left( \begin{array}{ccc} \beta & -\omega _{c} & 0 \\ \omega _{c} & \beta & 0 \\ 0 & 0 & \beta \end{array} \right)$$ and $\omega _{c}=\frac{q_{e}B}{mc}$ denotes the Larmor frequency. Assuming the Langevin equation to be (at least formally) solvable, we can infer a probability density $P\left( \overrightarrow{u},t|\overrightarrow{u}_{0}\right) $, $t>0$, conditioned by the initial velocity data choice $\overrightarrow{u}=\overrightarrow{u}_{0}$ at $t=0$. Physical circumstances of the problem enforce a demand: (i) $P\left( \overrightarrow{u},t|\overrightarrow{u}_{0}\right) \rightarrow \delta ^{3}\left( \overrightarrow{u}-\overrightarrow{u}_{0}\right) $ as $t\rightarrow 0$ and (ii) $P\left( \overrightarrow{u},t|\overrightarrow{u}_{0}\right) \rightarrow \left( \frac{m}{2\pi k_{B}T}\right) ^{\frac{3}{2}}\exp \left( -\frac{m |\overrightarrow{u}| ^{2}}{2k_{B}T} \right) $ as $t\rightarrow \infty $. A formal solution of Eq. (\[LanII\]) reads: $$\overrightarrow{u}\left( t\right) -e^{-\Lambda t}\overrightarrow{u}_{0}=\int_{0}^{t}e^{-\Lambda \left( t-s\right) } \overrightarrow{A}\left( s\right) ds \enspace . \label{sol1}$$ By taking into account that $$e^{-\Lambda t}=e^{-\beta t}\left( \begin{array}{ccc} \cos \omega _{c}t & \sin \omega _{c}t & 0 \\ -\sin \omega _{c}t & \cos \omega _{c}t & 0 \\ 0 & 0 & 1 \end{array} \right) =e^{-\beta t}U\left( t\right)$$ we can rewrite (\[sol1\]) as follows $$\overrightarrow{u}\left( t\right) -e^{-\beta t}U\left( t\right) \overrightarrow{u}_{0}=\int_{0}^{t}e^{-\beta \left( t-s\right) }U\left( t-s\right) \overrightarrow{A}\left( s\right) ds \enspace .$$ Statistical properties of $\overrightarrow{u}\left( t\right) -e^{-\Lambda t}\overrightarrow{u}_{0}$ are identical with those of $\int_{0}^{t}e^{-\Lambda \left( t-s\right) } \overrightarrow{A}\left( s\right) ds$.
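The factorization $e^{-\Lambda t}=e^{-\beta t}U(t)$ can be cross-checked numerically by evaluating the matrix exponential from its Taylor series; a minimal pure-Python sketch with illustrative parameter values (not taken from the text):

```python
import math

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

beta, omega_c, t = 0.7, 1.3, 0.9   # illustrative values
Lam = [[beta, -omega_c, 0.0], [omega_c, beta, 0.0], [0.0, 0.0, beta]]
lhs = expm([[-Lam[i][j] * t for j in range(3)] for i in range(3)])

# closed form e^{-beta t} U(t): damped rotation about the z-axis
c, s = math.cos(omega_c * t), math.sin(omega_c * t)
rhs = [[math.exp(-beta * t) * v for v in row]
       for row in [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]]

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
print("max deviation:", err)
```

The deviation is at machine-precision level, confirming that $\Lambda = \beta I + \omega_c J$ splits into a uniform damping times a rotation.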
In consequence, the problem of deducing a probability density $P\left( \overrightarrow{u},t|\overrightarrow{u}_{0}\right) $ is equivalent to deriving the probability distribution of the random vector $$\overrightarrow{S}=\int_{0}^{t}\psi \left( s\right) \overrightarrow{A}\left( s\right) ds \label{Sdef}$$ where $\psi \left( s\right) =e^{-\Lambda \left( t-s\right) }=e^{-\beta \left( t-s\right) }U\left( t-s\right)$. In view of the integration with respect to time, the white noise term $ \overrightarrow{A}\left( s\right) $ is amenable to a more rigorous analysis that invokes the Wiener process increments and their statistics, [@Doob]. Let us divide the time integration interval into a large number of small subintervals $\Delta t$. We adjust them suitably to assure that effectively $\psi \left( s\right) $ is constant on each subinterval $\left( j\Delta t,\left( j+1\right) \Delta t\right) $ and equal $\psi \left( j\Delta t\right) $. As a result we obtain the expression $$\overrightarrow{S}=\sum_{j=0}^{N-1}\psi \left( j\Delta t\right) \int_{j\Delta t}^{\left( j+1\right) \Delta t}\overrightarrow{A}\left( s\right) ds \enspace . \label{S}$$ Here $ \overrightarrow{B}\left( \Delta t\right) =\int_{j\Delta t}^{\left( j+1\right) \Delta t}\overrightarrow{A}\left( s\right) ds$ stands for the above-mentioned Wiener process increment. Physically, $\overrightarrow{B}\left( \Delta t\right) $ represents the *net acceleration* which a Brownian particle may suffer (in fact accumulates) during an interval of time $\Delta t$. Equation (\[S\]) becomes $$\overrightarrow{S}=\sum_{j=0}^{N-1}\psi \left( j\Delta t\right) \overrightarrow{B}\left( \Delta t\right) = \sum_{j=0}^{N-1}\overrightarrow{s}_{j}$$ where we introduce $\overrightarrow{s}_{j}=\psi \left( j\Delta t\right) \overrightarrow{B}\left( \Delta t\right) = \psi _{j}\overrightarrow{B}\left( \Delta t\right) $. The Wiener process argument [@Cha], [@Nel] allows us to infer the probability distribution of $\overrightarrow{s}_{j}$.
It is enough to employ the fact that the distribution of $\overrightarrow{B}\left( \Delta t\right) $ is Gaussian with mean zero and per-component variance $2q\Delta t$, where $q=\frac{k_{B}T}{m}\beta $. Then $$w\left[ \overrightarrow{B}\left( \Delta t\right) \right] = \left( \frac{1}{4\pi q\Delta t}\right) ^{\frac{3}{2}}\exp \left( -\frac{\left| \overrightarrow{B}\left( \Delta t\right) \right| ^{2}}{4q\Delta t}\right) \label{w(B)}$$ and in view of $\overrightarrow{s}_{j}=\psi _{j} \overrightarrow{B}\left( \Delta t\right) $ by performing the change of variables in (\[w(B)\]) we get $$\widetilde{w}\left[ \overrightarrow{s}_{j}\right] = \det \left[ \psi _{j}^{-1}\right] w\left[ \psi _{j}^{-1}\overrightarrow{s}_{j}\right] = \frac{1}{\det \psi _{j}}w\left[ \psi _{j}^{-1}\overrightarrow{s}_{j}\right] \enspace .$$ Since $ \det \psi \left( s\right) =e^{-3\beta \left( t-s\right) } $ and $\psi ^{-1}\left( s\right) = U\left[ -\left( t-s\right) \right] e^{\beta \left( t-s\right) }$ we obtain $$\widetilde{w}\left[ \overrightarrow{s}_{j}\right] =\left( \frac{1}{4\pi q\Delta t}\right) ^{\frac{3}{2}}\frac{1}{e^{-3\beta \left( t-j\Delta t\right) }}\exp \left( -\frac{\left| e^{\beta \left( t-j\Delta t\right) }U\left[ -\left( t-j\Delta t\right) \right] \overrightarrow{s}_{j}\right| ^{2}}{4q\Delta t}\right)$$ and finally $$\widetilde{w}\left[ \overrightarrow{s}_{j}\right] = \left( \frac{1}{4\pi q\Delta t}\frac{1}{e^{-2\beta \left( t-j\Delta t\right) }} \right) ^{\frac{3}{2}}\exp \left( -\frac{\left| \overrightarrow{s}_{j} \right| ^{2}}{4q\Delta te^{-2\beta \left( t-j\Delta t\right) }}\right) \enspace .$$ Clearly, $\overrightarrow{s}_{j}$ are mutually independent random variables whose distribution is Gaussian with mean zero and variance $\sigma _{j}^{2}=2q\Delta te^{-2\beta \left( t-j\Delta t\right) }$. Hence, the probability distribution of $\overrightarrow{S}= \sum_{j=0}^{N-1}\overrightarrow{s}_{j}$ is again Gaussian with mean zero.
Its variance equals the sum of variances of $\overrightarrow{s}_{j}$ i.e. $ \sigma ^{2}=\sum_{j}\sigma _{j}^{2}=2q\sum_{j}\Delta te^{-2\beta \left( t-j\Delta t\right) }$. After taking the limit $N\rightarrow \infty $ $(\Delta t\rightarrow 0)$ we arrive at $$\sigma ^{2}=2q\int_{0}^{t}dse^{-2\beta \left( t-s\right) }= \frac{k_{B}T}{m}% \left( 1-e^{-2\beta t}\right) \enspace .$$ Because of $\overrightarrow{S}=\overrightarrow{u} \left( t\right) -e^{-\Lambda t}% \overrightarrow{u}_{0}$ the transition probability density of the Brownian particle velocity, conditioned by the initial data $\overrightarrow{u}_0$ at $t_0=0$ reads $$P\left( \overrightarrow{u},t|\overrightarrow{u}_{0}\right) = \left( \frac{1}{% 2\pi \frac{k_{B}T}{m}\left( 1-e^{-2\beta t}\right) } \right) ^{\frac{3}{2}% }\exp \left( -\frac{\left| \overrightarrow{u}- e^{-\Lambda t}\overrightarrow{u% }_{0}\right| ^{2}}{2\frac{k_{B}T}{m}\left( 1-e^{-2\beta t}\right) } \right) \enspace .$$ The process is Markovian and time-homogeneous, hence the above formula can be trivially extended to encompass the case of arbitrary $t_{0}\neq 0$ : $ P\left( \overrightarrow{u},t|\overrightarrow{u}_{0},t_{0}\right)$ arises by substituting everywhere $t-t_0$ instead of $t$. Physical arguments (cf. demand (ii) preceding Eq. (4)) refer to an asymptotic probability distribution (invariant measure density) $P(u)$ of the random variable $\overrightarrow{u}$ in the Maxwell-Boltzmann form $$P\left( \overrightarrow{u}\right) =\left( \frac{m}{2\pi k_{B}T} \right) ^{\frac{3}{2}}\exp \left( -\frac{m\left| \overrightarrow{u}\right| ^{2}}{2k_{B}T}\right) \enspace .$$ This time-independent probability density together with the time-homogeneous transition density (15) uniquely determine a stationary Markovian stochastic process for which we can evaluate various mean values. 
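The stationary regime can also be checked by direct simulation. The sketch below is not from the paper: it integrates the planar part of Eq. (\[LanII\]) with a simple Euler-Maruyama discretization for an ensemble of particles (all parameter values illustrative) and compares the long-time per-component velocity variance with the Maxwell-Boltzmann value $k_BT/m$:

```python
import math, random

random.seed(2)
beta, omega_c = 1.0, 2.0
kT_over_m = 0.5                      # illustrative k_B T / m
q = kT_over_m * beta                 # noise intensity, q = k_B T beta / m
dt, n_steps, n_particles = 0.005, 1200, 3000
sigma = math.sqrt(2.0 * q * dt)      # std of each white-noise increment

# Euler-Maruyama integration of du/dt = -Lambda u + A(t) in the XY plane
us = [[1.0, 0.0] for _ in range(n_particles)]   # deterministic initial u_0
for _ in range(n_steps):
    for u in us:
        ux, uy = u
        u[0] = ux + (-beta * ux + omega_c * uy) * dt + random.gauss(0.0, sigma)
        u[1] = uy + (-omega_c * ux - beta * uy) * dt + random.gauss(0.0, sigma)

# after t = 6/beta the memory of u_0 is negligible (e^{-2 beta t} ~ 1e-5)
var_x = sum(u[0] ** 2 for u in us) / n_particles
print("var(u_x) =", var_x, " target =", kT_over_m)
```

Up to $O(\Delta t)$ discretization bias and sampling noise, the simulated variance reproduces $k_BT/m$, independently of $\omega_c$: the magnetic field rotates the velocity but does not alter the stationary Maxwell-Boltzmann law.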
Expectation values of velocity components vanish: $ \left\langle u_{i}\left( t\right) \right\rangle = \int_{-\infty }^{\infty }u_{i}P\left( \overrightarrow{u}\right) d\overrightarrow{u}=0 $ for $i=1,2,3$. The matrix of the second moments (velocity auto-correlation functions) reads $$\left\langle u_{i}\left( t\right) u_{j}\left( t_{0}\right) \right\rangle =\int_{-\infty }^{\infty }u_{i}u_{j}^{0}P\left( \overrightarrow{u},t;% \overrightarrow{u}_{0},t_{0}\right) d\overrightarrow{u}d\overrightarrow{u}% _{0}$$ where $i,j=1,2,3$ and in view of $ P\left( \overrightarrow{u},t;\overrightarrow{u}_{0},t_{0}\right) = P\left( \overrightarrow{u},t|\overrightarrow{u}_{0},t_{0}\right) P\left( \overrightarrow{u}_{0}\right)$ we arrive at the compact expression $$\frac{k_{B}T}{m}e^{-\Lambda \left| t-t_{0}\right| }=\frac{k_{B}T}{m}% e^{-\beta \left| t-t_{0}\right| }\left( \begin{array}{ccc} \cos \omega _{c}\left| t-t_{0}\right| & \sin \omega _{c}\left| t-t_{0}\right| & 0 \\ -\sin \omega _{c}\left| t-t_{0}\right| & \cos \omega _{c}\left| t-t_{0}\right| & 0 \\ 0 & 0 & 1 \end{array} \right) \enspace .$$ In particular, the auto-correlation function (second moment) of the $x$-component of velocity equals $$\left\langle u_{1}\left( t\right) u_{1}\left( t_{0}\right) \right\rangle =% \frac{k_{B}T}{m}e^{-\beta \left| t-t_{0}\right| }\cos \omega _{c}\left| t-t_{0}\right| \label{autocor}$$ in agreement with white noise calculations of Refs. [@Tay] and [@Bal], cf. Chap.11, formula (11.25). 
The so-called running diffusion coefficient arises here via straightforward integration of the function $R_{11}(\tau )= \langle u_1(t)u_1(t_0)\rangle $ where $\tau = t-t_0 >0$: $$D_1(t) = \int_0^t \langle u_1(0)u_1(\tau )\rangle d\tau = {{k_BT}\over m}\, {{\beta + [\omega _c\sin (\omega _ct) - \beta \cos (\omega _ct)]\exp (-\beta t)}\over {\beta ^2 + {\omega _c}^2}}$$ with an obvious asymptotics (the same for $D_2(t)$): $D_B=\lim_{t\rightarrow \infty } D_1(t)= {{k_BT}\over m}\, {\beta \over {\beta ^2 + {\omega _c}^2}}$ and the large friction ($\omega _c$ fixed and bounded) version $D= {{k_BT}\over {m\beta }}$. Spatial process - dynamics in the plane ======================================= The cylindrical symmetry of the problem allows us to consider separately processes running on the $XY$ plane and along the $Z$-axis (where the free Brownian motion takes place). We shall confine further attention to the two-dimensional $XY$-plane problem. Henceforth, each vector will carry two components which correspond to the $x$ and $y$ coordinates respectively. We will directly refer to the vector and matrix quantities introduced in the previous section, but while keeping the same notation, we shall simply disregard their $z$-coordinate contributions. We define the spatial displacement $\overrightarrow{r}$ of the Brownian particle as follows $$\overrightarrow{r}-\overrightarrow{r}_{0}=\int_{0}^{t} \overrightarrow{u}\left( \eta \right) d\eta$$ where $\overrightarrow{u}\left( t\right) $ is given by Eq. (\[LanII\]) (except for disregarding the third coordinate). Our aim is to derive the probability distribution of $\overrightarrow{r}$ at time $t$ provided that the particle position and velocity were equal $\overrightarrow{r}_{0}$ and $\overrightarrow{u}_{0}$ respectively, at time $t_{0}=0$. To that end we shall mimic procedures of the previous section.
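The closed form of $D_1(t)$ and its asymptotics can be verified by direct quadrature of the auto-correlation function (\[autocor\]); a short sketch with illustrative parameter values:

```python
import math

beta, omega_c, kT_over_m, t = 1.5, 2.5, 1.0, 3.0   # illustrative values

def autocorr(tau):
    # velocity auto-correlation <u1(0) u1(tau)> from Eq. (19)-type formula
    return kT_over_m * math.exp(-beta * tau) * math.cos(omega_c * tau)

# midpoint-rule quadrature of D1(t) = int_0^t <u1(0) u1(tau)> dtau
n = 200000
dtau = t / n
numeric = sum(autocorr((j + 0.5) * dtau) for j in range(n)) * dtau

closed = kT_over_m * (beta + (omega_c * math.sin(omega_c * t)
         - beta * math.cos(omega_c * t)) * math.exp(-beta * t)) / (beta**2 + omega_c**2)
D_B = kT_over_m * beta / (beta**2 + omega_c**2)    # t -> infinity limit

print(numeric, closed, D_B)
```

The quadrature agrees with the closed form, and for $\beta t \gg 1$ the running coefficient settles at $D_B$, which is suppressed relative to the field-free value $D$ by the factor $\beta^2/(\beta^2+\omega_c^2)$.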
In view of: $$\overrightarrow{r}-\overrightarrow{r}_{0}- \int_{0}^{t}e^{-\Lambda \eta }d\eta \,\overrightarrow{u}_{0}=\int_{0}^{t}d\eta \int_{0}^{\eta }ds\, e^{-\Lambda \left( \eta -s\right) }\overrightarrow{A}\left( s\right)$$ we have $$\overrightarrow{r}-\overrightarrow{r}_{0}- \Lambda ^{-1}\left( 1-e^{-\Lambda t}\right) \overrightarrow{u}_{0}=\int_{0}^{t}\Lambda ^{-1}\left( 1-e^{\Lambda \left( s-t\right) }\right) \overrightarrow{A}\left( s\right) ds$$ where $$\Lambda ^{-1}=\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left( \begin{array}{cc} \beta & \omega _{c} \\ -\omega _{c} & \beta \end{array} \right)$$ is the inverse of the matrix $\Lambda $ (regarded as the upper-left $2\times 2$ sub-matrix of that originally introduced in Eq. (3)). Let us define two auxiliary matrices $$\begin{aligned} \Omega &\equiv &\Lambda ^{-1}\left( 1-e^{-\Lambda t}\right) \label{omega} \\ \phi \left( s\right) &\equiv &\Lambda ^{-1}\left( 1-e^{\Lambda \left( s-t\right) }\right) \notag \enspace .\end{aligned}$$ Because of: $$e^{-\Lambda t}=\exp \left\{ - t \left( \begin{array}{cc} \beta & -\omega _{c} \\ \omega _{c} & \beta \end{array} \right) \right\} =e^{-\beta t}\left( \begin{array}{cc} \cos \omega _{c}t & \sin \omega _{c}t \\ -\sin \omega _{c}t & \cos \omega _{c}t \end{array} \right) =e^{-\beta t}U\left( t\right)$$ we can represent the matrices $\Omega $, $\phi \left( s\right) $ in a more detailed form.
We have: $$\Omega =\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left\{ \left( \begin{array}{cc} \beta & \omega _{c} \\ -\omega _{c} & \beta \end{array} \right) -e^{-\beta t}\left( \begin{array}{cc} \beta & \omega _{c} \\ -\omega _{c} & \beta \end{array} \right) \left( \begin{array}{cc} \cos \omega _{c}t & \sin \omega _{c}t \\ -\sin \omega _{c}t & \cos \omega _{c}t \end{array} \right) \right\}$$ and $$\phi \left( s\right) =\Lambda ^{-1}\left( 1-e^{-\beta \left( t-s\right) }U\left( t-s\right) \right) =$$ $$\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left( \begin{array}{cc} \beta & \omega _{c} \\ -\omega _{c} & \beta \end{array} \right) \left( \begin{array}{cc} 1-e^{\beta \left( s-t\right) }\cos \omega _{c}\left( s-t\right) & e^{\beta \left( s-t\right) }\sin \omega _{c}\left( s-t\right) \\ -e^{\beta \left( s-t\right) }\sin \omega _{c}\left( s-t\right) & 1-e^{\beta \left( s-t\right) }\cos \omega _{c}\left( s-t\right) \end{array} \right) \enspace .$$ The next steps imitate the procedures of the previous section. Thus, we seek the probability distribution of the random (planar) vector $ \overrightarrow{R}=\int_{0}^{t}\phi \left( s\right) \overrightarrow{A}\left( s\right) ds \label{Rdef}$ where $\overrightarrow{R}=\overrightarrow{r}-\overrightarrow{r}_{0}-\Omega \overrightarrow{u}_{0}$. Dividing the time interval $\left( 0,t\right) $ into small subintervals to assure that $\phi \left( s\right) $ can be regarded constant over the time span $\left( j\Delta t,\left( j+1\right) \Delta t\right) $ and equal $\phi \left( j\Delta t\right) $, we obtain $$\overrightarrow{R}=\sum_{j=0}^{N-1}\phi \left( j\Delta t\right) \int_{j\Delta t}^{\left( j+1\right) \Delta t}\overrightarrow{A}\left( s\right) ds=\sum_{j=0}^{N-1}\phi \left( j\Delta t\right) \overrightarrow{B}\left( \Delta t\right) =\sum_{j=0}^{N-1}\overrightarrow{r}_{j}$$ where $\overrightarrow{r}_{j}=\phi \left( j\Delta t\right) \overrightarrow{B}\left( \Delta t\right) =\phi _{j}\overrightarrow{B}\left( \Delta t\right) $.
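The algebra around $\phi(s)$ is easy to get wrong by a sign, so a numerical cross-check is useful: build $\phi(s)=\Lambda^{-1}(1-e^{\Lambda(s-t)})$ from a series evaluation of the matrix exponential and compare its determinant with the closed form $\gamma(s)/(\beta^2+\omega_c^2)$, $\gamma(s)=1+e^{2\beta(s-t)}-2e^{\beta(s-t)}\cos\omega_c(s-t)$, used in what follows (parameter values illustrative):

```python
import math

def expm2(A, terms=80):
    """2x2 matrix exponential via Taylor series (small arguments assumed)."""
    result, term = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[(term[i][0] * A[0][j] + term[i][1] * A[1][j]) / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

beta, omega_c, t, s = 0.8, 1.7, 2.0, 0.6       # illustrative, 0 < s < t
d = beta**2 + omega_c**2
Lam_inv = [[beta / d, omega_c / d], [-omega_c / d, beta / d]]

E = expm2([[beta * (s - t), -omega_c * (s - t)],
           [omega_c * (s - t), beta * (s - t)]])          # e^{Lambda (s-t)}
M = [[(1.0 if i == j else 0.0) - E[i][j] for j in range(2)] for i in range(2)]
phi = [[Lam_inv[i][0] * M[0][j] + Lam_inv[i][1] * M[1][j]
        for j in range(2)] for i in range(2)]

det_phi = phi[0][0] * phi[1][1] - phi[0][1] * phi[1][0]
e = math.exp(beta * (s - t))
closed = (1.0 + e**2 - 2.0 * e * math.cos(omega_c * (s - t))) / d
print(det_phi, closed)
```

The two values agree to machine precision, confirming the determinant formula that enters the change of variables below.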
By invoking the probability distribution (10) we perform an appropriate change of variables: $\overrightarrow{r}_{j}=\phi _{j}\overrightarrow{B}\left( \Delta t\right) $ to yield a probability distribution of $\overrightarrow{r}_{j}$ $$\widetilde{w}\left[ \overrightarrow{r}_{j}\right] = \det \left[ \phi _{j}^{-1}\right] w\left[ \phi _{j}^{-1}\overrightarrow{r}_{j}\right] =\frac{1}{\det \phi _{j}}w\left[ \phi _{j}^{-1}\overrightarrow{r}_{j}\right] \enspace .$$ Presently (not to be confused with the previous steps (11)-(15)) we have $$\det \phi \left( s\right) =\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left( 1+e^{2\beta \left( s-t\right) }-2e^{\beta \left( s-t\right) }\cos \omega _{c}\left( s-t\right) \right)$$ and $$\phi ^{-1}\left( s\right) =\frac{1}{1+e^{2\beta \left( s-t\right) }-2e^{\beta \left( s-t\right) }\cos \omega _{c}\left( s-t\right) }\left[ 1-e^{\beta \left( s-t\right) }U\left( s-t\right) \right] \Lambda \enspace .$$ So, the inverse of the matrix $\phi _{j}$ has the form: $$\phi _{j}^{-1}=\frac{\widetilde{A}_{j}}{\gamma _{j}}$$ where $$\widetilde{A}_{j}=\left( \begin{array}{cc} 1-e^{\beta \left( j\Delta t-t\right) }\cos \omega _{c}\left( j\Delta t-t\right) & -e^{\beta \left( j\Delta t-t\right) }\sin \omega _{c}\left( j\Delta t-t\right) \\ e^{\beta \left( j\Delta t-t\right) }\sin \omega _{c}\left( j\Delta t-t\right) & 1-e^{\beta \left( j\Delta t-t\right) }\cos \omega _{c}\left( j\Delta t-t\right) \end{array} \right) \left( \begin{array}{cc} \beta & -\omega _{c} \\ \omega _{c} & \beta \end{array} \right)$$ and $$\gamma _{j}=1+e^{2\beta \left( j\Delta t-t\right) }-2e^{\beta \left( j\Delta t-t\right) }\cos \omega _{c}\left( j\Delta t-t\right) \enspace .$$ There holds: $$\det \phi _{j}^{-1}=\left( \det \phi _{j}\right) ^{-1}=\left( \beta ^{2}+\omega _{c}^{2}\right) \frac{1}{\gamma _{j}}$$ and as a consequence we arrive at the following probability distribution of $\overrightarrow{r}_{j}$ $$\widetilde{w}\left[ \overrightarrow{r}_{j}\right] = 
\frac{1}{\frac{1}{\beta ^{2}+\omega _{c}^{2}}\gamma _{j}}\left( \frac{1}{4\pi q\Delta t} \right) \exp \left\{ -\frac{\left| \widetilde{A}_{j}\left( \begin{array}{c} r_{j}^{x} \\ r_{j}^{y} \end{array} \right) \right| ^{2}}{4q\Delta t\,\gamma _{j}^{2}}\right\} \enspace .$$ In view of $$\left| \widetilde{A}_{j}\left( \begin{array}{c} r_{j}^{x} \\ r_{j}^{y} \end{array} \right) \right| ^{2}=\left( \beta ^{2}+ \omega _{c}^{2}\right) \gamma _{j}\left[ \left( r_{j}^{x}\right) ^{2}+ \left( r_{j}^{y}\right) ^{2}\right]$$ this finally leads to $$\widetilde{w}\left[ \overrightarrow{r}_{j}\right] = \left( \frac{\beta ^2 + \omega _{c}^2}{4\pi q\Delta t \gamma _{j}}\right) \exp \left\{ -\frac{(\beta ^2 + \omega _{c}^2)\, \left| \overrightarrow{r}_{j}\right| ^{2}} {4q\Delta t \gamma _{j}}\right\} \enspace .$$ Since this probability distribution is Gaussian with mean zero and variance $\sigma _{j}^{2}=2q\Delta t\frac{1}{\beta ^{2}+\omega _{c}^{2}} \gamma _{j}$, the random vector $\overrightarrow{R}$ as a sum of independent random variables $\overrightarrow{r}_{j}$ has the distribution $$w\left( \overrightarrow{R}\right) =\frac{1}{2\pi \sum_{j} \sigma _{j}^{2}}\exp \left( -\frac{R_{x}^{2}+R_{y}^{2}}{2\sum_{j} \sigma _{j}^{2}}\right)$$ where $$\sigma ^{2}=\sum_{j}\sigma _{j}^{2}=2q\sum_{j}\Delta t\frac{1}{\beta ^{2}+\omega _{c}^{2}}\gamma _{j} \enspace .$$ In the limit of $\Delta t\rightarrow 0$ we arrive at the integral $$\sigma ^{2}=2q\frac{1}{\beta ^{2}+\omega _{c}^{2}} \int_{0}^{t}\gamma \left( s\right) ds$$ with $ \int_{0}^{t}\gamma \left( s\right) ds=t+ \Theta $, where $$\Theta = \Theta (t) = \frac{1}{2\beta }\left( 1-e^{-2\beta t}\right) -2\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left[ \beta +\left( \omega _{c}\sin \omega _{c}t-\beta \cos \omega _{c}t\right) e^{-\beta t}\right] \enspace .$$ That gives rise to an ultimate form of the transition probability density of the spatial displacement process: $$P\left( \overrightarrow{r},t|\overrightarrow{r}_{0},t_{0}=0,
\overrightarrow{u}_{0}\right) =\frac{1}{4\pi \frac{k_{B}T}{m}\frac{\beta } {\beta ^{2}+\omega _{c}^{2}}\left( t+\Theta \right) }\exp \left( - \frac{\left| \overrightarrow{r}-\overrightarrow{r}_{0}-\Omega \overrightarrow{u}_{0} \right| ^{2}}{4\frac{k_{B}T}{m}\frac{\beta }{\beta ^{2}+\omega _{c}^{2}} \left( t+\Theta \right) }\right)$$ with $\Omega =\Omega (t)$ defined in Eqs. (\[omega\]), (27). Notice that the asymptotic diffusion coefficient $D_B=D{\beta^2\over {\beta ^2 + \omega ^2_c}}$ encodes an attenuation signature for the spatial dispersion (when $\omega _c$ grows at fixed $\beta $). The spatial displacement process governed by the above transition probability density surely is *not* Markovian. That can be checked by inspection: the Chapman-Kolmogorov identity is not valid, just as in the standard free Brownian motion example, where the (sole) spatial dynamics induced by the Ornstein-Uhlenbeck process is non-Markovian as well. Phase-space process =================== Axial direction --------------- We take advantage of the cylindrical symmetry of our problem, and consider separately the (free) Brownian dynamics in the direction parallel to the magnetic field vector, i.e. along the $Z$-axis. That amounts to a familiar Ornstein-Uhlenbeck process in its extended phase-space form. In the absence of external forces, the kinetic (Kramers-Fokker-Planck) equation reads: $$\partial _t W + u\nabla _zW = \beta \nabla _u(Wu) + q \triangle _uW$$ where $q=D\beta ^2$. Here $\beta $ is the friction coefficient, $D$ will be identified later with the spatial diffusion constant, and (as before) we set $D=k_BT/(m\beta )$ in conformity with the Einstein fluctuation-dissipation identity.
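Before moving on, the identity $\int_0^t\gamma(s)\,ds = t+\Theta(t)$ that underlies the planar spatial variance can be confirmed by direct quadrature; a small sketch with illustrative parameter values:

```python
import math

beta, omega_c, t = 1.2, 2.3, 1.7     # illustrative values

def gamma(s):
    # gamma(s) = 1 + e^{2 beta (s-t)} - 2 e^{beta (s-t)} cos(omega_c (s-t))
    e = math.exp(beta * (s - t))
    return 1.0 + e * e - 2.0 * e * math.cos(omega_c * (s - t))

# midpoint-rule quadrature of int_0^t gamma(s) ds
n = 100000
ds = t / n
numeric = sum(gamma((j + 0.5) * ds) for j in range(n)) * ds

Theta = ((1.0 - math.exp(-2.0 * beta * t)) / (2.0 * beta)
         - 2.0 / (beta**2 + omega_c**2)
         * (beta + (omega_c * math.sin(omega_c * t)
                    - beta * math.cos(omega_c * t)) * math.exp(-beta * t)))
print(numeric, t + Theta)
```

Since $\Theta(t)$ stays bounded, $\sigma^2 \simeq 2D_B t$ at large times, which is the attenuated diffusive spreading noted above.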
The joint probability distribution (in fact, density) $W(z,u,t)$ for a freely moving Brownian particle which at $t=0$ initiates its motion at $z_0=0$ with an arbitrary initial velocity $u_0$ can be given in the form of the maximally symmetric displacement probability law: $$W(z,u,t)= W(R,S) = [4\pi ^2(FG-H^2)]^{-1/2} \cdot \exp \left\{ - {{GR^2 - 2HRS + FS^2}\over {2(FG - H^2)}}\right\}$$ where $R=z-u_0(1-e^{-\beta t})\beta ^{-1}$, $S=u-u_0e^{-\beta t}$, while $ F = {D\over \beta }(2\beta t - 3 +4e^{-\beta t}- e^{-2\beta t})$, $G=D\beta (1-e^{-2\beta t})$ and $H=D(1-e^{-\beta t})^2$. Planar process -------------- Now we shall consider the Brownian dynamics in the direction perpendicular to the magnetic field $\overrightarrow{B}$, hence (in terms of configuration-space variables) we address the issue of the planar dynamics. We are interested in the complete phase-space process, hence we need to specify the transition probability density $P\left( \overrightarrow{r},\overrightarrow{u},t|\overrightarrow{r}_{0},\overrightarrow{u}_{0},t_{0}=0\right) $ of the Markov process conditioned by the initial data $\overrightarrow{u}=\overrightarrow{u}_{0}$ and $\overrightarrow{r}= \overrightarrow{r}_{0}$ at time $t_{0}=0$. That is equivalent to deducing the joint probability distribution $W\left( \overrightarrow{S},\overrightarrow{R}\right) $ of the random vectors $\overrightarrow{S}$ and $\overrightarrow{R}$, previously defined to appear in the form $\overrightarrow{S}=\overrightarrow{u} \left( t\right) -e^{-\Lambda t}\overrightarrow{u}_{0}$ and $\overrightarrow{R}=\overrightarrow{r}-\overrightarrow{r}_{0}- \Omega \overrightarrow{u}_{0}$ respectively. Let us stress that presently all vectors are regarded as two-dimensional versions (the third component being simply disregarded) of the original random variables we have discussed so far in Sections 2 and 3. The vectors $\overrightarrow{S}$ and $\overrightarrow{R}$ both share a Gaussian distribution with mean zero.
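The axial coefficients $F$, $G$, $H$ follow from elementary integrals once $S$ and $R$ are written as stochastic integrals with kernels $e^{-\beta\tau}$ and $(1-e^{-\beta\tau})/\beta$; the integral forms below are my restatement of that standard computation, and the closed forms quoted above can be cross-checked numerically (illustrative parameters, midpoint-rule quadrature):

```python
import math

beta, D, t = 0.9, 0.7, 2.4           # illustrative; D = k_B T / (m beta)
q = D * beta**2

def quad(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (j + 0.5) * h) for j in range(n)) * h

# defining integrals for the second moments of S = u - u0 e^{-beta t}
# and R = z - u0 (1 - e^{-beta t})/beta
G_num = 2*q*quad(lambda tau: math.exp(-2*beta*tau), 0, t)
H_num = 2*q*quad(lambda tau: math.exp(-beta*tau)*(1 - math.exp(-beta*tau))/beta, 0, t)
F_num = 2*q*quad(lambda tau: ((1 - math.exp(-beta*tau))/beta)**2, 0, t)

G = D*beta*(1 - math.exp(-2*beta*t))
H = D*(1 - math.exp(-beta*t))**2
F = (D/beta)*(2*beta*t - 3 + 4*math.exp(-beta*t) - math.exp(-2*beta*t))
print(G_num, G, H_num, H, F_num, F)
```

All three pairs agree, and $FG-H^2>0$ as required for a proper bivariate Gaussian.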
Consequently, the joint distribution $W\left( \overrightarrow{S},\overrightarrow{R}\right) $ is determined by the matrix of variances and covariances: $C = \left( c_{ij}\right) = \left(\left\langle x_{i}x_{j}\right\rangle \right)$, where we abbreviate the four phase-space variables in a single notation $x=\left( S_{1},S_{2},R_{1},R_{2}\right) $ and label the components of $x$ by $i,j=1,2,3,4$. In terms of $\overrightarrow{R}$ and $\overrightarrow{S}$ the covariance matrix $C$ reads: $$C=\left( \begin{array}{cccc} \left\langle S_{1}S_{1}\right\rangle & \left\langle S_{1}S_{2}\right\rangle & \left\langle S_{1}R_{1}\right\rangle & \left\langle S_{1}R_{2}\right\rangle \\ \left\langle S_{2}S_{1}\right\rangle & \left\langle S_{2}S_{2}\right\rangle & \left\langle S_{2}R_{1}\right\rangle & \left\langle S_{2}R_{2}\right\rangle \\ \left\langle R_{1}S_{1}\right\rangle & \left\langle R_{1}S_{2}\right\rangle & \left\langle R_{1}R_{1}\right\rangle & \left\langle R_{1}R_{2}\right\rangle \\ \left\langle R_{2}S_{1}\right\rangle & \left\langle R_{2}S_{2}\right\rangle & \left\langle R_{2}R_{1}\right\rangle & \left\langle R_{2}R_{2}\right\rangle \end{array} \right) \enspace .$$ The joint probability distribution of $\overrightarrow{S}$ and $\overrightarrow{R}$ is given by $$W\left( \overrightarrow{S},\overrightarrow{R}\right) =W\left( \overrightarrow{x}\right) =\frac{1}{4\pi ^{2}} \left( \frac{1}{\det C}\right)^{\frac{1}{2}}\exp \left( -\frac{1}{2}\sum_{i,j}c_{ij}^{-1}x_{i}x_{j}\right)$$ where $c_{ij}^{-1}$ denotes a component of the inverse matrix $C^{-1}$. The probability distributions of $\overrightarrow{S}$ and $\overrightarrow{R} $, which were established in the previous sections, determine a number of expectation values: $$g\equiv \left\langle S_{1}S_{1}\right\rangle =\left\langle S_{2}S_{2}\right\rangle =\frac{k_{B}T}{m}\left( 1-e^{-2\beta t}\right)$$ while $ \left\langle S_{1}S_{2}\right\rangle = \left\langle S_{2}S_{1}\right\rangle =0$.
Furthermore: $$f\equiv \left\langle R_{1}R_{1}\right\rangle =\left\langle R_{2}R_{2}\right\rangle =2\frac{k_{B}T}{m}\frac{\beta } {\beta ^{2}+\omega _{c}^{2}}\left( t+\Theta \right)= 2D_B (t+\Theta ) \, .$$ In addition we have $ \left\langle R_{1}R_{2}\right\rangle =\left\langle R_{2}R_{1}\right\rangle =0$. As a consequence, we are left with only four non-vanishing components of the covariance matrix $C$: $ c_{13}=c_{31}=\left\langle S_{1}R_{1}\right\rangle $, $ c_{14}=c_{41}=\left\langle S_{1}R_{2}\right\rangle $, $ c_{23}=c_{32}=\left\langle S_{2}R_{1}\right\rangle $, $ c_{24}=c_{42}=\left\langle S_{2}R_{2}\right\rangle $ which need a closer examination. We can obtain those covariances by exploiting a dependence of the random quantities $% \overrightarrow{S}$ and $\overrightarrow{R}$ on the white-noise term $% \overrightarrow{A}\left( s\right) $ whose statistical properties are known. There follows: $$S_{1}=\int_{0}^{t}dse^{-\beta \left( t-s\right) }\left[ \cos \omega _{c}\left( t-s\right) A_{1}\left( s\right) +\sin \omega _{c}\left( t-s\right) A_{2}\left( s\right) \right]$$ $$S_{2}=\int_{0}^{t}dse^{-\beta \left( t-s\right) }\left[ -\sin \omega _{c}\left( t-s\right) A_{1}\left( s\right) +\cos \omega _{c}\left( t-s\right) A_{2}\left( s\right) \right]$$ $$\begin{aligned} R_{1} &=&\int_{0}^{t}ds\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left[ \beta \left( 1-e^{-\beta \left( t-s\right) }\cos \omega _{c}\left( t-s\right) \right) +\omega _{c}e^{-\beta \left( t-s\right) }\sin \omega _{c}\left( t-s\right) \right] A_{1}\left( s\right) + \\ & &\int_{0}^{t}ds\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left[ -\beta e^{-\beta \left( t-s\right) }\sin \omega _{c}\left( t-s\right) +\omega _{c}\left( 1-e^{-\beta \left( t-s\right) }\cos \omega _{c} \left( t-s\right) \right) % \right] A_{2}\left( s\right)\end{aligned}$$ $$\begin{aligned} R_{2} &=&\int_{0}^{t}ds\frac{1}{\beta ^{2}+\omega _{c}^{2}} \left[ -\omega _{c}\left( 1-e^{-\beta \left( t-s\right) } \cos \omega _{c}\left( t-s\right) \right) 
+\beta e^{-\beta \left( t-s\right) }\sin \omega _{c}\left( t-s\right) \right] A_{1}\left( s\right) + \\ &&\int_{0}^{t}ds\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left[ \omega _{c}e^{-\beta \left( t-s\right) }\sin \omega _{c} \left( t-s\right) +\beta \left( 1-e^{-\beta \left( t-s\right) }\cos \omega _{c} \left( t-s\right) \right) \right] A_{2}\left( s\right) \enspace .\end{aligned}$$ Multiplying together suitable components of vectors $\overrightarrow{S}$ and $\overrightarrow{R}$ and taking averages of those products in conformity with the rules $\left\langle A_{i}\left( s\right) \right\rangle =0$ and $% \left\langle A_{i}\left( s\right) A_{j}\left( s^{\shortmid }\right) \right\rangle =2q\delta _{ij}\delta \left( s-s^{\shortmid }\right) $, where $% q=\frac{k_{B}T}{m}\beta $, $i,j=1,2,3$, we arrive at: $$h\equiv \left\langle R_{1}S_{1}\right\rangle = \left\langle R_{2}S_{2}\right\rangle =2q\frac{1}{\beta ^{2}+ \omega _{c}^{2}} \int_{0}^{t}ds e^{-\beta \left( t-s\right) } [ \beta \cos \omega _{c}\left( t-s\right) +$$ $$\omega _{c}\sin \omega _{c} \left( t-s\right) -\beta e^{-\beta \left( t-s\right) }] =q\frac{1}{\beta ^{2}+\omega _{c}^{2}} \left( 1-2e^{-\beta t}\cos \omega _{c}t+e^{-2\beta t}\right)$$ and $$k\equiv \left\langle R_{1}S_{2}\right\rangle =-\left\langle R_{2}S_{1}\right\rangle =2q\frac{1}{\beta ^{2}+\omega _{c}^{2}}% \int_{0}^{t}dse^{-\beta \left( t-s\right) }[ -\beta \sin \omega _{c}\left( t-s\right) +$$ $$\omega _{c}\cos \omega _{c} \left( t-s\right) -\omega _{c}e^{-\beta \left( t-s\right) }] = q\frac{1}{\beta ^{2}+\omega _{c}^{2}}\left[ 2e^{-\beta t}\sin \omega _{c}t-\frac{\omega _{c}}{\beta }\left( 1-e^{-2\beta t}\right) \right] \enspace .$$ The covariance matrix $C=\left( c_{ij}\right) $ has thus the form $$C=\left( \begin{array}{cccc} g & 0 & h & -k \\ 0 & g & k & h \\ h & k & f & 0 \\ -k & h & 0 & f \end{array} \right)$$ while its inverse $C^{-1}$ reads as follows: $$C^{-1}=\frac{1}{\det C}\left( fg-h^{2}-k^{2}\right) \left( \begin{array}{cccc} f & 0 & 
-h & k \\ 0 & f & -k & -h \\ -h & -k & g & 0 \\ k & -h & 0 & g \end{array} \right)$$ where $ \det C=\left( fg-h^{2}-k^{2}\right) ^{2}$. The joint probability distribution of $\overrightarrow{S}$ and $% \overrightarrow{R}$ can be ultimately written in the form: $$W\left( \overrightarrow{S},\overrightarrow{R}\right) =$$ $$\frac{1}{4\pi ^{2}\left( fg-h^{2}-k^{2}\right) }\exp \left( - \frac{f\left| \overrightarrow{% S}\right| ^{2}+g\left| \overrightarrow{R}\right| ^{2}- 2h\overrightarrow{S}% \cdot \overrightarrow{R}+2k\left( \overrightarrow{S} \times \overrightarrow{R}% \right) _{i=3}}{2\left( fg-h^{2}-k^{2}\right) }\right) \enspace .$$ In the above, all vector entries are two-dimensional. The specific $i=3$ vector product coordinate in the exponent is simply an abbreviation for the (ordinary $R^3$-vector product) procedure that involves merely first two components of three-dimensional vectors (the third is then arbitrary and irrelevant), hence effectively involves our two-dimensional $\overrightarrow{R}$ and $\overrightarrow{S}$.\ [**Acknowledgement:**]{} One of the authors (P. G.) receives financial support from the KBN research grant No. 2 PO3B 086 16. [99]{} J. B. Taylor, Phys. Rev. Lett. **6**, 262, (1961) B. Kurşunoǧlu, Ann. Phys. **17**, 259, (1962) R. Balescu, *Statistical Dynamics. Matter out of Equilibrium*. (Imperial College Press, London, 1997) Z. Schuss, *Theory and Applications of Stochastic Differential Equations*, (Wiley, NY, 1980) N. G. van Kampen, *Stochastic Processes in Physics and Chemistry*, (North Holland, Amsterdam, 1981) S. Stepanov, Phys. Rev. E **54**, 2209, (1996) S. Chandrasekhar, Rev. Mod. Phys. **15**, 1, (1943) E. Nelson, *Dynamical Theories of Brownian Motion*, (Princeton University Press, Princeton, 1967) J. L. Doob, Ann. Math. **43**, 351, (1942)
--- abstract: 'The Higgs inflation scenario is an approach to realize inflation, in which the Higgs boson plays the role of the inflaton without introducing a new particle. We investigate a Higgs inflation scenario in the so-called radiative seesaw model proposed by E. Ma. We find that a part of the parameter region where additional scalar fields can play the role of the inflaton is compatible with the current LHC results, the current data from neutrino experiments and those of the dark matter abundance as well as the direct search. We show that we can partially test this model by measuring the masses of scalar bosons at the International Linear Collider.' author: - Toshinori Matsui title: 'Testability of the Higgs inflation scenario in a radiative seesaw model [^1] ' --- Introduction ============ In 2012, the LHC discovered a new particle with a mass of 126 GeV [@atlas; @cms]. The particle is regarded as the Higgs boson predicted in the Standard Model (SM) of elementary particles. The discovery of the Higgs boson means that the particle content of the SM is complete. The LHC is now searching for indications of new physics, and is trying to measure deviations of the couplings from the SM. On the other hand, cosmic observations such as the WMAP and Planck experiments have reported new results [@WMAP; @Planck]. These experiments precisely measure the temperature fluctuations of the cosmic microwave background, by which we can impose constraints on models of inflation. Cosmic inflation in the early Universe [@inf], which is a promising candidate to solve cosmological problems such as the horizon problem and the flatness problem, requires an additional scalar boson, the inflaton. We consider the Higgs inflation scenario, where the Higgs boson plays the role of the inflaton. In the minimal model of this scenario [@Hinf], we do not have to introduce any particle beyond the SM content to explain inflation.
However, it would be difficult to realize the Higgs inflation scenario in the minimal model. Assuming the SM with one Higgs doublet, the vacuum stability argument indicates that the model is well defined only below the energy scale where the running Higgs self-coupling becomes zero. For a Higgs boson mass of 126 GeV, a top quark mass of 173.1 GeV, and a strong coupling constant $\alpha_s = 0.1184$, the critical energy scale is estimated to be around $10^{10}$ GeV using the NNLO calculation, although the uncertainty due to the values of the top quark mass and $\alpha_s$ is not small [@lambda_run]. The vacuum appears to be metastable if we assume that the model holds up to the Planck scale. This kind of analysis gives a strong constraint on the Higgs inflation scenario, because inflation occurs at an energy scale where vacuum stability is not guaranteed in the SM. Recently, viable models for Higgs inflation have been proposed in which the Higgs sector is extended by an additional scalar field [@other_Hinf; @2Hinf]. There is also another problem in the minimal model, which comes from the unitarity argument [@uni_break; @uni_care]. By extending the Higgs sector beyond the SM one, we may expect to reveal new physics that can explain phenomena such as neutrino oscillations, the existence of dark matter, and the baryon asymmetry of the Universe. Here, we extend the Higgs inflation model in the framework of the radiative seesaw scenario by E. Ma [@Kanemura:2012ha]. The radiative seesaw scenario is a way to explain tiny neutrino masses, in which the masses are radiatively induced at the loop level by introducing $Z_2$-odd scalar fields and $Z_2$-odd right-handed neutrinos [@KNT; @Ma; @AKS]. A characteristic feature of these radiative seesaw models is that dark matter candidates automatically appear because of the $Z_2$ parity.
In this work, we discuss a simple model to explain inflation, neutrino masses and dark matter simultaneously, which is based on the simplest radiative seesaw model [@Ma]. Both the Higgs boson and the neutral components of the $Z_2$-odd scalar doublet can satisfy the conditions for slow-roll inflation [@slow-roll] and vacuum stability up to the inflation scale. We find that a part of the parameter region where these scalar fields can play the role of the inflaton is compatible with the current LHC results, the current data from neutrino experiments and those of the dark matter abundance as well as the direct search [@XENON100]. A phenomenological consequence of this scenario is a specific mass spectrum of the scalar fields, which can be tested at the International Linear Collider (ILC) [@ILC1]. Extension to a radiative seesaw model ===================================== We extend the Higgs inflation model in the framework of a radiative seesaw scenario [@Ma]. In this model, there are the $Z_2$-odd scalar doublet field $\Phi_2$ and the right-handed neutrinos $\nu_R^{}$ in addition to the $Z_2$-even SM Higgs doublet field $\Phi_1$, due to the invariance under the unbroken discrete $Z_2$ symmetry [@Ma]. Because Dirac Yukawa couplings of neutrinos are forbidden by the $Z_2$ symmetry, the Yukawa interaction for leptons is given by ${\cal L}_{Yukawa} = Y_\ell \overline{L_L}\Phi_1\ell_R+Y_\nu\overline{L_L}\Phi_2^c\nu_R+h.c.$ (the superscript $c$ denotes charge conjugation). The scalar potential, together with the gravitational coupling terms, is given by [@2Hinf] $$\begin{aligned} V &=&\frac{M_P^2 R}{2}+(\xi_1|\Phi_1|^2 +\xi_2 |\Phi_2|^2)R +\mu_1^2 |\Phi_1|^2 + \mu_2^2 |\Phi_2|^2 \nonumber\\ &&+ \frac{1}{2} \lambda_1 |\Phi_1|^4 + \frac{1}{2} \lambda_2 |\Phi_2|^4 + \lambda_3|\Phi_1|^2|\Phi_2|^2 + \lambda_4 (\Phi_1^{\dagger} \Phi_2) (\Phi_2^{\dagger} \Phi_1) + \frac{1}{2}\lambda_5((\Phi_1^{\dagger} \Phi_2)^2+h.c.), \label{eq:potential}\end{aligned}$$ where $M_P$($\simeq 10^{19}$ GeV) is the Planck scale, and $R$ is the Ricci scalar.
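The quartic part of this potential must not develop runaway directions. A simple numerical cross-check of boundedness from below is sketched here; this helper is our own illustration, reducing the relative alignment of the two doublets to a single parameter $\eta\in[0,1]$ (it ignores the phase structure of the $\lambda_5$ term, but it is sufficient to expose runaway directions in the simple cases tested):

```python
import numpy as np

def quartic_min(l1, l2, l3, l4, l5, n=401):
    """Minimum of the quartic potential over field directions.

    Parametrize |Phi_1| = r*cos(t), |Phi_2| = r*sin(t), and let
    eta in [0, 1] interpolate the relative alignment of the doublets,
    switching the (lambda_4 + lambda_5) terms on and off.  Returns
    min V_4 / r^4; boundedness from below requires this to be >= 0.
    """
    t = np.linspace(0.0, np.pi / 2, n)
    c2, s2 = np.cos(t) ** 2, np.sin(t) ** 2
    best = np.inf
    for eta in np.linspace(0.0, 1.0, n):
        v4 = (0.5 * l1 * c2**2 + 0.5 * l2 * s2**2
              + (l3 + eta * (l4 + l5)) * c2 * s2)
        best = min(best, v4.min())
    return best

# Couplings with l1, l2 > 0 and l3 + l4 + l5 + sqrt(l1*l2) > 0: bounded.
assert quartic_min(0.26, 0.35, 0.51, -0.51, 1.0e-6) > 0.0
# Violating l3 + l4 + l5 + sqrt(l1*l2) > 0: a runaway direction appears.
assert quartic_min(0.5, 0.5, -1.0, 0.0, 0.0) < 0.0
```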
Then, these quartic coupling constants should satisfy the following tree-level conditions for the potential to be bounded from below; $$\begin{aligned} \lambda_1>0,\ \ \lambda_2>0,\ \ \lambda_3+\lambda_4+\lambda_5+\sqrt{\lambda_1 \lambda_2}>0, \label{eq:vs}\end{aligned}$$ and we impose the triviality (perturbativity) condition; $$\begin{aligned} \lambda_i \lesssim 2\pi. \label{eq:tri}\end{aligned}$$ Assuming $\mu_1^2 < 0$ and $\mu_2^2 > 0$, $\Phi_1$ obtains the vacuum expectation value (VEV) $v$ ($=\sqrt{-2\mu_1^2/\lambda_1}$), while $\Phi_2$ cannot acquire a VEV because of the unbroken $Z_2$ symmetry. The lightest $Z_2$-odd particle is stabilized by the $Z_2$ parity, and it can act as the dark matter as long as it is electrically neutral. The mass eigenstates of the scalar bosons are the SM-like $Z_2$-even Higgs boson ($h$), the $Z_2$-odd CP-even scalar boson ($H$), the $Z_2$-odd CP-odd scalar boson ($A$) and the $Z_2$-odd charged scalar bosons ($H^\pm$). The masses of these scalar bosons are given by [@Ma]; $m_h^2=\lambda_1 v^2, \ m_H^2=\mu_2^2 +\frac{1}{2}(\lambda_3+\lambda_4+\lambda_5) v^2, \ m_A^2=\mu_2^2 +\frac{1}{2}(\lambda_3+\lambda_4-\lambda_5) v^2, \ m_{H^{\pm}}^2=\mu_2^2 +\frac{1}{2}\lambda_3 v^2$. Constraints on the parameters ============================= For the Higgs inflation scenario in our model defined in the previous section, there are nine parameters in the scalar sector; i.e., $\xi_1$, $\xi_2$, $\mu_1^2$, $\mu_2^2$, $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$ and $\lambda_5$. They must satisfy the vacuum stability condition on the running scalar couplings as well as the constraints from slow-roll inflation, the dark matter data, and the neutrino data. We find that a part of the parameter region is compatible with all constraints. From this region, we can obtain the possible mass spectrum for the additional scalar bosons in our model [@Kanemura:2012ha]. First, we discuss the constraint from slow-roll inflation.
In order for some of the scalar bosons to play the role of the inflaton, we need to impose the following conditions [@2Hinf]; $$\begin{aligned} \lambda_2\xi_1-(\lambda_3+\lambda_4)\xi_2&>&0, \nonumber\\ \lambda_1\xi_2-(\lambda_3+\lambda_4)\xi_1&>&0, \nonumber\\ \lambda_1\lambda_2-(\lambda_3+\lambda_4)^2&>&0. \label{eq:vs2}\end{aligned}$$ The parameters in the scalar potential should also satisfy the constraint from the power spectrum [@WMAP; @2Hinf]; $$\begin{aligned} \xi_2 \sqrt{\frac{2(\lambda_1+a^2\lambda_2-2a(\lambda_3+\lambda_4))}{\lambda_1\lambda_2-(\lambda_3+\lambda_4)^2}} \simeq 5\times 10^{4}, \ \ \ \ \ \frac{\lambda_5}{\xi_2} \frac{a\lambda_2 - (\lambda_3+\lambda_4)}{\lambda_1+a^2\lambda_2-2a(\lambda_3+\lambda_4)} \lesssim 4\times 10^{-12}, \label{eq:l5}\end{aligned}$$ where $a$ is given as $a\equiv\xi_1/\xi_2$. When the scalar potential satisfies the conditions in Eqs. (\[eq:vs2\]) and (\[eq:l5\]), the model can realize inflation. Second, we discuss the constraint from dark matter. We assume here that the CP-odd boson $A$ is the dark matter (the lightest $Z_2$-odd particle). When $\lambda_5$ is very small, such as ${\cal O}(10^{-7})$, it is difficult for $A$ to act as the dark matter, because the scattering process $AN\to HN$ ($N$ is a nucleon) opens up and the cross section cannot be consistent with the current direct-search results for dark matter [@direct_Z; @Kashiwase:2012xd; @LopezHonorez:2006gr]. To close the process $AN\to HN$ kinematically, we take $\lambda_5\simeq 10^{-6}$ and $$\begin{aligned} a\lambda_2 - (\lambda_3+\lambda_4)\simeq 10^{-1} \label{eq:FT}\end{aligned}$$ at the inflation scale. With this choice, the masses of $A$ and $H$ are nearly degenerate. The co-annihilation process $AH\to XX$ via the $Z$ boson, where $X$ is a SM particle, is important for explaining the dark matter abundance, because the pair annihilation process $AA\to XX$ via the $h$ boson is suppressed due to the constraint from inflation.
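These relations can be evaluated numerically for the inflation-scale couplings quoted later in the text. The short check below is ours and only illustrative, since $a=\xi_1/\xi_2$ is fixed just to the order of magnitude of Eq. (\[eq:FT\]):

```python
import math

# Inflation-scale quartic couplings quoted in the text (illustrative values);
# a = xi1/xi2 is fixed only up to the order-of-magnitude relation of Eq. (FT).
l1, l2, l3, l4, l5 = 1.6, 6.3, 6.3, -3.2, 1.2e-6
a = (0.1 + (l3 + l4)) / l2          # from a*l2 - (l3 + l4) ~ 10^-1

# Conditions of Eq. (vs2), divided through by xi2 > 0:
assert a * l2 - (l3 + l4) > 0
assert l1 - a * (l3 + l4) > 0
assert l1 * l2 - (l3 + l4) ** 2 > 0

# The power-spectrum normalization (first relation of Eq. (l5)) then fixes xi2:
num = 2 * (l1 + a**2 * l2 - 2 * a * (l3 + l4))
den = l1 * l2 - (l3 + l4) ** 2
xi2 = 5e4 / math.sqrt(num / den)
print(f"a = {a:.3f}, xi2 ~ {xi2:.2e}")   # a large non-minimal coupling
assert 1e4 < xi2 < 1e6
```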
Because the cross section of $AH\to XX$ depends only on the mass of the dark matter, the mass of the dark matter $A$ is constrained from the abundance of the dark matter as $128~{\rm GeV}\leq m_A\leq138~{\rm GeV}$, where we have used the nine-year WMAP data [@WMAP]. Third, tiny neutrino masses in this model are generated by the one-loop diagram [@Ma]. The neutrino mass is related to $\lambda_5$ and the masses of the scalar bosons ($m_H^{}$ and $m_A^{}$), which are constrained from the inflation and the dark matter. From the relation $(Y_\nu)_i^k(Y_\nu)_j^k/M_R^k\simeq {\cal O} (10^{-11})$ GeV$^{-1}$, where $M_R^k$ is the Majorana mass of $\nu_R^k$ ($k$=1-3) and $(Y_\nu)_i^k$ is the neutrino Yukawa coupling constant, the magnitude of the tiny neutrino masses can be explained. For example, when $M_R^k$ is ${\cal O}(1)$ TeV, $(Y_\nu)_i^k$ is ${\cal O}(10^{-2})$.

                  $\lambda_{1}$   $\lambda_{2}$   $\lambda_{3}$   $\lambda_{4}$   $\lambda_{5}$
  --------------- --------------- --------------- --------------- --------------- --------------------
  $10^{2}$ GeV    0.26            0.35            0.51            -0.51           1.0$\times10^{-6}$
  $10^{17}$ GeV   1.6             6.3             6.3             -3.2            1.2$\times10^{-6}$

  : Scalar quartic couplings at the electroweak scale and at the inflation scale.[]{data-label="table:lambda"}

Finally, we calculate the running of the coupling constants using the renormalization group equations [@beta]. As shown in Fig. \[fig:run\], thanks to the contributions of the additional scalar bosons, the model can remain stable from the electroweak scale up to the inflation scale [@extended_run]. As numerical input parameters, we take the VEV ($v = 246~$GeV), the SM-like Higgs mass ($m_h = 126$ GeV) and the allowed value for the dark matter mass ($m_A^{} = 130$ GeV). A further numerical input comes from the perturbativity of $\lambda_2$ up to the inflation scale; i.e., $\lambda_2(\mu_{\rm inf}) = 2\pi$, where $\mu_{\rm inf}$ is the inflation scale $10^{17}$ GeV. The parameter set in Table \[table:lambda\] is consistent with these numerical inputs and with the constraints given in Eqs. (\[eq:vs\])-(\[eq:FT\]).
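The tree-level mass formulas of the previous section can be combined with the low-scale couplings of Table \[table:lambda\]: fixing $\mu_2^2$ from $m_A=130$ GeV reproduces $m_h\simeq 126$ GeV and the near-degeneracy of $H$ and $A$. This is our back-of-the-envelope check; with the tabulated $\lambda_3$ the charged-scalar estimate lands near 180 GeV, somewhat above the 173 GeV quoted in the text, the difference presumably reflecting running and loop effects not captured at tree level.

```python
import math

v = 246.0                                      # VEV in GeV
l1, l3, l4, l5 = 0.26, 0.51, -0.51, 1.0e-6     # low-scale couplings (Table)
mA = 130.0                                     # dark matter mass in GeV

# Invert m_A^2 = mu_2^2 + (l3 + l4 - l5) * v^2 / 2 for mu_2^2:
mu2sq = mA**2 - 0.5 * (l3 + l4 - l5) * v**2

m_h  = math.sqrt(l1 * v**2)
m_H  = math.sqrt(mu2sq + 0.5 * (l3 + l4 + l5) * v**2)
m_Hp = math.sqrt(mu2sq + 0.5 * l3 * v**2)

# m_H - m_A = l5 * v^2 / (m_H + m_A): a sub-MeV splitting for l5 ~ 10^-6.
print(f"m_h ~ {m_h:.1f} GeV, m_H - m_A ~ {(m_H - mA)*1e6:.0f} keV, "
      f"m_H+ ~ {m_Hp:.1f} GeV")
```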
Consequently, we can obtain the mass spectrum of the scalar bosons in our model as $$\begin{aligned} m_h\simeq126~{\rm GeV}, \ \ \ m_{H^{\pm}}\simeq173~{\rm GeV},\ \ \ m_H\simeq130~{\rm GeV},\ \ \ m_A\simeq130~{\rm GeV}, \label{eq:hmassD}\end{aligned}$$ where the mass difference between $A$ and $H$ is about 500 keV. The mass spectrum does not change significantly even if $m_A^{}$ is varied within its allowed region. In the next section, we consider the constraints on our model from existing experiments and the way to test the characteristic mass spectrum of this model at future collider experiments. Phenomenology ============= The LEP experiment constrains the masses of the $Z_2$-odd scalar bosons. The mass of the charged scalar bosons $m_{H^\pm}^{}$ should be larger than 70-90 GeV according to the LEP data [@LEP_direct; @LEP_pm]. This constraint is satisfied in our model ($m_{H^{\pm}}\simeq173~{\rm GeV}$). Furthermore, $m_{H}^{} + m_{A}^{}$ should be larger than $m_Z^{}$, and the combination of $m_{H}^{}$ and $m_{A}^{}$ is bounded by $H$$A$ pair production in the LEP data [@LEP_direct; @LEP_HApair]. However, when $m_{H}^{} - m_{A}^{} < 8$ GeV, the masses of the neutral $Z_2$-odd scalar bosons are not significantly constrained by the LEP [@LEP_direct; @LEP_HApair]. The contributions of the additional scalar boson loops to the electroweak parameters [@STdef], which are given in [@ST1; @ST2], are also consistent with the electroweak precision data at the 90% confidence level (C.L.) [@ST2]. Next, we consider testability at the LHC. According to Refs. [@LHC1; @LHC2; @LHC3], it could be difficult to test the $pp\to AH^+/HH^+/H^+H^-$ processes because the cross sections of the background processes are very large. The process $pp\to AH$ could be tested at about the 3$\sigma$ C.L. for various benchmark points of $m_A$ and $m_H$.
However, it would be difficult to test $pp\to AH$ in our scenario, because $m_H$ and $m_A$ are almost degenerate, and the event number of $pp\to AH$ is negligibly small after imposing the basic cuts [@LHC1; @LHC2; @LHC3]. Furthermore, as the total decay width of $H$ is about $10^{-29}$ GeV, $H$ would pass through the detector. Therefore, this signal is also difficult to detect at the LHC. Finally, we discuss the signals of $H$, $A$ and $H^\pm$ at the ILC with $\sqrt{s}=500$ GeV. In the following, we use CalcHEP 2.5.6 for the numerical evaluation [@calc]. We focus on the $H^\pm$ pair production process: $e^+e^-\to Z^*(\gamma^*)\to H^+H^-\to W^{+(*)}W^{-(*)}AA\to jj\ell\nu AA$ ($j$ denotes a hadron jet) [@ILC2]. For kinematical reasons, the energy of the two-jet system $E_{jj}$ satisfies the following relation; $$\begin{aligned} \frac{m_{H^{\pm}}^2-m_A^2}{\sqrt{s}+2\sqrt{s/4-m_{H^{\pm}}^2}} < E_{jj} < \frac{m_{H^{\pm}}^2-m_A^2}{\sqrt{s}-2\sqrt{s/4-m_{H^{\pm}}^2}}. \label{eq:Ejj}\end{aligned}$$ For our parameter set, the distribution of $E_{jj}$ for the differential cross section of this process is shown in Fig. \[fig:Ejj\]. The important background processes, $e^+e^-\to W^{+}W^{-}\to jj\ell\overline{\nu}$ and $e^+e^-\to Z(\gamma )Z\to jj\ell\overline{\ell}$ with a missed $\overline{\ell}$, can be substantially reduced by imposing appropriate kinematic cuts. We therefore expect that $m_{H^\pm}$ and $m_A$ can be measured by using the endpoints of $E_{jj}$ at the ILC after the background reduction. On the other hand, we consider $HA$ production: $e^+e^-\to Z^*\to HA\to AAZ^*\to AAjj$ at the ILC. If the mass difference between $m_A$ and $m_H$ were sizable, it could also be detected by using the endpoint of $E_{jj}$. However, $m_A$ and $m_H$ are almost degenerate in our scenario. If we detect $H^\pm$ but find no evidence of this process at the ILC, we can infer that $m_A$ and $m_H$ are nearly degenerate.
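For the spectrum above ($m_{H^\pm}\simeq 173$ GeV, $m_A\simeq 130$ GeV) at $\sqrt{s}=500$ GeV, the endpoints of Eq. (\[eq:Ejj\]) are easily evaluated (a small helper of ours):

```python
import math

def e_jj_endpoints(m_ch, m_a, sqrt_s):
    """Kinematic endpoints of the two-jet energy in
    e+ e- -> H+ H- -> W(*) W(*) A A, following Eq. (Ejj)."""
    s = sqrt_s**2
    p2 = 2.0 * math.sqrt(s / 4.0 - m_ch**2)     # 2 |p_{H+-}|
    dm2 = m_ch**2 - m_a**2
    return dm2 / (sqrt_s + p2), dm2 / (sqrt_s - p2)

lo, hi = e_jj_endpoints(173.0, 130.0, 500.0)
print(f"{lo:.1f} GeV < E_jj < {hi:.1f} GeV")    # roughly 15 GeV and 94 GeV
```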
Conclusion ========== We have studied the Higgs inflation model in the framework of a radiative seesaw scenario. In our model, we may be able to explain inflation, neutrino masses and dark matter simultaneously. We find that a part of the parameter region is compatible with all constraints, which come from the conditions of slow-roll inflation, the current LHC results, the current data from neutrino experiments and those of the dark matter abundance as well as the direct-search results. We can test this scenario by measuring the masses of the scalar bosons at the ILC with $\sqrt{s}=500$ GeV. This work is a collaboration with Shinya Kanemura and Takehiro Nabeshima. I would like to thank them for their support. [99]{} S. Kanemura, T. Matsui and T. Nabeshima, arXiv:1211.4448 \[hep-ph\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{} (2012) 1. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{} (2012) 30. D. Larson [*et al.*]{}, Astrophys. J. Suppl. [**192**]{} (2011) 16; G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R. Nolta and M. Halpern [*et al.*]{}, arXiv:1212.5226 \[astro-ph.CO\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5062 \[astro-ph.CO\]. A. H. Guth, Phys. Rev. D [**23**]{} (1981) 347; K. Sato, Mon. Not. Roy. Astron. Soc. [**195**]{} (1981) 467. F. L. Bezrukov and M. Shaposhnikov, Phys. Lett. B [**659**]{} (2008) 703. A. De Simone, M. P. Hertzberg and F. Wilczek, Phys. Lett. B [**678**]{} (2009) 1; F. Bezrukov and M. Shaposhnikov, JHEP [**0907**]{} (2009) 089; J. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori, A. Riotto and A. Strumia, Phys. Lett. B [**709**]{} (2012) 222; G. Degrassi, S. Di Vita, J. Elias-Miro, J. R. Espinosa, G. F. Giudice, G. Isidori and A. Strumia, JHEP [**1208**]{} (2012) 098. R. N. Lerner and J. McDonald, Phys. Rev. D [**80**]{} (2009) 123507; R. N. Lerner and J. McDonald, Phys. Rev. D [**83**]{} (2011) 123522; R. N. Lerner and J.
McDonald, JCAP [**1211**]{} (2012) 019, C. Arina, J. -O. Gong and N. Sahu, Nucl. Phys. B [**865**]{} (2012) 430. J. -O. Gong, H. M. Lee and S. K. Kang, JHEP [**1204**]{} (2012) 128. C. P. Burgess, H. M. Lee and M. Trott, JHEP [**0909**]{} (2009) 103; JHEP [**1007**]{} (2010) 007; J. L. F. Barbon and J. R. Espinosa, Phys. Rev.  D [**79**]{} (2009) 081302; M. P. Hertzberg, JHEP [**1011**]{} (2010) 023. G. F. Giudice and H. M. Lee, Phys. Lett. B [**694**]{} (2011) 294. L. M. Krauss, S. Nasri and M. Trodden, Phys. Rev.  D [**67**]{} (2003) 085002; K. Cheung and O. Seto, Phys. Rev.  D [**69**]{} (2004) 113009. E. Ma, Phys. Rev.  D [**73**]{} (2006) 077301; Phys. Lett.  B [**662**]{} (2008) 49; T. Hambye, K. Kannike, E. Ma and M. Raidal, Phys. Rev.  D [**75**]{} (2007) 095003; E. Ma and D. Suematsu, Mod. Phys. Lett.  A [**24**]{} (2009) 583. M. Aoki, S. Kanemura and O. Seto, Phys. Rev. Lett.  [**102**]{} (2009) 051805; Phys. Rev.  D [**80**]{} (2009) 033007; M. Aoki, S. Kanemura and K. Yagyu, Phys. Rev.  D [**83**]{} (2011) 075016; Phys. Lett.  B [**702**]{} (2011) 355. A. D. Linde, Phys. Lett. B [**108**]{} (1982) 389; A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett.  [**48**]{} (1982) 1220. E. Aprile [*et al.*]{} \[XENON100 Collaboration\], Phys. Rev. Lett.  [**109**]{} (2012) 181301. J. Brau, (Ed.) [*et al.*]{} \[ILC Collaboration\], arXiv:0712.1950 \[physics.acc-ph\]; G. Aarons [*et al.*]{} \[ILC Collaboration\], arXiv:0709.1893 \[hep-ph\]; N. Phinney, N. Toge and N. Walker, arXiv:0712.2361 \[physics.acc-ph\]; T. Behnke, (Ed.) [*et al.*]{} \[ILC Collaboration\], arXiv:0712.2356 \[physics.ins-det\]; H. Baer, [*et al.*]{} “Physics at the International Linear Collider”, [*Physics Chapter of the ILC Detailed Baseline Design Report*]{}: http://lcsim.org/papers/DBDPhysics.pdf. Y. Cui, D. E. Morrissey, D. Poland and L. Randall, JHEP [**0905**]{} (2009) 076; C. Arina, F. -S. Ling and M. H. G. Tytgat, JCAP [**0910**]{} (2009) 018. S. Kashiwase and D. Suematsu, Phys. Rev. 
D [**86**]{} (2012) 053001. L. Lopez Honorez, E. Nezri, J. F. Oliver and M. H. G. Tytgat, JCAP [**0702**]{} (2007) 028. K. Inoue, A. Kakuto and Y. Nakano, Prog. Theor. Phys. [**63**]{} (1980) 234; H. Komatsu, Prog. Theor. Phys. [**67**]{} (1982) 1177. S. Nie and M. Sher, Phys. Lett. B [**449**]{} (1999) 89; S. Kanemura, T. Kasai and Y. Okada, Phys. Lett. B [**471**]{} (1999) 182. G. Abbiendi [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J. C [**35**]{} (2004) 1; Eur. Phys. J. C [**32**]{} (2004) 453. A. Pierce and J. Thaler, JHEP [**0708**]{} (2007) 026. E. Lundstrom, M. Gustafsson and J. Edsjo, Phys. Rev. D [**79**]{} (2009) 035013. M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. [**65**]{} (1990) 964; Phys. Rev. D [**46**]{} (1992) 381. D. Toussaint, Phys. Rev. D [**18**]{} (1978) 1626; M. E. Peskin and J. D. Wells, Phys. Rev. D [**64**]{} (2001) 093003. S. Kanemura, Y. Okada, H. Taniguchi and K. Tsumura, Phys. Lett. B [**704**]{} (2011) 303; M. Baak, M. Goebel, J. Haller, A. Hoecker, D. Ludwig, K. Moenig, M. Schott and J. Stelzer, Eur. Phys. J. C [**72**]{} (2012) 2003. R. Barbieri, L. J. Hall and V. S. Rychkov, Phys. Rev. D [**74**]{} (2006) 015007 \[hep-ph/0603188\]. Q. -H. Cao, E. Ma and G. Rajasekaran, Phys. Rev. D [**76**]{} (2007) 095011. E. Dolle, X. Miao, S. Su and B. Thomas, Phys. Rev. D [**81**]{} (2010) 035003. A. Pukhov, hep-ph/0412191. M. Aoki, S. Kanemura and H. Yokoya, arXiv:1303.6191 \[hep-ph\]; M. Aoki and S. Kanemura, Phys. Lett. B [**689**]{} (2010) 28. [^1]: This proceedings paper is based on Ref. [@Kanemura:2012ha].
--- abstract: 'Parkinson’s disease is a neurodegenerative disorder characterized by the presence of different motor impairments. Information from speech, handwriting, and gait signals has been considered to evaluate the neurological state of the patients. On the other hand, user models based on Gaussian mixture models - universal background models (GMM-UBM) and i-vectors are considered the state-of-the-art in biometric applications like speaker verification because they are able to model specific speaker traits. This study introduces the use of GMM-UBM and i-vectors to evaluate the neurological state of Parkinson’s patients using information from speech, handwriting, and gait. The results show the importance of different feature sets from each type of signal in the assessment of the neurological state of the patients.' address: | $^1$Pattern Recognition Lab. Friedrich-Alexander Universität, Erlangen-N[ü]{}rnberg, Germany\ $^2$ Faculty of Engineering. Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia\ $^3$ Technische Hochschule Nürnberg, Germany\ $^\star$corresponding author: [juan.vasquez@fau.de](juan.vasquez@fau.de) bibliography: - 'strings.bib' - 'refs.bib' title: 'Comparison of user models based on GMM-UBM and i-vectors for speech, handwriting, and gait assessment of Parkinson’s disease patients' --- Parkinson’s disease, GMM-UBM, i-vectors, gait analysis, handwriting analysis, speech analysis. Introduction {#sec:intro} ============ Parkinson’s disease (PD) is a neurological disorder characterized by the progressive loss of dopaminergic neurons in the midbrain, producing several motor and non-motor impairments [@Hornykiewicz1998]. PD affects all of the sub-systems involved in motor activities such as speech production, walking, and handwriting. The severity of the motor symptoms is evaluated with the third section of the Movement Disorder Society - Unified Parkinson’s Disease Rating Scale (MDS-UPDRS-III) [@Goetz2008].
The assessment requires the patient to be present at the clinic, which is expensive and time-consuming because of several limitations, including the availability of neurologists and the reduced mobility of the patients. The evaluation of the motor symptoms is crucial for clinicians to make decisions about the medication or therapy for the patients [@Patel2010]. The analysis of signals such as gait, handwriting, and speech helps to assess the motor symptoms of the patients, providing objective information for clinicians to make timely decisions about the treatment. Several studies have analyzed signals such as speech, gait, and handwriting to monitor the neurological state of PD patients. Speech was considered in [@tu2017objective] to predict the MDS-UPDRS-III score of 61 PD patients using spectral and glottal features. The authors computed the Hausdorff distance between a speaker from the test set and the speakers in the training set. The neurological state of the patients was predicted with a Pearson’s correlation of up to 0.58. In [@smith2017vocal] the authors predicted the MDS-UPDRS-III score of 35 PD patients with features based on articulation and prosody analyses, and a Gaussian staircase regression. The authors reported moderate Spearman’s correlations ($\rho$=0.42). Handwriting was considered in [@mucha2018identification] to predict the H&Y score of 33 PD patients using kinematic features and a regression based on gradient-boosting trees. The H&Y score was predicted with an equal error rate of 12.5%. Finally, regarding gait features, in [@aghanavesi2019motion] the authors predicted a lower-limbs subscore of the MDS-UPDRS-III for 19 PD patients, using several harmonic and non-linear features, and a support vector regression (SVR) algorithm. The subscore for the lower limbs was predicted with an intra-class correlation coefficient of 0.78. According to the literature, most of the related works consider only one modality.
Multimodal analyses, i.e., considering information from different sensors, have not been extensively studied. In [@barth2012combined] the authors combined statistical and spectral features extracted from handwriting and gait signals. The fusion of features improved the accuracy of the classification between PD patients and healthy control (HC) subjects. Previous studies [@vasquez2017gcca; @vasquez2019multimodal] suggested that the combination of modalities also improves the accuracy in the prediction of the neurological state of the patients. This study proposes the use of different features extracted from speech, handwriting, and gait to evaluate the neurological state of PD patients. The prediction is performed with user models based on Gaussian mixture models - universal background models (GMM-UBM) and i-vectors. To the best of our knowledge, this is one of the few studies on the multimodal analysis of PD patients, and the first one that considers multimodal user models to evaluate the neurological state of the patients. Methods {#sec:methods} ======= The methods used in this study are summarized in Figure \[fig:method\]. Speech, handwriting, and gait signals are characterized using different feature extraction strategies. Then, data from HC subjects are used to train user models based on GMM-UBM and i-vector systems. In the case of the GMM-UBM, data from PD patients are used to adapt the UBMs into GMMs, creating a specific GMM for each patient. For the i-vector modeling, a reference i-vector is created with data from HC subjects of similar age and the same gender as the patients, so that the i-vectors extracted from the patients can be compared with a personalized reference model. Finally, distance measures are computed between the reference models and those adapted/extracted from the PD patients. The computed distance is correlated with the neurological state of the patients based on the MDS-UPDRS-III scale.
![General methodology followed in this study.[]{data-label="fig:method"}](method.pdf){width="\linewidth"} Speech features --------------- **Phonation:** these features model abnormal patterns in the vocal fold vibration. Phonation features are extracted from the voiced segments. The feature set includes descriptors computed for 40ms frames of speech, including jitter, shimmer, amplitude perturbation quotient, pitch perturbation quotient, the first and second derivatives of the fundamental frequency $F\raisebox{-.4ex}{\scriptsize 0}$, and the log-energy [@orozco2018neurospeech]. **Articulation:** these features model aspects related to the movements of the limbs involved in speech production. These features consider the energy content in onset segments [@orozco2018neurospeech]. The onset detection is based on the computation of $F\raisebox{-.4ex}{\scriptsize 0}$. Once the border between unvoiced and voiced segments is detected, 40ms of the signal are taken to the left and to the right, forming a segment of 80ms length. The spectrum of the onset is distributed into 22 critical bands according to the Bark scale, and the Bark-band energies (BBE) are calculated. 12 MFCCs and their first two derivatives are also computed in the transitions to complete the feature set. **Prosody:** for these features, the log-$F\raisebox{-.4ex}{\scriptsize 0}$ and the log-energy contours of the voiced segments were approximated using Lagrange polynomials of order $P=5$. A 13-dimensional feature vector is formed by concatenating the six coefficients computed from the log-$F\raisebox{-.4ex}{\scriptsize 0}$ and the log-energy contours, in addition to the duration of the voiced segment [@Dehak2007]. The aim of these features is to model speech symptoms such as monotonicity and mono-loudness in the patients. **Phonological:** these features are represented by a vector with interpretable information about the placement and manner of articulation.
The different phonemes of the Spanish language are grouped into 18 phonological posteriors. The phonological posteriors were computed with a bank of parallel recurrent neural networks that estimate the probability of occurrence of a specific phonological class [@vasquez2019phonet]. Handwriting features -------------------- Handwriting features are based on the trajectory of the strokes in the vertical, horizontal, radial, and angular positions. We computed the velocity and acceleration of the strokes along the different axes, in addition to the pressure of the pen, the azimuth angle, the altitude angle, and their derivatives. Finally, we considered features based on the in-air movement before the participant puts the pen on the tablet’s surface. Additional information about the features can be found in [@rios2019]. Gait features ------------- **Harmonic:** these features model the spectral wealth and the harmonic structure of the gait signals obtained from the inertial sensors. We computed the continuous wavelet transform with a Gaussian wavelet. The feature set is formed with the energy content in 8 frequency bands from the scalogram, three spectral centroids, the energy in the 1st, 2nd, and 3rd quartiles of the spectrum, the energy content in the locomotor band (0.5–3Hz), the energy content in the freeze band (3–8Hz), and the freeze index, which is the ratio between the energy in the locomotor and freeze bands [@zach2015identifying; @rezvanian2016towards]. **Non-linear:** gait is a complex and non-linear activity that can be modeled with non-linear dynamics features. The first step to extract these features is the phase-space reconstruction, according to Takens’ theorem. Different features can be extracted from the reconstructed phase space to assess the complexity and stability of the walking process.
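The reconstruction step itself can be sketched in a few lines. The helper below is our own minimal delay-embedding sketch; the embedding dimension `dim` and delay `tau` are free parameters here, while in practice they are usually chosen with criteria such as false nearest neighbors and average mutual information:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: map a scalar series x(t) to vectors
    (x(t), x(t + tau), ..., x(t + (dim - 1) * tau))."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: a 10-sample series embedded with dim=2, tau=3.
emb = delay_embed(np.arange(10), dim=2, tau=3)
assert emb.shape == (7, 2)
assert emb[0].tolist() == [0.0, 3.0]
```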
The extracted features include the correlation dimension, the largest Lyapunov exponent, the Hurst exponent, the detrended fluctuation analysis, the sample entropy, and the Lempel-Ziv complexity [@perez2018non]. User models based on GMM-UBM ---------------------------- GMM-UBM systems were proposed recently to quantify the disease progression of PD patients [@arias2018speaker]. We propose to extend this idea to multimodal GMM-UBM systems. The main hypothesis is that the speech, handwriting, or gait impairments of PD patients can be modeled by comparing a GMM adapted for a patient with a reference model created with recordings from HC subjects. GMMs represent the distribution of the feature vectors extracted from the different signals of a single PD patient. When the GMM is trained using features extracted from a large sample of subjects, the resulting model is a UBM. The model for each PD patient is derived from the UBM by adapting its parameters following a maximum a posteriori (MAP) process. Then, the neurological state of the patients is estimated by comparing the adapted model with the UBM using a distance measure. We use the Bhattacharyya distance, which considers differences in the mean vectors and covariance matrices between the UBM and the user model [@you2010gmm]. User models based on i-vectors ------------------------------ I-vectors are used to transform the original feature space into a low-dimensional representation called the total variability space via joint factor analysis [@Dehak2011]. For speech signals, such a space models the inter- and intra-speaker variability, in addition to channel effects. In this study, we aim to capture changes in speech, handwriting, and gait due to the disease [@garcia2018multimodal]. I-vectors have been considered previously to model handwriting [@christleinhandwriting] and gait data [@san2017vector].
Similar to the GMM-UBM systems, we train the i-vector extractor with data from HC subjects, and compute a reference i-vector to represent healthy speech, handwriting, or gait. Then, we extract i-vectors from the PD patients, and compute the cosine distance between each patient i-vector and the reference. Data {#sec:data} ==== We considered an extended version of the PC-GITA corpus [@Orozco2014DB]. This version contains speech, handwriting, and gait signals collected from 106 PD patients and 87 HC subjects. All of the subjects are Colombian Spanish native speakers. The patients were labeled according to the MDS-UPDRS-III scale. Table \[tab:people\] summarizes clinical and demographic aspects of the participants included in the corpus. Speech signals were recorded with a sampling frequency of 16kHz and 16-bit resolution. The same speech tasks recorded in the PC-GITA corpus [@Orozco2014DB], except for the isolated words, are included in this extended version. Handwriting data consist of online drawings captured with a Wacom Cintiq 13HD tablet with a sampling frequency of 180Hz. The tablet captures six different signals: x-position, y-position, in-air movement, azimuth, altitude, and pressure. The subjects performed a total of 14 exercises divided into writing and drawing tasks. Additional information about the handwriting exercises can be found in [@vasquez2019multimodal]. Gait signals were captured with the eGaIT system, which consists of a 3D accelerometer (range $\pm$6g) and a 3D gyroscope (range $\pm$500$^\circ$/s) attached to the external side (at the ankle level) of the shoes [@barth2015stride]. Data from both feet were captured with a sampling frequency of 100Hz and 12-bit resolution. The exercises included walking 20 meters with a stop after 10 meters, walking 40 meters with a stop every 10 meters, *heel-toe tapping*, and the *time up and go* test.
Experiments and results {#sec:results} ======================= Data from HC subjects were used to train the UBMs and the i-vector extractors. For the GMM-UBM system, data from PD patients were used to adapt the UBMs into GMMs. The Bhattacharyya distance is used to compare the GMM and the UBM. For the i-vectors, a reference was created by averaging the i-vectors extracted from HC subjects that have the same gender and a similar age as the patient (within a range of $\pm$2 years). I-vectors extracted from PD patients are compared to the reference i-vector using the cosine distance. The computed distances are correlated with the MDS-UPDRS-III score of the patients. User models from different modalities ------------------------------------- The correlation between the neurological state of the patients and the user models based on GMM-UBM and i-vectors is shown in Table \[tab:results1\] for the speech, handwriting, and gait features. The results indicate that for gait and speech signals, the user models based on GMM-UBM systems are more accurate than the i-vectors. This can be explained because the distance between the adapted GMM and the UBM considers more information about the statistical distribution of the population than in the case of i-vectors, where the reference for healthy subjects is reduced to a single vector. A “strong” correlation is obtained with the harmonic features (gait analysis) modeled with the GMM-UBM system ($\rho$=0.619). This is expected because most of the items used by the neurologist in the MDS-UPDRS-III are based on the movement of the lower limbs. For handwriting features, “weak” correlations are obtained both with the GMM-UBM and the i-vector systems. The correlations obtained with speech features are too weak to model the general neurological state of the patients. This result can be explained because the MDS-UPDRS-III is a complete neurological scale that considers speech impairments in only one of the 33 items of the total scale [@Goetz2008].
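The i-vector scoring step can be illustrated as follows. This is a hedged sketch: the i-vectors are assumed to be already extracted (training the total-variability matrix itself is omitted), and the reference is built by averaging HC i-vectors of the same gender within $\pm$2 years of the patient's age, as described above. All function names are placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_distance(u, v):
    """1 minus the cosine similarity between two i-vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def matched_reference(hc_ivecs, hc_age, hc_gender, age, gender, window=2):
    """Average the HC i-vectors with the same gender and an age within
    +/- window years of the patient, as described in the text."""
    mask = (hc_gender == gender) & (np.abs(hc_age - age) <= window)
    return hc_ivecs[mask].mean(axis=0)

def score_patients(pd_ivecs, pd_age, pd_gender, hc_ivecs, hc_age, hc_gender):
    """Cosine distance of every patient i-vector to its matched reference."""
    return np.array([cosine_distance(iv,
                     matched_reference(hc_ivecs, hc_age, hc_gender, a, g))
                     for iv, a, g in zip(pd_ivecs, pd_age, pd_gender)])
```

The resulting distances would then be correlated with the clinical labels, e.g. `spearmanr(distances, updrs_scores)`.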
\[tab:results1\] Multimodal user models ---------------------- The user models extracted from all feature sets using the GMM-UBM system were combined by concatenating the distances between the user model and the UBM for each feature set. A linear regression was trained with the matrix of distances to predict the MDS-UPDRS-III scale. The model was trained following a leave-one-subject-out cross-validation strategy. The results of the fusion are shown in Table \[tab:results2\]. The Spearman’s correlation increases by 2.4% absolute with respect to the one obtained only with the harmonic features. Additional regression algorithms based on random forest regression or SVRs were considered; however, they overfitted, predicting only the mean value of the total scale on the test set. \[tab:results2\] Figure \[fig:fusion\] shows the error in the prediction of the MDS-UPDRS-III score of the patients. Most of the patients are in an initial or intermediate stage of the disease (10$<$MDS-UPDRS-III$<$50), and their predicted scores follow the same distribution. The outlier at the top of the figure corresponds to the eldest patient in the corpus. The patient has a score of 53 but was predicted a score of 78. Despite the intermediate value of his total MDS-UPDRS-III score, his MDS-UPDRS speech item is 4, i.e., the patient was completely unable to speak, which strongly affected his speech features. ![Prediction of the MDS-UPDRS-III score using multimodal user models based on GMM-UBM systems.[]{data-label="fig:fusion"}](fusion_linear_reg_error1.pdf){width="0.8\linewidth"} Figure \[fig:fusion\_bar\] shows the contribution of each feature set to the multimodal user model. Each bar indicates the coefficient of the linear regression associated with each feature set.
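The fusion scheme described above (concatenated per-feature-set distances fed to a linear regression under leave-one-subject-out cross-validation) can be sketched with scikit-learn. This is a minimal sketch under the assumption of one distance vector per subject, in which case `LeaveOneOut` coincides with leave-one-subject-out; with several recordings per subject, a grouped splitter such as `LeaveOneGroupOut` would be needed instead.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loso_predict(distances, scores):
    """Leave-one-subject-out predictions of the MDS-UPDRS-III score from the
    distance matrix (one row per patient, one column per feature set)."""
    preds = np.zeros(len(scores))
    for train_idx, test_idx in LeaveOneOut().split(distances):
        model = LinearRegression().fit(distances[train_idx], scores[train_idx])
        preds[test_idx] = model.predict(distances[test_idx])
    return preds
```

The fitted coefficients of a final model trained on all subjects give the per-feature-set contributions plotted in Figure \[fig:fusion\_bar\].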
Harmonic features were the most important for the multimodal model, followed by prosody and articulation features, which have been shown to be the most important features to evaluate the dysarthria associated with PD [@vasquez2017gcca]. Handwriting features were less important than expected; this can be explained by the fact that the extracted features are based on a standard kinematic analysis that might not be completely related to the symptoms associated with PD. The results for handwriting could be improved with a feature set more closely related to the handwriting impairments of the patients, like those based on a neuromotor analysis [@impedovo2019velocity]. ![Contribution of each feature set to the multimodal user model system.[]{data-label="fig:fusion_bar"}](fusion_linear_reg_bar.pdf){width="0.75\linewidth"} Conclusion {#sec:conclusion} ========== The present study compared user models based on GMM-UBM and i-vectors to evaluate the neurological state of PD patients using information from speech, handwriting, and gait. Different features were extracted from each bio-signal to model different dimensions of PD symptoms. Gait features were the most accurate at modeling the general neurological state of the patients; however, the combination of different bio-signals improved the correlation of the proposed method. In addition, user models based on GMM-UBM were more accurate than those based on i-vectors. Better results could be obtained with the i-vector system if more training data from HC subjects were available to create the reference model, especially for handwriting and gait. Further studies will consider additional features to model other aspects of PD symptoms, especially from handwriting signals. At the same time, additional models based on representation learning and additional fusion methods can be considered to evaluate the neurological state of the patients.
Acknowledgments {#sec:ack} =============== This project received funding from the EU Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 766287, and from CODI at the University of Antioquia through Grant No. 2017–15530.
--- abstract: 'Standard practice attempts to remove coordinate influence in physics through the use of invariant equations. Trans-coordinate physics proceeds differently by not introducing space-time coordinates in the first place. Differentials taken from a novel limiting process are defined for a particle’s wave function, allowing the particle’s dynamic principle to operate ‘locally’ without the use of coordinates. These differentials replace the covariant differentials of Riemannian geometry. With coordinates out of the way ‘regional conservation principles’ and the ‘Einstein field equation’ are no longer fundamentally defined; although they are constructible along with coordinate systems so they continue to be analytically useful. Gravity is solely described in terms of gravitons and quantized geodesics and curvatures. Keywords: covariance, invariance, geometry, metric spaces, state ; 03.65.a, 03.65.Ta, 04.20.Cv' author: - 'Richard Mould[^1]' title: '**Trans-Coordinate Physics**' --- Introduction {#introduction .unnumbered} ============ James Clerk Maxwell was the first to use space-time coordinate systems in the way they are used in contemporary physics. They play a role in his formulation of electromagnetic field theory that makes them virtually indispensable. Einstein embraced Maxwell’s methodology but devoted himself to eliminating the influence of coordinates because they have nothing to do with physics. However, the influence of coordinates is not eliminated by relativistic invariance as will be evident below where these space-time representations are removed *entirely* from physics. Trans-coordinate physics proceeds on the assumption that space-time coordinates should not be introduced at any level. As a practical matter, and for many analytic reasons, coordinates are very useful and probably always will be. 
But if nature does not use numerical labeling for event identification and/or analytic convenience, and if we are interested in the most fundamental way of thinking about nature, then we should avoid space-time coordinates from the beginning. Without coordinates the domain of relativity lies solely in the properties of the embedding metric space, and the domain of quantum mechanics resides in properties of local wave functions that are assigned to particles. These two domains overlap ‘locally’ where Lorentz invariant quantum mechanics is assumed. Photons in the space ‘between’ massive particles have a reduced function and definition. As a result, the variables of a particle’s wave packet are wholly contained inside the packet and are coordinate independent. They move with a particle’s wave function in the embedding metric space, but they do not locate it in that space. No particle has a *net velocity* or *kinetic energy* when considered in isolation, for these quantities require a coordinate framework for their definition. This alone reveals the radical nature of removing coordinates *entirely* from physics, and the inadequacy of general relativistic invariance for that purpose. Another consequence of this program is that energy and momentum are not propagated through the empty space between particles. Although particle energy, momentum, and angular momentum are conserved in local interactions, we say that nature does not provide for the exchange of energy and momentum between separated particles. We are the ones who arrange these transfers through our introduction of regional coordinates that we use to give ourselves the big picture. It facilitates analysis. The organizing power of coordinates and an opportune distribution of matter in space and time often allows us to find a system of coordinates that supports regional conservation; however, we can also find coordinates that do not support conservation. Therefore, regional conservation is coordinate dependent. 
It is not an invariant idea. It follows from a favorable construction on our part rather than from something intrinsic to the system[^2]. General relativity is a product of energy-momentum conservation that relies on regional coordinates for its meaning. It therefore joins regional conservation principles as something coming from coordinate construction rather than something fundamental. It is found, for instance, that while the metric tensor $g_{\mu\nu}$ can be defined at any event inside the wave packet of a massive particle, there is no trans-coordinate continuous function $g_{\mu\nu}$ associated with it. That is, a continuous metric tensor is not *physically* defined. Therefore derivatives of $g_{\mu\nu}$ at an event are not physically defined. General relativity suffers accordingly. The separation we establish between quantum mechanics and general relativity avoids a clash of these mismatched disciplines \[[@tH; @JM]\], and weighs in favor of quantum mechanics. And finally, a new definition of state is proposed in this paper. In the absence of regional coordinates there is no common time for two or more particles, so a state definition is proposed that spans the no-man's-land between particles. It is shown in another paper how to write the Hamiltonian for a system of separated particles of this kind \[[@RM1]\]. The new definition of state and the Hamiltonian that applies to it impose a consistent framework on a system of trans-coordinate particles. If an atom emits a photon, then the system’s energy and momentum will be locally conserved. If that photon is not subsequently detected in another part of the universe it will essentially disappear from the system because a photon in isolated flight is energetically invisible. This does not violate conservation principles because those principles are satisfied at the emission site. If the photon is detected somewhere else, then the energy and momentum at the detector site will also be conserved.
The difficulty is that the energy emitted by the atom and the energy received by the detector might not be the same, so there is no general basis for claiming that energy conservation holds for the entire two-site system. That’s because nature, we say, does not care about conservation over more than one interaction. It cares only about *conservation at individual interactions*. However, regional coordinates often make it possible to compose energy differences of this kind in such a way as to validate regional conservation; and hence the great advantage of regional coordinates. They give us a useful analytic tool and a satisfying big picture as well as (sometimes) regional conservation laws. But these laws are not fundamental. They are only products of a fortunate coordinate construction. This treatment is primarily concerned with electromagnetic interactions. Partition Lines {#partition-lines .unnumbered} =============== In Minkowski space one must choose a single world line to define the future time cone of an event **a**. If there is a non-zero mass particle present in the space it should be possible to choose a unique world line at each location inside the particle’s wave packet that is specific to the particle at that location. That world line corresponds to the direction of *square modular flow* at that event. The collection of these world lines over the particle’s wave packet can be thought of as the *streamlines* of its square modular flow in space and time. They will be called *partition lines*. We also define *perpendiculars*, which are space-like lines drawn through each event perpendicular to the local partition line. We will first develop the properties of partition lines in a 1 + 1 space, and then in 2 + 1 and 3 + 1 spaces. Figure 1 is a 1 + 1 Minkowski surface with light paths given by $45^\circ$ dashed lines. Partition lines of an imagined particle wave packet are represented in the figure by the five slightly curved and more-or-less vertical lines.
They tell us that the wave packet moves to the left with ever decreasing velocity and that it spreads out as it goes. This description is not trans-coordinate because it is specific to the Lorentz frame in the diagram; but these lines provide a scaffold on which it is possible to hang a trans-coordinate wave function. ![image](tcphysfig1.eps) Partition lines pass through every part of the particle’s wave packet and do not cross one another. They are not defined outside of a wave packet. Just as the space is initially given to us in the form of a metric background, any particle is initially given in the form of partition lines with the above characteristics. The interpretation of these lines is given in the next paragraph where values are assigned to them in a way that reflects the intended *given conditions*. These conditions are not ‘initial’ in the usual temporal sense, but are rather ‘given’ over the space-time region of interest. Let the third partition line from the left (i.e., the middle line in Fig. 1) portion off 1/2 of the packet, so half of the particle lies to the left of an event such as **a** in the figure. That is, there is a 0.5 probability that the particle will be found on the perpendicular extending to the left of **a**. This statement is assumed to have objective invariant meaning. Of course, the other half of the particle lies to the right of event **a** on the perpendicular through **a**. The middle partition line is made up of all the events in the wave packet that satisfy this condition, so they together constitute a continuous line to which we assign the value of 1/2. There is a 0.5 probability that the particle will be found *somewhere* on the left side of this line when the included events are all those on both sides of the line. In a similar way we suppose that the second partition line in Fig. 
1 portions off, say, 1/4 of the packet on the perpendicular to the left of an event **b**, and that the first line portions off 1/100 of the particle or some other diminished amount. We further assume that the fifth line goes out to 99/100 of the particle packet, so the entire particle is represented by streamlines that split the particle into objectively defined fractional parts. When a wave function is finally assigned we will show that its total square modulus remains ‘constant in time’ between any two partition lines in 1 + 1 space, and is similarly confined in higher dimensions. Neighborhoods {#neighborhoods .unnumbered} ============= Every event inside the wave packet has a unique time direction defined for it by the partition line passing through the event. This allows us to define unique *inertial* neighborhoods associated with each event. ![image](tcphysfig2.eps) Consider a flat space inside the wave packet of a massive particle, and assign a Minkowski metric that is intrinsic to that space. Beginning with an event **a** in Fig. 2a, proceed up the particle’s partition line through **a** by an amount $\Delta$, which is the magnitude of the invariant interval from event **a** to an event **b**. This interval **ab** is negative and identifies the chosen time axis inside the particle packet at event **a**. Then find event $\textbf{b}'$ by proceeding down the partition line through the same invariant interval $-\Delta$. Construct a backward time cone with **b** at its vertex and a forward time cone with $\textbf{b}'$ at its vertex and identify the intersection events **c** and $\textbf{c}'$. Since these events are embedded in a flat space, the positive space-like interval $\textbf{cc}'$ will pass through event **a** and will be bisected by it with $$\textbf{ca} = \textbf{ac}' = \textbf{cc}'/2 = \Delta > 0$$ For any $\Delta$, all of the events included in the intersection of the light cones of **b** and **b**$'$ are defined to be a *neighborhood* of event **a**.
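The light-cone construction above can be verified numerically in an auxiliary flat chart; coordinates are used here purely as scaffolding for the check, which the paper's program otherwise avoids. Place **a** at the origin, **b** and **b**$'$ a timelike interval $\Delta$ above and below it, and intersect the backward light cone of **b** with the forward light cone of **b**$'$.

```python
import numpy as np

def cone_intersection(delta):
    """Intersect the backward light cone of b = (t=+delta, x=0) with the
    forward light cone of b' = (t=-delta, x=0) in an auxiliary flat chart.
    Backward cone of b : t =  delta - |x|
    Forward cone of b' : t = -delta + |x|
    Equating the two gives |x| = delta and t = 0, i.e. the events c and c'."""
    x = delta              # solution of delta - |x| = -delta + |x|
    t = delta - abs(x)     # = 0
    return np.array([t, -x]), np.array([t, x])   # c, c' as (t, x)

def interval2(e1, e2):
    """Signed squared interval, s^2 = -(dt)^2 + (dx)^2 (spacelike > 0)."""
    dt, dx = e2 - e1
    return -dt**2 + dx**2
```

With **a** at the origin, the spacelike lengths satisfy $\textbf{ca} = \textbf{ac}' = \Delta$ and $\textbf{cc}' = 2\Delta$, reproducing the bisection equation above.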
The events along the line **cc**$'$ are defined to be a *spatial neighborhood* of **a**. The limit as $\Delta$ goes to zero is identical with the limit of small neighborhoods around **a**. Curved Space {#curved-space .unnumbered} ============ The above considerations for a ‘flat’ space also apply *locally* in any curved space, so we let the conditions in Fig. 2a be generally valid in the limit as $\Delta \rightarrow 0$. Figure 2b shows the resulting Minkowski diagram in the local inertial system with $\hat{x}$ and $\hat{t}$ as the space and time unit vectors in the directions $\textbf{ac}'$ and $\textbf{ab}$ respectively. The unit of these vector directions is given by $\sqrt{\Delta}$ in meters, although we have not established coordinates in those units along those directions. Specifically, we have not established a unique numerical value attached to an event or a distant zero-point for that value; so the development so far is consistent with the trans-coordinate (or coordinate-less) aims of this paper. The unit vectors at event **a** will be referred to as the *local grid* at **a**, where the time direction is always along the partition line going through **a**. These definitions have nothing to do with the curvature of the space in the wave packet at or beyond the immediate vicinity of **a**. Every event inside a particle packet has a similar local grid. The local grids of other events in the neighborhood of event **a** will be continuous with the local grid at **a** in this 1 + 1 space, but not for higher dimensions as we will see. The Wave Function {#the-wave-function .unnumbered} ================= We specify the quantum mechanical wave function at each event **a** in a particle wave packet over the space-time region of interest $$\varphi(\textbf{a})$$ which is identified in the manner of Euclid’s geometry since there are no coordinate numbers involved. There are four auxiliary conditions on this function.
**First**: The function $\varphi(\textbf{a})$ is a complex number given at event **a** that is continuous with all of its neighbors. The units of $\varphi$ are $m^{-1/2}$ in this 1 + 1 space. **Second**: Partial derivatives of $\varphi(\textbf{a})$ are defined in the limit of small neighborhoods around **a** (i.e., for small values of $\Delta$). $$\begin{aligned} \partial\varphi(\textbf{a})/\partial x &=& \lim_{\Delta\rightarrow 0} \frac{\varphi(\textbf{c}') - \varphi(\textbf{c})}{2\sqrt{\Delta}} \\ \partial\varphi(\textbf{a})/\partial t &=& \lim_{\Delta\rightarrow 0} \frac{\varphi(\textbf{b}) - \varphi(\textbf{b}')}{2\sqrt{\Delta}} \nonumber \end{aligned}$$ The second spatial derivative is then $$\partial^2\varphi(\textbf{a})/\partial x^2 = \lim_{\Delta\rightarrow 0} \frac{\partial\varphi(\textbf{c}')/\partial x - \partial\varphi(\textbf{c})/\partial x}{2\sqrt{\Delta}}$$ Notice that we have defined derivatives in the directions $\hat{x}$ and $\hat{t}$ without using coordinates to ‘locate’ or numerically ‘identify’ events along either of those directions. Only $\Delta$ *intervals* between events along the time line are taken from the invariant metric space. **Third**: The value of $\varphi$ at event **a** is related to its neighbors through the *dynamic principle*. This principle determines how $\varphi(\textbf{a})$ evolves relative to its own time against the metric background, and how it relates spatially to its immediate neighbors. **Fourth**: The objective fraction of the particle found between the partition line through event **c** in Fig. 2a and a partition line through event $\textbf{c}'$ is equal to $f_{cc'}$. In the limit as $\textbf{cc}'$ = $ 2\Delta$ goes to zero the fraction of the particle between differentially close partition lines goes to $df$. 
Normalization of $\varphi(\textbf{a})$ is strictly ‘local’ and requires $$\varphi^*(\textbf{a})\varphi(\textbf{a}) = \lim_{\Delta \rightarrow 0}\frac{f_{cc'}}{2\Delta}$$ It follows that $$\varphi^*(\textbf{a})\varphi(\textbf{a}) = \varphi^*(\textbf{b})\varphi(\textbf{b}) = \varphi^*(\textbf{b}')\varphi(\textbf{b}')$$ because the fractional difference between any two partition lines is the same over any perpendicular. Therefore, the square modular flow will be *constant in time* between any two partition lines as previously claimed. These four auxiliary conditions must be satisfied when taken together with the initially given partition lines, but there is no guarantee that there exists a wave function that qualifies. *Finding a solution* therefore consists of varying the partition lines (i.e., the given conditions) until a wave function exists that satisfies these conditions. The choice of a world line based on partition lines is not a coordinate choice, nor is the limiting procedure that follows. So these definitions are not just coordinate invariant, they are fully *coordinate free*. They allow us to find physically credible derivatives of any continuous function in a way that is independent of the curvature of the surrounding space, and to found a physics on that basis. One Particle {#one-particle .unnumbered} ============ Partition lines do not extend beyond the particle, so in the absence of ‘external’ coordinates that do extend beyond the particle (in an otherwise empty space) there is no basis for claiming that the particle has a *net velocity, kinetic energy, or net momentum*. This will be true of both zero and non-zero mass particles. It is a consequence of a trans-coordinate physics that particles take on these dynamic properties only in interaction with other particles.
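The local normalization condition stated above, $\varphi^*\varphi = \lim_{\Delta\to 0} f_{cc'}/2\Delta$, can be checked numerically for a familiar wave function. This is an illustrative sketch only: a stationary 1-D Gaussian packet is assumed, with an ordinary spatial coordinate standing in for the invariant perpendicular intervals.

```python
import numpy as np
from scipy.integrate import quad

SIGMA = 1.0  # packet width (placeholder value)

def phi(x):
    """Illustrative 1-D Gaussian wave function; |phi|^2 integrates to 1."""
    return (2.0 * np.pi * SIGMA**2) ** -0.25 * np.exp(-x**2 / (4.0 * SIGMA**2))

def fraction_between(a, delta):
    """Fraction f_cc' of the particle between partition lines a spatial
    interval 2*delta apart, centred on the event a."""
    f, _ = quad(lambda x: phi(x) ** 2, a - delta, a + delta)
    return f

def density_estimate(a, delta):
    """f_cc' / (2*delta): converges to phi*(a) phi(a) as delta -> 0."""
    return fraction_between(a, delta) / (2.0 * delta)
```

As $\Delta$ shrinks, the ratio approaches $\varphi^*(\textbf{a})\varphi(\textbf{a})$ at the central event, in line with the condition above.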
A massive particle has an ‘internal’ energy defined at each event in its wave packet, but since that may differ from one event to another there is no single internal energy representing the particle as a whole. Similarly, each part of the particle’s wave packet follows its own world line, so there is no single world line for the particle as a whole, as shown in Fig. 1. It is our claim that nature attends to the particle as a whole by dealing separately with each part. One exception is that the particle as a whole does produce a gravitational disturbance in the background invariant metric that has its origin in the regional distribution of the particle’s internal mass/energy. Two Particles {#two-particles .unnumbered} ============= Figure 3 shows the partition lines of two separated massive particles, where each has its own definition of a grid that is different from the other particle’s. It is a consequence of the trans-coordinate picture that these particles in isolation will seem to have nothing to do with one another. However, the positional relationship of one to the other is objectively defined in the metric space in the background of both. Every event in the wave packet of each particle has a definite location in the metric space, and that fixes the positional relationship of each part of each particle with other parts of itself and with other particles. In addition, each massive particle produces a gravitational disturbance that has an invariant influence on the other. That influence is a function of the relative velocity between the two, even though kinetic energy is not defined for either one. Kinetic energy is a coordinate-based idea as has been said, whereas metrical positions and gravitational disturbances in the metric are invariant. We assume that the latter are based solely on graviton activity.
![image](tcphysfig3.eps) A Radiation Photon {#a-radiation-photon .unnumbered} ================== The pack of four lines that rise along the light line in Fig. 3 is intended to represent the partition lines of a radiation photon that has a group velocity equal to the velocity of light. Photons can have partition lines as do massive particles. They separate the photon into its fractional parts, which is a separation by phase differences. The photon in Fig. 3 is confined to the packet that is distributed over the perpendicular (dashed) light path $l$. Normally in physics we do not hesitate to use coordinates in empty space, so a photon by itself will be given a period and wavelength relative to that coordinate frame, and hence an energy and momentum. But if coordinates in empty space have no legitimate place in physics, then like any other particle a photon by itself will lack translational variables (e.g., energy and momentum); and since it has no internal energy (i.e., rest mass/energy), the gravitational perturbation of its light line will be zero. There is no photon mass/energy to perturb it. It should also be clear from the diagram in Fig. 3 that the photon bundle has *no definable* wavelength or frequency at event **k**. Vacuum fluctuations exist in the ‘empty’ space between massive particles and their polarizing effects are physically significant. But if vacuum fluctuation particles are not themselves polarized they will not interact with a passing photon (which would result in a scattering of the photon). So the photon cannot use these particle grids to define its period and wavelength. Fluctuation particles do not contribute in any other way to the discussion, so their presence is ignored. Information Transfer {#information-transfer .unnumbered} ==================== It is the photon’s phases that effect a transfer of energy and momentum from one particle to another. This is shown in Fig.
4 where two particles are narrowly defined to be moving over world lines $w_1$ and $w_2$. The two dashed lines represent the partition lines of a passing photon with ‘relative’ phase differences given by $\delta\pi$. If the photon wave is a superposition of two different frequencies 1 and 2, then $\delta\pi = \delta\pi_1 + \delta\pi_2$. ![image](tcphysfig4.eps) A photon interacting with the first particle at event $\textbf{a}$ will have a local energy and momentum given by $e_\gamma(\textbf{a})$, $p_\gamma(\textbf{a})$, and as it interacts with the second particle at event $\textbf{b}$ it will have a local energy and momentum given by $e_\gamma(\textbf{b})$, $p_\gamma(\textbf{b})$. These quantities are related through the phase relationships that are transmitted between particles, and are articulated in the local grid of the interacting particle. $$\begin{aligned} \text{a photon at event } \textbf{a}: &\quad e_\gamma(\textbf{a}) = \hbar\Sigma_i\omega_i(\textbf{a}) \hspace{.5cm} p_\gamma(\textbf{a}) = \hbar\Sigma_ik_i(\textbf{a}) \\ \text{a photon at event } \textbf{b}: &\quad e_\gamma(\textbf{b}) = \hbar\Sigma_i\omega_i(\textbf{b}) \hspace{.5cm} p_\gamma(\textbf{b}) = \hbar\Sigma_ik_i(\textbf{b}) \end{aligned}$$ where $\omega_i(\textbf{a})$ = $\partial_t\pi_i(\textbf{a})$ and $k_i(\textbf{a})$ = $\partial_x\pi_i(\textbf{a})$. These derivatives refer to the local grid of each event in each particle, and are defined like those in Eq. 2. Electromagnetic Variables {#electromagnetic-variables .unnumbered} ========================= The parallel lines passing by event **k** in Fig. 3 are lines of constant ‘relative’ phase of the photon. Differential phase changes $\delta \pi$ over a light line like $l$ are preserved across the length of the photon wave packet.
However, since the photon in flight between two particles does not have its own local grid, components cannot be defined for the electromagnetic field any more than they can for energy and momentum. In empty space the *electromagnetic potential* of a radiation photon is normally given by a four-vector $A^\mu(\textbf{a})$, where the d’Alembertian operating on $A^\mu(\textbf{a})$ is equal to zero. However, trans-coordinate physics cannot use the d’Alembertian in empty space although the photon’s behavior there is lawful – it follows a dynamic principle of some kind. Where a grid exists we can give analytic expression to the dynamic principle; but where there is no grid we must settle for another kind of description. All we can do in this case is notice the physical manifestations of the dynamic principle, and there are just four in 3 + 1 space. First, different relative phases appear on different parallel layers along a light line as in Figs. 3 and 4. There is a definite phase relationship between any two of these layers. Second, the probability that a photon goes into a particular solid angle from an emission site **a** depends on the distribution given by an atomic decay at **a**, or by the interaction of $A_\mu(\textbf{a})$ with the current $j(\textbf{a})$ at that site. The only mid-flight indication of the strength of a signal in a given solid angle is the probability of a photon emission in that direction. Third, we say that the magnitude of $A_\mu$ arriving at a material target is *determined by* that probability – rather than the probability being determined by the magnitude. In the case of a single photon (or for any definite number of photons) the components of $A_\mu$ at a material destination are indeterminate, and the magnitude of the transmission diminishes with the square of the distance from the source by virtue of the constancy of photon number in a solid angle. The fourth property provides for Huygens’ wavelets.
So far we have considered a photon as moving undeflected in an outward direction from a source along a light cone. We now say that an event such as **k** in Fig. 3 acts as a point source of radiation in all directions. The wavelet from **k** has the same (relative) phase as event **k**, and it reradiates the “probability intensity” at **k** uniformly in all directions with a velocity $c$. Two wavelets that arrive at a third event **m** have a definite phase difference that produces interference there. Notice that a Huygens’ electromagnetic wavelet is a ‘scalar’ like the primary wave that gives rise to it. The vector nature of an EM wave does not appear until it interacts with matter, and only then when an indefinite number of photons are phased in such a way as to make that happen. Photon Scattering {#photon-scattering .unnumbered} ================= If a photon scatters at an event **a** inside the wave packet of a particle, the grid for that purpose will be the particle’s grid at **a**. There will be no quantum jump or wave collapse in a scattering of this kind. Instead, some fraction of the particle $p$ and photon $\gamma$ will evolve continuously into a scattered wave that consists of a correlated particle $p'$ and a photon $\gamma'$. Energy and momentum will be defined for each of the four particles $p$, $p'$, $\gamma$, and $\gamma'$ that are mapped together on that common grid of $p$ at event **a**, and the dynamic principles of these particles (plus their interaction) will ensure that total conservation applies to all four. Each component of the scattered wave of $p'$ will also have a grid that is well defined at event **a**, and is a Lorentz transformation away from the grid of $p$. Energy and momentum will be conserved on the grid of each component of $p'$.
The velocity of any component of $p'$ relative to $p$ is not explicitly given in the trans-coordinate case; however, it is implicit in the Lorentz transformation that is required to go from the locally evolving grid of $p$ to that of $p'$.

Virtual Photons {#virtual-photons .unnumbered}
===============

So far we have talked about *radiation* photons that travel at the velocity of light. *Virtual* photons (in a Coulomb field) do not bundle themselves into wave packets, so they do not have a ‘group’ velocity that requires the identification of a world line over which the group travels. It makes no sense to say that they travel over light lines. It may therefore be possible to give the virtual photon a local grid in the same way that we created a grid for particles with non-zero mass. Its vector nature would then be more evident. However, we choose not to do that. It is unnecessary and would put the virtual photon grid in competition with the particle grid during an interaction between the two. That would necessitate a choice between one or the other in any case; so *all* photons will be considered gridless in this treatment – just like radiation photons. They all lack internal energies. They also lack translational variables such as energy and momentum when in transit between particles; and they acquire these values only when they overlap the charged particles with which they interact. We say in effect that there is no fundamental difference between ‘near’ field photons and ‘far’ field photons in an electromagnetic disturbance.

Gravity {#gravity .unnumbered}
=======

If a photon in transit (radiation or virtual) has no frequency or translational energy $h\nu$, it will not have a weight in the presence of a gravitating body or create a curvature in the surrounding metric space. However, massive objects having rest energy *do* create curvatures in their vicinity in which *light line geodesics* are well defined.
We claim that radiation photons follow these geodesics without themselves contributing to the curvature of space. Although photons in transit are massless and hence weightless, they nonetheless behave as though they are attracted to gravitational masses. This does not mean that current photon trajectories are in error, or that particle masses have to be adjusted. The mass of an electron found from the oil drop experiment is currently assumed to include the mass of the accompanying electromagnetic field. From a trans-coordinate point of view the electric field surrounding a charged particle is not defined, so this experiment reveals the ‘bare’ mass of the electron. The mass of the Sun obtained from the period of a planet is normally assumed to include the mass of the radiation field surrounding the sun. From a trans-coordinate point of view the radiation field is not defined, so this calculation reveals the ‘bare’ mass of the sun – that is, the total number of each kind of solar particle times its mass. These changes will not result in observational anomalies in particle theory or astronomy, for we have no way to separately weigh the electromagnetic field of a charged particle, or to count the number of particles in the Sun.

Binding Energy {#binding-energy .unnumbered}
==============

Even in coordinate language we are able to give up the idea of electromagnetic field energy, so the binding energy of particles in a nucleus can be considered a property of the particles themselves. Imagine two positive particles of rest mass $m_0$ that approach one another in the center-of-mass system with kinetic energy $T$. The momentum of one of these particles decreases as a result of virtual photon exchange; however, its energy will not change. A virtual photon leaving one particle will carry away a certain amount of energy, but that energy is restored in equal amount by the virtual photon that is received from the other particle.
This means that the net energy of the advancing particle will be unchanged during the trip. When the particle reaches the point at which it has lost all of its kinetic energy and has combined with the other particle due to nuclear forces, we would say that the initial kinetic energy of one of them has become its binding energy $BE$, where $$E = BE + m_0c^2 = T + m_0c^2$$ As the particle moves inward its energy squared $E^2 = P^2c^2 + m_0^2c^4$ remains constant while $P^2$ goes to zero. Therefore $E^2$ becomes identified with an increased mass squared $M^2c^4$, giving $E = Mc^2 = BE + m_0c^2$. Then $$\text{Binding energy} = Mc^2 - m_0c^2$$ In relativity theory a particle’s (relativistic) mass is a function of kinetic energy. We can also say it is a function of an interaction with other particles, thereby avoiding any notion of ‘field’ energy. These ideas are peculiar to the center-of-mass coordinate system but are not correct from a trans-coordinate point of view. Fundamentally there is no energy associated with the particle as a whole. There is only the time derivative of $\varphi$ at each separate event inside the particle’s wave packet. There is also no kinetic energy of the particle or binding energy of a captured particle. The ‘correct’ trans-coordinate account of a Coulomb interaction is given below.

Virtual Interaction {#virtual-interaction .unnumbered}
===================

The virtual (Coulomb) interaction cannot be thought of as a single virtual photon interacting with a single charged particle because that is not energetically possible. However, the interaction is *continuous* like Compton scattering; so in spite of the fact that the theory is based on photons the interaction does not manifest itself as discrete quantum jumps. A particle in a Coulomb interaction is therefore continuously receiving and transmitting equal amounts of energy, which means that it undergoes a change of momentum with no change of energy.
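The bookkeeping in the binding-energy derivation above, with the total energy held fixed while the momentum goes to zero, implies that the binding energy equals the initial kinetic energy. This can be verified numerically; a minimal sketch in units where $c = 1$, with illustrative rest-energy and kinetic-energy values:

```python
import math

m0 = 938.272   # illustrative rest energy m0*c^2 in MeV (roughly a proton)
T = 5.0        # illustrative initial kinetic energy in MeV

E = T + m0                      # total energy, unchanged during the approach
P = math.sqrt(E**2 - m0**2)     # initial momentum from E^2 = P^2 + m0^2 (c = 1)

# At capture the momentum has gone to zero while E is unchanged, so E is
# reinterpreted as an increased rest mass M (E = M c^2 with c = 1):
M = E
binding_energy = M - m0         # BE = M c^2 - m0 c^2

# The binding energy is exactly the kinetic energy that was given up:
assert math.isclose(binding_energy, T)
```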
The resulting behavior of the charged particle is given by a continuum of particle grids along its partition line that are related by infinitesimal Lorentz transformations. Energy and momentum are conserved on any one of these grids. Since each particle is well localized in the background metric space, predictable continuous transformations of the world line of each event in the packet are *all that is necessary* to determine the packet’s complete behavior. Nature is not concerned with the coordinate-based energies of the previous section and does not need to be.

Regional Coordinates and Conservation {#regional-coordinates-and-conservation .unnumbered}
=====================================

Trans-coordinate physics does not provide for energy and momentum conservation in the region between particles. We cannot assign frequency or wavelength to a radiation photon in an otherwise empty space as we have seen, so we cannot say that it carries energy $h\nu$ or momentum $h/\lambda$ from one part of space to another. Also, a massive particle has no velocity or acceleration when it is considered in isolation. It moves into its future time cone over the invariant metric background following its dynamic principle, but that path does not break down into spatial and temporal directions relative to which the wave packet can be said to be moving with a kinetic energy or velocity $v$. So it cannot be said to carry a net momentum $mv$. Regional conservation of these quantities is therefore related to the possibility of system-wide coordinates that *we* construct. Having done that we can define a metric tensor throughout the region. That is, from the background invariant metric it is generally possible to find the continuous metric tensor $g_{\mu\nu}$ that goes with the chosen coordinates. If that tensor is time independent then *energy* will be conserved in the region covered by those coordinates.
If it is independent of a spatial coordinate such as $x$, then *momentum in the $x$-direction* will be conserved in the region covered by the coordinates. If the metric is symmetric about some axis (in 3 + 1 space) then *angular momentum* will be conserved about that axis \[[@RM2]\]. It is therefore useful for us to construct system-wide coordinates in order to take advantage of these regional conservation principles. It is important to remember however that we do this, not nature. Nature has no need to analyze as we do over extended regions. For the most part it only *performs* on a local platform. If there is a difference in energy between $e_\gamma(\textbf{a})$ and $e_\gamma(\textbf{b})$ in Eq. 4, it is possible that the photon in Fig. 4 is *Doppler shifted* because of a relative velocity between the two particles, or that particle \#2 is at a different *gravitational potential* than particle \#1. When a coordinate system is chosen the velocity of one particle is decided relative to the other particle, and only then will the extent of the Doppler influence be determined. Only then will it be clear how the organizing power of a coordinate system makes use of gravity to explain the non-Doppler difference between $e_\gamma(\textbf{a})$ and $e_\gamma(\textbf{b})$.

Trans-Coordinate Tensors {#trans-coordinate-tensors .unnumbered}
========================

Every event in a massive particle wave packet has a grid associated with it. In 3 + 1 space the spatial part is a three dimensional grid. When this is combined with the metric background the metric tensor $g_{\mu\nu}$ is determined at each event. One can therefore raise and lower indices of vectors inside the wave packet in the trans-coordinate case. However, we do not assign derivatives to $g_{\mu\nu}$ because it is not a uniquely continuous function.
For a given $g_{\mu\nu}(\textbf{a})$ at event **a** there are an infinite number of ways that a continuous $g_{\mu\nu}$ field ‘might’ be applied in the region around **a**, corresponding to the infinite number of coordinate systems that ‘might’ be employed in that region. But if we do not attach physical significance to coordinates, then physical significance cannot be attached to a continuous metric tensor. Derivatives of that tensor are therefore not defined in trans-coordinate physics. This applies to the derivatives in Eq. 2 as well as to the covariant derivatives of Riemannian geometry. Therefore, Christoffel symbols are not defined in trans-coordinate physics. It follows that the Riemann and Ricci tensors and the field equation of general relativity are also not fundamentally defined. Like energy and momentum conservation from which it is derived, the gravitational field equation is a regional creation of ours that is analytically useful and that gives us a satisfying big picture – but that is all. Of course the ‘curvature’ is objectively defined everywhere because it follows directly from the invariant metric background in which everything is embedded. We can be guided by our experience with general relativity when choosing the most useful coordinate system in a given region of interest. A metric tensor can then be defined; and from the symmetry of its components, energy, momentum, and angular momentum conservation can be established over the region. However, there is no assurance that one can always find an ‘agreeable’ system, for general relativity does not guarantee that the chosen coordinates will conserve energy, momentum, or angular momentum without introducing special pseudo-tensors that are devised for that purpose \[[@LL]\].

Gravitons {#gravitons .unnumbered}
=========

If general relativity is not fundamental then gravitons must be the exclusive cause of gravitational effects.
The geodesics that result from graviton interactions between massive particles are not the smooth curves of general relativity, but are quantized by discrete graviton interactions. The wider effect of gravitons is to bend the background metric space between geodesics. Their influence will spread through the invariant metric space; and as a result, the curvature produced by gravitons will follow the average curvature of general relativity except that it will have the jagged edge of quantization. General relativity is therefore a science that only approximates the underlying reality. It is a science we initiate when we introduce the coordinates that permit the definition of metric tensor derivatives and allow the formulation of Einstein’s field equation.

Internal Coordinates {#internal-coordinates .unnumbered}
====================

In addition to regional coordinates that cover the space between particles, we want to give ourselves an *internal picture* of the particle. We want the wave function $\varphi(\textbf{a})$ in Eq. 1 in a form that permits analysis. To do this starting at event **a**, integrate the minus square root of the metric along the partition line going through **a** and assign a time coordinate $t_a$ with an origin at **a**. Then integrate the square root of the metric over the perpendicular going through event **a** and assign a space coordinate $x_a$ with an origin at **a**. The coordinates $x$ and $t$ may be extended over the entire object yielding a wave function that can be written in the conventional way $\varphi(x, t)$. These internal coordinates will have the same status as external coordinates. They are only created by us for the purpose of analysis. With internal coordinates we can integrate across one of the perpendiculars to find the *width* of the wave packet. It should also be possible to integrate the square modulus over a perpendicular to find the *total normalization*.
That total will be equal to 1.0 if $df$ is equal to the fraction of the particle sandwiched between two differentially close partition lines as claimed. We can also use internal coordinates to give expression to the internal variables of a particle, such as its total internal energy and net momentum.

Three and Four Dimensions {#three-and-four-dimensions .unnumbered}
=========================

Imagine that a particle’s wave packet occupies the two-dimensional area shown on the space-like surface in Fig. 5. The surface is divided into a patchwork of squares, each of which is made to contain a given fraction of the particle, like 1/100th of the particle.

![image](tcphysfig5.eps)

Each of these squares has four distinguishable crossing points or corners. A similar two-dimensional scaffold is constructed on all of the space-like surfaces through which the particle passes in time, thereby creating a continuous 2 + 1 scaffold. Each of the enclosed areas generated in this way is required to contain $1/100$ of the particle, and its corners will constitute the partition lines of the particle. As in the 1 + 1 case, these lines may be thought of as streamlines of the square modular flow of the particle through time. In the limit as this fraction goes to zero, partition lines pass through each event on the space-like surface in the figure and they do not cross one another. It is possible to find the direction of the partition line through an event **a** without having to erect a system-wide scaffolding like that of Fig. 5. Any small neighborhood of **a** has a probability that the particle will be found within it; and that probability will be ‘minimal’ when the partition line going through **a** coincides with the preferred direction of time for that neighborhood. Space-time directions are chosen for a given partition line in a way that is similar to the procedure in Fig. 2. Starting with an event **a** in Fig. 6a, move up its partition line a metrical distance $-\Delta$ to event **b**.
Then find $\textbf{b}'$ by proceeding down the partition line the same invariant interval $-\Delta$. Construct a backward time cone with **b** at its vertex and a forward time cone with $\textbf{b}'$ at its vertex, and identify the closed two-dimensional loop intersection shown in Fig. 6a in the limit as $\Delta$ goes to zero. In the local inertial system, two perpendicular unit vectors $\hat{x}$ and $\hat{y}$ are chosen along radii of the circle of radius $\Delta$ that spans the spatial part of the local grid at **a**. For any $\Delta$, choose a space-like line beginning at **a** that is aligned with $\hat{x}$ and extends to the circumference of the circle in Fig. 6a. It intercepts the circle at the event we call $\textbf{c}'$. The space-like line that begins with $-\hat{x}$ intercepts the circle at the event we call **c**. These space-like lines do not have to be ‘straight’, so long as they are initially aligned with the unit vector and intercept the circle in only one place.

![image](tcphysfig6.eps)

The spatial grids of nearby events such as **a** and $\textbf{a}'$ in Fig. 6b do not have to line up in any particular way. Even if they are in each other’s spatial neighborhood for some value of $\Delta$, $\hat{x}$ and $\hat{x}'$ will generally point in different directions. In 3 + 1 space the intersection of a backward and forward time cone will produce a spherical surface like the one pictured in Fig. 6c. In this case choose four mutually perpendicular unit vectors $\hat{x}$, $\hat{y}$, $\hat{z}$, and $\hat{t}$ to form the local grid at event **a**. As before, the orientation of the spatial part of these grids is of no importance. They may be arbitrarily directed because their only purpose is to locally define all three spatial derivatives of the function $\varphi$.
That function is continuous throughout the wave packet in any direction; therefore, it does not matter which grid orientation is chosen at any event for the purpose of specifying the function and its derivatives there. The Dirac solution has four components $\varphi_\mu$ where each satisfies all of the above conditions in the 3 + 1 directions. Since every event on the surface of the sphere in Fig. 6c locates a partition line, the event **a** is enclosed by a sphere with a differential volume $d\Omega$ that contains a differential fraction $df$ of the entire particle, where $$\varphi^*(\textbf{a})\varphi(\textbf{a})\,d\Omega = df$$ which normalizes the 3 + 1 wave function.

Applying the Dynamic Principle (3 + 1) {#applying-the-dynamic-principle-3-1 .unnumbered}
======================================

The third condition on a wave function $\varphi (\textbf{a})$ in Eq. 1 requires that the dynamic principle applies throughout the space. This can be done in the 3 + 1 space of an event **a** by using the grid defined in Fig. 6c. Since we can do this at any event and for any orientation of the grid, we state the more general form of the third condition:

> *The wave function $\varphi (\textbf{a})$ of a particle at any event **a** is subject to a dynamic principle that is applied locally to any four mutually perpendicular space-time directions centered at **a**, where time is directed along the partition line through **a**. This principle determines how $\varphi (\textbf{a})$ evolves relative to its own time against the metric background, and how it relates spatially to its immediate neighbors.*

The continuity condition applies to the function $\varphi$ along any finite segment of line emanating from any event.

Atoms and Solids {#atoms-and-solids .unnumbered}
================

Consider how all this might apply to a hydrogen atom. Each massive particle carries a local grid that is independently defined at each event in its wave packet.
This insures separate normalization at each event for each particle. The proton and electron grids may overlap but they need not be aligned because the particles do not directly interact. They are connected through the Coulomb field by virtual photons that carry no grid of their own. There are two interactions, one involving a virtual photon and the event grids of the proton, and one involving a virtual photon and the event grids of the electron. These are described in the section “Virtual Interaction”. In the non-relativistic case both particles can be covered by a single *common inertial frame* in which the total energy and momentum are conserved. It does no harm and it facilitates analysis to imagine that each grid in the system is aligned with this common coordinate frame. The time $t$ assigned to each proton grid and the time $t'$ assigned to each electron grid are then set equal to each other and to the time of the common inertial frame. The retarded interaction $j_\mu A^\mu$ at each end of the interaction will then give the Coulomb intensity of $(e^2/4\pi r)\delta(t - t')$ where $r$ is the distance between the particles in the common frame \[[@FMW]\]. Relativistic corrections to this occur when the spatial components of the current fourvectors are taken into account. The above inertial system is one that we impose on the atom. By itself, the system operates on the basis of individual event grids alone. A photon passing over the atom will interact with each separate event in the proton wave function throughout its volume, and with each separate event of the electron throughout its volume. Energy and momentum conservation is required at each site, but the system will not support conservation unless the interaction Hamiltonian \[[@RM1]\] includes the entire system in a ‘single’ interaction.
It is the interaction Hamiltonian that makes the difference between particles in a single interaction that conserves energy and momentum, and particles in separate interactions that may or may not conserve these quantities. In the atomic case the dynamic principle for the entire atom provides the unity that can give rise to a quantum jump that carries the product $pe\gamma$ of the proton, the electron, and the photon, into a new product $p'e'\gamma'$, conserving energy and momentum in the process. In the case of macroscopic crystals, metals, and other stationary solid forms in a flat space, each event in each particle wave packet has its own space-time grid and is separately normalized. However, they are all interactively aligned to such an extent that we can usually impose a single common coordinate system. We require the coordinates of this system to co-move with the average density of matter in the solid. If that system has the right symmetry properties it will insure macroscopic energy, momentum, and angular momentum conservation.

Containers {#containers .unnumbered}
==========

![image](tcphysfig7.eps)

Let the central region of the hollow spherical container in Fig. 7 be a general relativistic space of unknown curvature. The center of the sphere is initially empty (suppressing vacuum fluctuations). A massive object leaves event **a** and at some later time arrives at event **b**. At each event along the way it is propelled by its dynamic principle into its forward time cone; and since the resulting path of the packet cannot be broken down into spatial and temporal parts, its velocity, energy, momentum, and distance traveled on that path are not determined. The particle will have ‘internal’ energy and momentum that are derived from internal coordinates, but these will not be its ‘translational’ energy and momentum in the usual sense going from **a** to **b**.
A radiation photon will not even have these internal properties over its path; for it will only acquire the energy and momentum in Eq. 4 when it encounters a particle in the container wall. We can certainly construct a common coordinate system over this system, extending the co-moving coordinates of the solid into the center of the sphere. We will then know how far the object goes and its velocity along the way. If the metric of that system is time independent, then total energy will be conserved throughout the trip from event **a** to event **b**. Although we can usually cover the system with extended coordinates and a metric, there is no guarantee that the resulting system will conserve total energy and momentum without introducing the pseudo-tensors of [@LL].

A Gaseous System {#a-gaseous-system .unnumbered}
================

The introduction of many gas particles in the space of Fig. 7 does not change anything of substance. Molecular collisions occurring on the inside surface of the container and between molecules are distinct physical events. But we still do not have a natural basis for ascribing a numerical distance between any of these collisions, or the molecular velocities between them. Molecular collisions are here assumed to be electromagnetic in nature. Parts of the colliding molecules may or may not overlap, but they each (i.e., the internal parts of each) maintain their separate grids for the purpose of normalization. These grids do not compete with one another during a collision because the interaction between them is conducted through virtual photons, and these are declared to be gridless.

States {#states .unnumbered}
======

In coordinate physics we normally define a physical ‘state’ across a horizontal plane at some given time. This definition identifies an origin of coordinates relative to which the system’s particles are located at that time.
That scheme will not work in the trans-coordinate case because the “same time” for separated particles is undefined. Indeed, the time of a single particle at a single location is undefined. The meaning of *state* must therefore be revised. The state of a system of three particles is now given by $$\Psi(\textbf{a}, \textbf{b}, \textbf{c}) = \phi_1(\textbf{a})\phi_2(\textbf{b})\phi_3(\textbf{c})$$ where **a**, **b**, and **c** are events anywhere within each of the given wave functions, subject only to the constraint that each event has a *space-like* relationship to the others. Each of these three functions is defined relative to its own local grid and is related to its time-like successors through its dynamic principle. These events are connected by the space-like line in Fig. 8, thereby defining the state $\Psi$ of the particles that are specified along their separate world lines $w_1$, $w_2$, and $w_3$.

![image](tcphysfig8.eps)

A *successor state* can be written $$\Psi'(\textbf{a}', \textbf{b}', \textbf{c}') = \phi_1(\textbf{a}')\phi_2(\textbf{b}')\phi_3(\textbf{c}')$$ where events $\textbf{a}'$, $\textbf{b}'$, and $\textbf{c}'$ in the new state must also have space-like relationships to each other; and in addition, they are required to be in the forward time cones of events **a**, **b**, and **c** respectively. These events lie along a space-like line in Fig. 8 giving the state function $\Psi'$. Equation 5 does not say that each event has advanced by the same amount of time. It says only that each particle has advanced continuously along its own world line (i.e., along its own partition line) under its own dynamic principle, and has reached the designated ‘primed’ events. We might also let $\textbf{b}''$ replace event $\textbf{b}'$, where $\textbf{b}''$ has space-like relationships to $\textbf{a}'$ and $\textbf{c}'$ and is in the forward time cone of event **b**.
The resulting state $\Psi''(\textbf{a}', \textbf{b}'', \textbf{c}')$ is not the same as $\Psi'(\textbf{a}', \textbf{b}', \textbf{c}')$, but it is just as much a successor of the initial state $\Psi(\textbf{a}, \textbf{b}, \textbf{c})$. Also, $\Psi''$ is a successor of $\Psi'$ because $\textbf{b}''$ is a successor of $\textbf{b}'$. This definition of state is far more general than the coordinate based (planar) definition, giving us an important degree of flexibility as will be demonstrated below and in another paper. The Hamiltonian for this kind of state can be defined in such a way as to establish the *conservation of probability current* flow, as is also shown in this reference.

An Application {#an-application .unnumbered}
==============

Consider the case of an atom emitting a photon that is captured by a distant detector. The initial spontaneous decay of the atom can be written in the form $$\varphi = (a_1 + a_0\gamma)D$$ where $a_1$ is the initial state of the atom, $a_0$ is its ground state, $\gamma$ is the emitted photon, and $D$ is a distant detector that is not involved in the decay. At this point we do not specify particular events or use the new definition of state. In response to the dynamic principle, the probability current flows from the first component in Eq. 6 to the second component inside the bracket, so the first component decreases in time and the second component increases in such a way as to conserve square modulus as shown in Ref. 3. At some moment of time a stochastic choice occurs and the state undergoes a quantum jump from $\varphi$ to $\varphi'$ conserving energy and momentum and giving $$\varphi' = a_0\gamma D$$ that describes the state of the system during the time the photon is in flight from the atom to the detector. When the photon interacts with the detector the equation of state becomes $$\varphi'' = a_0(\gamma D + D'')$$ where $D''$ is the detector after capture. The atom $a_0$ is not a participant in this interaction.
Again, probability current flows from $\gamma D$ to $D''$ and this, we assume, results in another stochastic hit conserving energy and momentum and yielding $$\varphi''' = a_0D''$$ When the *new definition* of state is applied to this case Eq. 6 is written $$\varphi(\textbf{a},\textbf{c}) = [a_1(\textbf{a}) + a_0(\textbf{a})\gamma(\textbf{a})]D(\textbf{c})$$ where the atom and the photon overlap at event **a**. The photon uses the grid of the atom at event **a** to evaluate its frequency and wavelength, whereas the detector uses its own grid. Nonetheless, the dynamic principle in the form of the Hamiltonian defined in Ref. 3 applies to this interaction equation that is local to event **a**. Equation 7 for the photon in flight is then $$\varphi'(\textbf{a},\textbf{k},\textbf{c}) = a_0(\textbf{a})\gamma(\textbf{k})D(\textbf{c})$$ where the energy of the atom and the detector are given by their time derivatives at events **a** and **c**, but there is no energy associated with the independent photon in this equation. The function $\gamma(\textbf{k})$ is of the form $\exp[i\theta(\textbf{k})]$ where **k** is the event appearing in Fig. 3, so frequency and wavelength are not given. The photon’s Hamiltonian applied to this equation equals zero. Equation 9 applies so long as the photon is located on a definite partition line of the atom; but the moment the photon event appears apart from the atom, Eq. 10 will apply. Equation 8 using the above state definition is $$\varphi''(\textbf{a},\textbf{c}) = a_0(\textbf{a})[\gamma(\textbf{c})D(\textbf{c}) + D''(\textbf{c})]$$ where the photon overlaps the detector at event **c**. In this case the photon uses the grid of the detector at event **c** to evaluate its frequency and wavelength, and the energy of the atom is given by its time derivative on the grid of the atom at event **a**. Here again the dynamic principle applies to this interaction equation that is local to event **c**.
Actually the atom should be written as a product of the proton $p$ and the electron $e$ giving $a = pe$. In the parts of the atom where the proton and the electron *do not* overlap, Eq. 9 could be written as either $$\varphi''(\textbf{a},\textbf{b},\textbf{c}) = [p_1(\textbf{a})e_1(\textbf{b}) + p_0(\textbf{a})e_0(\textbf{b})\gamma(\textbf{a})]D(\textbf{c})$$ or $$\varphi''(\textbf{a},\textbf{b},\textbf{c}) = [p_1(\textbf{a})e_1(\textbf{b}) + p_0(\textbf{a})e_0(\textbf{b})\gamma(\textbf{b})]D(\textbf{c})$$ Both equations are correct. They both describe the interaction of the photon on different grids associated with different parts of the atom, where the dynamic principle applies in each case. Equations of this kind are used more extensively in Ref. 3, and the rules that govern them are given in the Appendix of that reference.

Unifying Features {#unifying-features .unnumbered}
=================

The most important non-local unifying feature of a trans-coordinate system is the *invariant metric space* in which everything is embedded. Another important unifying feature is the *dynamic principle* applied to each particle by itself and to any system of particles as a whole. *Non-local correlations* are another unifying feature of the functions generated by the dynamic principle. These qualify the location of one particle relative to the location of another particle; so the equation of state of two particles is written $\Phi = p_1p_2(\textbf{a}, \textbf{b})$, rather than $\Phi = p_1(\textbf{a})p_2(\textbf{b})$. These particles have their separate grids as always, to which the dynamic principle separately applies as always. The difference is that the range of **b** depends on the value of **a** and vice versa, and their joint values determine $\Phi$. This function is local to both events **a** and **b**, so it is a bi-local function. The fourth unifying feature is the *collapse of the wave function* over finite regions of space.
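The state definition above imposes two geometric constraints that can be made concrete in flat space: the events defining a state must be pairwise space-like separated, and each successor event must lie in the forward time cone of its predecessor. A minimal sketch in Minkowski coordinates with $c = 1$; all event coordinates below are illustrative:

```python
# Events are (t, x, y, z) tuples; Minkowski squared interval with
# signature (+,-,-,-) and c = 1.
def interval2(e1, e2):
    dt = e1[0] - e2[0]
    dr2 = sum((a - b) ** 2 for a, b in zip(e1[1:], e2[1:]))
    return dt * dt - dr2

# A set of events can jointly define a state only if every pair is
# space-like separated (negative squared interval).
def is_valid_state(events):
    return all(interval2(p, q) < 0.0
               for i, p in enumerate(events)
               for q in events[i + 1:])

# A successor event must lie in the forward time cone of its predecessor.
def in_forward_cone(event, predecessor):
    return event[0] > predecessor[0] and interval2(event, predecessor) > 0.0

a = (0.0, 0.0, 0.0, 0.0)
b = (0.1, 5.0, 0.0, 0.0)     # space-like relative to a
c = (0.2, 0.0, 7.0, 0.0)     # space-like relative to a and b
assert is_valid_state([a, b, c])

a_prime = (3.0, 0.5, 0.0, 0.0)
assert in_forward_cone(a_prime, a)          # a' is a valid successor of a
assert not is_valid_state([a, a_prime])     # a and a' cannot share one state
```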
Modified Hellwig-Kraus Collapse {#modified-hellwig-kraus-collapse .unnumbered} =============================== A local quantum mechanical measurement can have regional consequences through the collapse of a wave function. The question is: How can that superluminal influence be invariantly transmitted over a relativistic metric space? Hellwig and Kraus answered this question by saying that the collapse takes place across the surface of the backward time cone of the triggering event \[[@HK]\]. The Hellwig-Kraus collapse has been criticized because it appears to result in causal loops \[[@AA]\], but the situation changes dramatically with the new trans-coordinate definition of state. We keep the idea that the influence of a collapse is communicated along the backward time cone; however, the state of the system that survives a collapse (i.e., the finally realized eigenstate) is not defined along a “simultaneous” surface. The increased flexibility of the new state definition allows the remaining (uncollapsed) state to retain its original relationship with the event that initiates the collapse. When this program is consistently carried out, causal loops are eliminated, even in a system of two correlated particles. I will not elaborate on this idea in this paper, but it is demonstrated in detail in \[[@RM1]\]. Another Approach {#another-approach .unnumbered} ================ Invariance under coordinate transformation is not discussed at any length in this paper because coordinates are not introduced in the first place; but it should be noted that the idea of coordinate invariance is limited. General relativity is not truly independent of coordinates because it does not include *all possible* coordinates in its transformation group. It does not include ‘discontinuous’ coordinate systems, many of which are capable of uniquely identifying all the events in a space-time continuum – as is claimed to be the purpose of a space-time coordinate system. 
For example, imagine Minkowski coordinates in which the number 1.0 is added to all irrational numbers but not to rational numbers. This system is perfectly capable of systematically and uniquely identifying all of the events in a space-time continuum, but it is thoroughly discontinuous in a way that prevents it from being useful to general relativity. It only takes one example of unfit coordinates to disqualify invariance as a fundamental requirement in physics, and there are many discontinuous coordinates like this one. Of course one can always reject coordinates that don’t work in the desired way on the basis of the fact that they don’t work in the desired way. But that avoids the issue. The point is that the influence of unnatural identification labels cannot be eliminated from physics through an invariance principle that affects only a subset of unnatural identification labels. Another approach is indicated. [99]{} Aharonov, Y., Albert, D. Z. (1981) *Phys. Rev. D* **24**, 359 Feynman, R. P., Morinigo, F. B., Wagner, W. G. (1995) *Feynman Lectures on Gravitation*, B. Hatfield, ed., Addison-Wesley, New York, 33 Hellwig, K. E., Kraus, K. (1970) *Phys. Rev. D* **1**, 566 Landau, L. and Lifshitz, E. *The Classical Theory of Fields*, Pergamon Press, New York (1971) p. 316 Maldacena, J. (2005) “The Illusion of Gravity”, *Sci. Am.* Nov, 56 Mould, R. A. (2002) *Basic Relativity*, Springer, New York, Eq. 8.66 Mould, R. A. (2008) “Trans-Coordinate States", arXiv:0812.1937 ’t Hooft, G. (2008) “A Grand View of Physics", *Int’l J. Mod. Phys.* **A23** 3755, sect. 3; arXiv:0707.4572 [^1]: Department of Physics and Astronomy, State University of New York, Stony Brook, 11794-3800; richard.mould@stonybrook.edu; http://ms.cc.sunysb.edu/\~rmould [^2]: A region surrounded by flat space will not conserve energy and momentum if **no** coordinates are chosen in the region, or if certain discontinuous coordinates are chosen in the region. 
Here again conservation depends on a coordinate choice or on the choice of a transformation group.
--- abstract: 'We examine the physics content of fragmentation functions for inclusive hadron production in a quark jet and argue that it can be calculated in low energy effective theories. As an example, we present a calculation of $u$-quark fragmentation to $\pi^+$ and $\pi^-$ mesons in the lowest order in the chiral quark model. The comparison between our result and experimental data is encouraging.' address: | $^{1}$Center for Theoretical Physics, Laboratory for Nuclear Science, and Department of Physics\ Massachusetts Institute of Technology, Cambridge, Massachusetts 02139–4307\ $^{2}$Department of Physics and Atmospheric Science\ Drexel University, Philadelphia, PA 19104\ [ ]{} author: - 'Xiangdong Ji$^{1}$ and Zheng-Kun Zhu$^{1,2}$' date: 'MIT-CTP-2259.     Submitted to: [*Phys. Rev. Lett.*]{}     November 1993' title: 'QUARK FRAGMENTATION FUNCTIONS IN LOW-ENERGY CHIRAL THEORY[^1]' --- Despite the lack of a rigorous proof, many believe that the color charges in Quantum Chromodynamics (QCD) are permanently confined. The building blocks of QCD, quarks and gluons, cannot emerge as asymptotic states of the theory and thus are not directly detectable in an experiment. Rather, traces of energetic quarks and gluons in a hard collision manifest themselves as jets of hadrons with highly correlated momenta. Since their discovery in 1975, jets have become bread-and-butter physics at high-energy colliders. Parton fragmentation refers to the process of converting high-energy, colored quarks and gluons produced in a hard scattering into the hadron jets observed in detectors. Undoubtedly, this process involves QCD physics at many different scales and is rather complicated. However, important developments in perturbative QCD beginning in the early 1980s, coupled with rich experimental data taken at high-energy colliders, have taught us a great deal about what goes on in the fragmentation process [@MUE1]. 
In a modern view, parton fragmentation involves three key concepts: separation of short and long distance physics (factorization theorem or assumption), perturbative evolution of partons from high to low virtualities (parton shower), and non-perturbative fragmentation of partons with virtuality of order 1 GeV to hadrons (hadronization). While the first two subjects can be treated systematically in perturbation theory, the last one is intrinsically non-perturbative and is difficult to study directly in QCD. In the past, phenomenological models, such as the Feynman-Field model [@FF] or the Lund string model [@AND], have been used to describe hadronization in Monte Carlo simulations. Except for heavy quarks [@XXX], little progress has been made on understanding fragmentation physics from the fundamental theory. In this Letter we attempt to study hadronization from a low energy effective theory, focusing on calculating fragmentation functions for inclusive hadron production. In order for the reader to understand the context of our calculation and to dispel possible doubts over its relevance, we begin with inclusive hadron production in $e^+e^-$ annihilation, for which a factorization theorem can be proved rigorously in perturbative QCD [@MUE2; @CS1]. The theorem asserts that in the leading order in the hard momentum the inclusive hadron is produced by fragmentation of a [*single*]{} quark without the influence of others (independent jet fragmentation). Consequently, the fragmentation functions, which describe hadron distributions in the jet, can be expressed in terms of the matrix elements of the quark field operator alone. If similar factorization theorems can be proved for other processes, the same functions appear in the relevant hadron-production cross sections. Like parton distribution functions, the parton fragmentation functions are scale dependent, and the scale evolution is governed by renormalization group equations [@MUE2]. 
At low-energy scales, the fragmentation functions contain no large momenta and should be calculable in low-energy models. To illustrate this, we consider pion production in a quark jet using the chiral quark model of Manohar and Georgi [@GM]. The tree-level result for $\pi^+$ and $\pi^-$ production in a $u$-quark jet shows an impressive similarity with the EMC data when evolved to appropriate energy scales. The higher-order corrections can be taken into account systematically in a chiral expansion. To begin our discussion, we consider the hadron tensor for inclusive hadron production in $e^+e^-$ annihilation, $$\hat W_{\mu\nu} = {1\over 4\pi}\sum_X \int d^4\xi e^{iq\cdot \xi} \langle 0|J_\mu(\xi)|H(P)X\rangle \langle H(P)X|J_\nu(0)|0\rangle \label{W1}$$ where $q$ is the momentum of a time-like virtual photon, $P$ is the momentum of the observed hadron $H$, and $X$ represents the other unobserved hadrons and is summed over. In the following discussion, we choose a special coordinate system defined by two light-cone vectors $p={\cal P}(1,0,0,1)$ and $n=1/(2{\cal P})(1,0,0,-1)$ with $p\cdot n=1$, in which the hadron and photon momenta are collinear: $P=p + nM^2/2 $, $q=p/z + \nu n$. In the deep-inelastic limit ($Q^2 = q^2 \rightarrow \infty$, $\nu=P\cdot q \rightarrow \infty$, and ${2\nu /Q^2 } = z = $ finite), the factorization theorem guarantees that the leading contribution to the hadron tensor, neglecting the calculable perturbative corrections, comes from the diagrams in Fig. 
1[@MUE2], $$\begin{aligned} \hat W_{\mu\nu} & = & 3\sum_a e_a^2 (\hat f_1^a(z,Q^2) +\hat f_1^{\bar a}(z, Q^2))/z^2 \nonumber\\ &\times& \left[z(-g^{\mu\nu} + q^\mu q^\nu/q^2) - 2/\nu(P^\mu-{z\over 2}q^\mu) (P^\nu-{z\over 2}q^\nu)\right] \label{W2}\end{aligned}$$ where $a$ sums over quark flavors and $$\hat f_1(z, \mu^2) = {1\over 4}z\int {d\lambda \over 2\pi} e^{-i\lambda/z} \langle 0 | {\mathrel{\mathop{n\!\!\!/}}}_{\alpha\beta}\psi_\beta(0)|H(P)X\rangle \langle H(P)X|\bar \psi_\alpha(\lambda n)|0\rangle \label{hf1}$$ is the quark fragmentation function represented by a quark-hadron four-point vertex in Fig. 1. In Eq. (\[hf1\]), $\mu^2$ labels the renormalization-point dependence and the light-cone gauge $A\cdot n =0$ has been used (otherwise a gauge link has to be explicitly included to ensure gauge invariance). Except for a scale dependence, Eq. (\[W2\]) is the naive parton-model result proposed by Feynman before QCD [@F]. It resembles a similar prediction for the hadron structure functions in deep-inelastic scattering, which can be justified by the operator product expansion in QCD. However, the validity of Eq. (\[W2\]) in QCD is somewhat more remarkable, for the color charges are known to be confined at a scale of order 1 fm, at which something must happen as the quark and antiquark keep flying apart. The factorization theorem says that whatever mechanism it is, it does not affect the hadron content of a jet. To better understand this independent parton fragmentation picture, we recall the way the gluon exchanges between the quark and antiquark jets are treated when the factorization theorem is proved [@CSS]. There are two types of gluon exchange which are important in the so-called leading diagrams. The first is the collinear gluons emitted by a quark, with their momenta parallel to the other quark. These gluons are longitudinally polarized, and are summed into a gauge link to make Eq. (3) gauge invariant. 
The second is the soft gluons which either are emitted by the jets or link the two at large separations. With use of the soft-gluon approximation and the Slavnov-Taylor identities they can be factorized, and are subsequently cancelled by unitarity when final states, excluding the observed hadron, are summed. Thus, it appears that the study of inclusive hadron production in $e^+e^-$ annihilation reduces to evaluating Eq. (\[hf1\]). Notice the close similarity of $\hat f_1(z)$ to the quark distribution function $f_1(x)$ in a hadron of momentum $P$, $$f_1(x, \mu^2) = {1\over 2}\int {d\lambda \over 2\pi} e^{i\lambda x} \langle P|\bar \psi(0) {\mathrel{\mathop{n\!\!\!/}}} \psi(\lambda n)|P \rangle ~~. \label{f1}$$ Our experience in calculating the latter provides us with valuable insights into calculating the former: First, the fact that the spectators $X$ in Eq. (3) are colored states is not a problem in a real calculation. Similar colored intermediate states occur in the distribution functions if a complete set of states is inserted in between the quark fields. \[In the MIT bag model, these are di-quark states.\] Second, the fragmentation functions at $\mu^2$ less than 1 GeV$^2$ involve only low-energy scales and are entirely dominated by non-perturbative QCD physics. As such, techniques useful for calculating the parton distributions can in principle be used to calculate the fragmentation functions. The explicit sum over the spectators cannot be eliminated in the fragmentation functions, even if one is only interested in their moments. This renders lattice QCD and QCD sum rule methods largely useless. However, the low-energy chiral theory is an exception. One version of the theory particularly useful here is the chiral quark model of Manohar and Georgi [@GM], which is an effective theory of QCD at scales between $\Lambda_\chi =4\pi f_\pi$, the chiral symmetry breaking scale, and $\Lambda_{\rm QCD}$, the QCD confinement scale. 
Emergence of such a theory at low energy can be argued as follows: As the energy scale decreases below $\Lambda_\chi$, the instability of the perturbative QCD vacuum leads to spontaneous breaking of the flavor $SU(3)_L\times SU(3)_R$ chiral symmetry, creating an octet of Goldstone bosons. Meanwhile, the quarks and gluons acquire their constituent masses through non-zero vacuum condensates. The interactions between the constituent quarks and gluons and the Goldstone bosons are determined by chiral dynamics and are controlled by an expansion in the small parameters $m_\pi^2/\Lambda_\chi^2$ and $k^2/\Lambda_\chi^2$, where $k$ is a small momentum. Matching the QCD quarks above $\mu=\Lambda_\chi$ and the constituent quarks below deserves some explanation. First of all, to find the exact matching conditions one has to solve both QCD and the effective theory around $\Lambda_\chi$ completely. Second, an effective theory is effective only if the matching conditions are simple. In this study, we make the most naive assumption that a QCD quark [*is*]{} just a constituent quark at the matching scale. This is motivated by the successes of similar assumptions used in other constituent quark models. We also note that the matching conditions should be used in conjunction with the way the effective theory is treated. We will return to this point later when we choose a cut-off for ultraviolet momenta. For simplicity, we will neglect gluon fragmentation at the low energy scale, because in the effective theory gluons interact weakly with quarks, $\alpha_s^{\rm eff}\sim 0.3$. This, of course, means that our result is unreliable at small $z$, where hadrons are mostly produced by bremsstrahlung gluons. In particular, the so-called hump-back plateau in hadron spectra, caused by intrajet coherence effects, is beyond our scope [@MSDK]. 
Thus to the leading order, the effective lagrangian for quarks and Goldstone bosons is $${\cal L} = \bar \psi (i {\mathrel{\mathop{D\!\!\!/}}} + {\mathrel{\mathop{V\!\!\!/}}} -m) \psi + g_a\bar \psi {\mathrel{\mathop{A\!\!\!/}}} \gamma_5 \psi \label{lagr}$$ where $\psi$ carries implicit color, flavor, and spin indices. The vector and axial-vector fields are defined as, $$(V_{\mu}, A_{\mu}) = {i\over 2}(\xi^\dagger\partial_\mu \xi \pm \xi\partial_{\mu} \xi^{\dagger})$$ where $\xi = \exp(i\pi/f_\pi)$ and $\pi = \sum_a \pi^a T^a$ with $f_\pi=93$ MeV and Tr$T^aT^b=\delta^{ab}/2$. Under the chiral transformation: $$\begin{aligned} \Sigma (= \xi^2) &\rightarrow& L\Sigma R^\dagger, \nonumber \\ \xi & \rightarrow & L\xi U^\dagger=U\xi R^\dagger, \nonumber \\ \psi & \rightarrow & U\psi,\end{aligned}$$ where $L$ and $R$ are group elements of $SU(3)_L\times SU(3)_R$, ${\cal L}$ is invariant. Let us first consider the $\pi^+$ production from a $u$-quark jet. The momentum-space Feynman rules for $\hat f_1(z)$ can be derived easily by rewriting $\hat f_1$ as, $$\hat f_1(z, \mu^2) = {1\over 4}z\int {dk^-d^2k_\perp \over (2\pi)^4} \int d^4\xi e^{-i\xi\cdot k} \sum_X \langle 0| \gamma^+_{\alpha\beta} \psi_\beta(0)|H(P)X\rangle \langle H(P)X|\bar \psi_\alpha(\xi)|0\rangle \label{hf11}$$ with $zk^+=p^+$, and each matrix element is transformed to the interaction picture [@COLLINS]. The lowest order diagram is shown in Fig. 2. A simple calculation yields, $$\hat f_1^{\pi^+}(z) = {1\over 2z} g_a^2 \int {d k_\perp^2 \over (4\pi f_\pi)^2}$$ In contrast to logarithmic theories, e.g., QED and QCD, the pion transverse momentum integration has no collinear singularity. In the large-momentum region, it diverges quadratically. In our calculation, we cut off this type of integration at the scale $\Lambda_{\chi}$, beyond which the effective theory ceases to be valid. Of course, the result depends sensitively on the way the cutoff is imposed, more so than in logarithmic theories. 
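The quadratic cutoff sensitivity just described is easy to make concrete. The short sketch below is our own illustration, not part of the original calculation: it evaluates the $k_\perp^2$ integral above numerically with a sharp cutoff $k_\perp^{\rm max}$, assuming $g_a = 0.75$ for definiteness.

```python
import numpy as np

F_PI = 0.093                     # pion decay constant, GeV
LAMBDA_CHI = 4 * np.pi * F_PI    # chiral symmetry breaking scale, ~1.17 GeV

def f1_piplus(z, g_a=0.75, k_max=LAMBDA_CHI, n_pts=2000):
    """Lowest-order favored fragmentation function,
    (1/2z) g_a^2 * Int_0^{k_max^2} dk_perp^2 / (4 pi f_pi)^2,
    with the k_perp^2 integral done by a (here exact) rectangle rule."""
    k2, dk2 = np.linspace(0.0, k_max**2, n_pts, endpoint=False, retstep=True)
    integral = np.sum(np.ones_like(k2) / LAMBDA_CHI**2) * dk2
    return g_a**2 / (2.0 * z) * integral

# With k_max = Lambda_chi the integral equals 1, so f1 = g_a^2/(2z);
# halving the cutoff reduces the result by a factor of 4, reflecting
# the quadratic divergence of the transverse-momentum integration.
```

For $z = 0.5$ and the full cutoff this gives $g_a^2 = 0.5625$, while $k_\perp^{\rm max} = \Lambda_\chi/2$ gives a quarter of that, which is the cutoff sensitivity the text warns about.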
However, we believe that the arbitrariness is cancelled when a choice is used in conjunction with the corresponding matching conditions. Here, we make the simplest choice, $k_\perp^{\rm max}=\Lambda_\chi$. Thus, $$f_1^{\pi^+}(z, \Lambda_{\chi}^2) = {1\over 2z}g_a^2$$ where $\Lambda_\chi =4\pi f_\pi$ has been used. To confront this with experimental data, we must evolve it to appropriate scales using the Altarelli-Parisi equation [@FIELD]. In Fig. 3, we show a comparison between the evolved result ($g_a=0.75$) and the data from the EMC measurement [@EMC]. Considering the simplicity of the approach, we think the agreement is impressive. A more intricate case is $\pi^-$ production from the $u$-quark jet, which is an unfavored process. In the lowest order, the $\pi^-$ has to be produced together with a $\pi^+$ meson. There are two ways to accomplish this: The first is a sequential emission of pions through the axial-vector coupling, and the second is a sea-gull emission through the vector coupling. The two processes interfere as shown in Fig. 4. The sign of the interference term is completely determined by the sign of the vector coupling, which in turn is fixed by chiral symmetry. The resulting expression for $\hat f_1(z)$ is complicated and we evaluate it numerically. A few salient features can be noted briefly. First, there is a $1/z$ divergence in the longitudinal momentum integration of the $\pi^+$. A natural cut-off for this is $m_\pi/{\Lambda_\chi} \sim 0.1 $, the mass of the pion over the scale of the virtuality of the quark. This is because pions cannot be produced with a $z$ smaller than this due to energy conservation. Second, the numerical result shows a strong cancellation between the diagrams in Figs. 4a and 4b and the interference diagrams in Figs. 4c and 4d. The cancellation is maximal if $g_a=1/\sqrt{2}$, i.e., the vector coupling is the square of the axial-vector coupling. In Fig. 5, we show the EMC data and our result for $g_a=1.0$. 
The fact that a slightly larger $g_a$ is needed to reproduce the experimental data reflects the oversimplified matching conditions we use. Finally, we present a result for the chiral-odd fragmentation function $\hat e(z)$, for which no data are available. In Ref. [@JAFFEJI], Jaffe and Ji pointed out the importance of this fragmentation function in measuring the transversity distribution of the nucleon in deep-inelastic scattering. The QCD definition for $\hat e(z)$ is $$\hat e_1(z, \mu^2) = {1\over 4M}z\int {d\lambda\over 2\pi} e^{-i\lambda/z} \langle 0| \psi_\alpha(0)|H(P)X\rangle \langle H(P)X|\bar \psi_\alpha(\lambda n)|0\rangle$$ where $M$ is taken to be the nucleon mass. A simple calculation in the chiral quark model yields, $$\hat e(z) = z\hat f_1(z) {m_q\over M} \sim {1\over 3} z\hat f_1(z)$$ where $m_q$ is the constituent quark mass. Note that this relation is only true at the scale $\Lambda_\chi$, beyond which $\hat e(z)$ evolves in a much more complicated way (twist-three) [@JI]. However, as a rough estimate for $\hat e(z)$, one can take (12) to hold beyond the model, using it in conjunction with the experimental data for $\hat f_1(z)$. To summarize, we argue that the fragmentation functions can be calculated in low-energy effective theories. As an example, we show how the pion fragmentation functions are calculated in the chiral quark model. The results seem encouraging. A study of other fragmentation functions, including $K^\pm$ and possibly extending to $P(\bar P)$ production, will be presented elsewhere. We thank J. Collins and M. Strikman for discussions on factorization theorems and A. Manohar for discussions on the chiral quark model. Z. Zhu would like to thank Professor Da Hsuan Feng for his constant encouragement and support. A. H. Mueller, [*Perturbative Quantum Chromodynamics*]{}, World Scientific, 1989. R. D. Field and R. P. Feynman, Phys. Rev. D15 (1977) 2590. B. Andersson, G. Gustafson, G. Ingelman, and T. Sjostrand, Phys. Rep. 
97 (1980) 31. C. Peterson et al., Phys. Rev. D27 (1983) 105; E. Braaten, K. Cheung, and T. C. Yuan, Phys. Rev. D48 (1993) 5049; R. L. Jaffe and L. Randall, MIT CTP-preprint No. 2189. A. H. Mueller, Phys. Rep. 73 (1981) 237. J. Collins and D. Soper, Nucl. Phys. B185 (1981) 172. A. Manohar and H. Georgi, Nucl. Phys. B234 (1984) 189. R. P. Feynman, [*Photon-Hadron Interactions*]{}, Benjamin-Cummings, 1972. J. C. Collins, D. E. Soper, and G. Sterman, in [*Perturbative Quantum Chromodynamics*]{}, ed. by A. H. Mueller, World Scientific, 1989. A. H. Mueller, Phys. Lett. 104B (1981) 161; Yu. L. Dokshitzer, V. S. Fadin, and V. A. Khoze, Z. Phys. C15 (1982) 325. J. Collins, Nucl. Phys. B396 (1993) 161. R. D. Field, [*Applications of Perturbative QCD*]{}, Addison-Wesley, 1989. M. Arneodo [*et al.*]{} (the EMC collaboration), Nucl. Phys. B321 (1989) 541; J. Aubert [*et al.*]{} (the EMC collaboration), Phys. Lett. B160 (1985) 417. R. L. Jaffe and X. Ji, Phys. Rev. Lett. 71 (1993) 2547. X. Ji, MIT-CTP preprint No. 2219, to appear in Phys. Rev. D, 1994. [^1]: This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under contract \#DE-AC02-76ER03069 (MIT) and the National Science Foundation (Drexel).
--- abstract: 'We study resonant tunneling through a layered medium with a negative index medium (NIM) slab as a constituent layer. We demonstrate large delays in transmission mediated by the surface and the guided modes of the structure with low losses. We show how important it is to include NIM dispersion for correct assessment of the nature and magnitude of the delay. We also point out the role of NIM absorption for the feasibility of such compact delay devices.' address: | $^1$ Indian Institute of Technology Kharagpur, Kharagpur 721302, India\ $^{2}$ School of Physics, University of Hyderabad, Hyderabad 500046, India author: - 'Dheeraj Golla$^1$, Subimal Deb$^2$ and S Dutta Gupta$^2$' title: Competition between structural and intrinsic dispersion in delay through a left handed medium --- Introduction ============ In recent years there has been a great deal of interest in negative index materials (NIMs), which can exhibit exotic properties [@veselago1968; @shalaevbook; @shalaev2007; @sar]. The rich physics of these materials (not occurring in nature) was discussed theoretically by Veselago [@veselago1968], though it was their potential application for beating the diffraction limit [@pendry2000] and their experimental realization in the microwave range [@shelby1] that triggered the vast current activity. Such materials (also known as metamaterials or left handed materials) have now been fabricated in other wavelength domains, right up to the visible range [@shalaev2007; @sar]. Applications of these metamaterials now range from super-lensing and super-resolution [@pendry2000; @fang] to lasing spasers [@zheludev] and optical nanocircuits [@engheta], and from invisibility cloaks to electromagnetically induced transparency and slow or stopped light [@sdgprb; @zheludev-arxiv], etc. 
The real challenge now is to fabricate a low-loss metamaterial with a high figure of merit (FOM $=-Re(n)/Im(n)$, where $n$ is the refractive index of the metamaterial) and to broaden the frequency domain where both the permittivity and the permeability are negative. It is also important to push this domain to even higher frequencies while maintaining low losses. In the context of slow light using a NIM, the role of dispersion cannot be overstated. As pointed out by Veselago, a lossless NIM is essentially dispersive [@veselago1968]. The important role played by dispersion has been noted by others in the context of waveguides [@kivshar2003] and especially with reference to cavity QED applications [@zhu1; @fleischhauer2005; @xu2009]. For any finite structure, the frequency dependence of its important characteristics (e.g., the transmission through it) emerges from two sources [@sdg1998]. The first is the material dispersion of its constituents, while the second is due to the boundary conditions. For example, the transmission through an empty Fabry-Perot (FP) cavity and through the same cavity filled with a dispersive material can be quite different, and both can differ from the case of propagation through the bulk dispersive sample. We refer to the first source as material or intrinsic dispersion, while the second is labelled structural dispersion. In this paper our goal is to bring out the salient features of how these sources of dispersion affect the transmission through a NIM guide in a resonant tunnelling (RT) geometry [@pendry2008; @sdg2009]. Unfortunately, many of the papers on NIM waveguides do not address the issue of material dispersion when calculating the overall dispersion features [@kong2003]. In many others the system is investigated at a particular wavelength, concentrating on structural dispersion only, thus avoiding all material-dispersion-related issues [@jose]. Our choice of the RT geometry is motivated by several facts. 
It was shown recently that light can be slowed down using a gap plasmon guide in the RT configuration [@sdg2009]. Very recently a multilayered metal/dielectric structure in the same configuration was studied experimentally to show enhanced transmission mediated by the modes of the structure [@pendry2008]. We show that a NIM guide in the RT configuration can lead to significant delays provided the losses are low. Our calculations are carried out with experimental data for the permittivity and the permeability of the NIM [@dolling2006], albeit with the approximation of low losses. We also assume a homogeneous and isotropic NIM. We study both the $\sin{}$/$\cos{}$ and the $\sinh{}$/$\cosh{}$ types of modes of the NIM guide [@kong2003]. We refer to the former as guided modes and to the latter as plasmon-like modes. Indeed, the latter (TE polarized) resemble the TM polarized surface plasmon polaritons of a thin metal film or a gap plasmon guide. We correlate the angular locations of the peaks in resonant transmission with the corresponding modes of a bare NIM guide. Alongside calculations retaining both sources of dispersion, we present results in which material dispersion is suppressed. We show that such a step can lead to wrong conclusions as regards the nature and the magnitude of the delay. In fact, there is a danger of predicting superluminal transit while it is actually subluminal. It is believed that NIMs have to be essentially lossy [@stockman2007; @kinsler2008prl]. In order to have some quantitative idea of the effect of absorption on the delay, we assume a causal response of the NIM medium. We show that even in the presence of a much improved magnetic response it may be difficult to obtain high-Q guided and surface modes in the optical and near-IR domains. Our study thus reiterates the important observation of Veselago in the context of absorption and dispersion of a realistic metamaterial. 
The structure of the paper is as follows. In section 2, we consider a generic negative index material slab in order to reveal the roles of intrinsic and structural dispersion. We show that there can be qualitative differences in the group delay if the material dispersion is suppressed. In section 3, we pick the NIM parameters from a recent experiment [@dolling2006], albeit with the assumption of small damping. We study the group delay through a structure supporting resonant tunnelling. After a brief discussion of the NIM dispersion and the resulting delay in the bulk, we investigate the plasmon-like modes of the structure. We show that the excitation of such modes can lead to enhanced delay as compared to FP modes, albeit at the expense of lower transmission. In section 4, we discuss the guided modes of the same structure, except with lower values of the refractive index of the spacer layers. As in the case of the plasmon-like modes, these guided modes are also shown to lead to enhanced delay. In section 5, we fit the experimental data [@dolling2006] for the magnetic resonance to a causal response and vary the relevant parameters in order to obtain better features of the NIM. We show that even a substantial improvement of the magnetic response fails to excite true high-Q guided and surface modes in the NIM guide. Finally, we summarize the main results in the Conclusions. We make an important observation on how the large damping in present-day metamaterials can destroy the resonant tunnelling features, stressing the dire need to fabricate ‘transparent’ metamaterials. Group delay in a FP cavity containing a generic NIM =================================================== Consider first the propagation of an optical pulse through a bulk NIM sample. 
Let the permittivity $\epsilon$ and the permeability $\mu$ of the NIM be given by the following expressions [@sdgprb; @kroll2000] $$\epsilon(\omega)=\frac{\omega^2-\omega_p^2}{\omega^2}, ~~ \mu(\omega)=\frac{\omega^2-\omega_b^2}{\omega^2-\omega_0^2+i\Gamma\omega} .\label{eq:1}$$ Recall that the group delay for a segment of length $d$ in the bulk is given by $$\tau = \frac{d}{v_g} = \left( \frac{d}{c} \right)n_g , ~ n_g = n(\omega) + \omega \frac{\partial n}{\partial \omega}, \label{eq:2}$$ where $n_g$ and $n$ are the group index and the refractive index of the material, respectively. It can be easily seen from (\[eq:2\]) that the group index $n_g$ is the same as the normalized delay $\tau/(d/c)$. Since the source of the delay in (\[eq:2\]) is the material dispersion, we will refer to this as delay due to intrinsic dispersion. From a different viewpoint, the delay $\tau$ through a segment of length $d$ can be linked to the frequency derivative of the phase $\phi_t$ (accumulated over the distance $d$) of the (amplitude) transmission coefficient $t$ ($ = |t|e^{i\phi_t} $ ) [@wigner1955] as $$\tau=\frac{\partial \phi_t}{\partial \omega}|_{\omega=\omega_c},\label{eq:3}$$ where $\omega_c$ is the carrier frequency of the pulse. $\tau$ in (\[eq:3\]) is also referred to as the Wigner phase time. As mentioned earlier, for finite structures there is another source of frequency dependence of important parameters like the transmission and reflection. For example, if one considers a slab of width $d$ of a material with some refractive index, embedded in a medium with another index, the characteristic signatures of the Airy resonances of the FP cavity will be imprinted on the delay features. We will refer to such features (with neglect of material dispersion) as delay due to pure structural dispersion. The overall delay emerges from an interplay and competition of these two distinct sources of dispersion. 
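The competition between the two sources can be separated numerically. The sketch below is our own illustration, not the paper's code: it assumes normal incidence and the textbook Airy transmission formula for a single slab in vacuum (the paper's figures also use an angle scan of the full structure), evaluates the Wigner phase time (\[eq:3\]) by finite differences, and compares the fully dispersive response (\[eq:1\]) with a calculation in which $\epsilon$ and $\mu$ are frozen at the carrier frequency, so that only structural dispersion survives. The Smith parameters quoted later ($f_p=12$ GHz, $f_b=6$ GHz, $f_0=4$ GHz, $\Gamma/\omega_0=10^{-3}$) and $d=10$ cm are assumed.

```python
import numpy as np

C = 0.3  # speed of light in m/ns (frequencies are in GHz)

def eps_mu(f, fp=12.0, fb=6.0, f0=4.0, gamma=4e-3):
    """Eq. (1) written in terms of f in GHz; gamma is Gamma/(2*pi)."""
    eps = (f**2 - fp**2) / f**2
    mu = (f**2 - fb**2) / (f**2 - f0**2 + 1j * gamma * f)
    return eps, mu

def n_index(eps, mu):
    """Refractive index, negative branch when Re(eps), Re(mu) < 0."""
    n = np.sqrt(eps * mu + 0j)
    if np.real(eps) < 0 and np.real(mu) < 0:
        n = -n
    return n

def t_slab(f, eps, mu, d=0.1):
    """Airy amplitude transmission of a slab in vacuum, normal incidence."""
    n = n_index(eps, mu)
    Z = np.sqrt(mu / (eps + 0j))           # relative impedance
    r = (Z - 1.0) / (Z + 1.0)              # vacuum -> slab Fresnel coefficient
    delta = n * (2 * np.pi * f) * d / C    # one-pass phase thickness
    return (1 - r**2) * np.exp(1j * delta) / (1 - r**2 * np.exp(2j * delta))

def phase_time(f, d=0.1, df=1e-6, frozen=False):
    """Wigner phase time d(phi_t)/d(omega), in ns, by finite differences.
    frozen=True suppresses material dispersion: eps, mu are held at
    their values at the carrier frequency f."""
    eps0, mu0 = eps_mu(f)
    phases = []
    for fq in (f - df, f + df):
        eps, mu = (eps0, mu0) if frozen else eps_mu(fq)
        phases.append(np.angle(t_slab(fq, eps, mu, d)))
    return (phases[1] - phases[0]) / (2 * np.pi * 2 * df)
```

At $f = 5$ GHz the full calculation gives a positive (subluminal) delay of several $d/c$, while the frozen-response calculation gives a negative delay, which is the qualitative discrepancy between structural-only and full dispersion discussed in this section.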
It is thus clear that neither source of dispersion can be ignored for a correct assessment of the delay. Irrespective of the nature of the delay, it can be evaluated by calculating the phase of the amplitude transmission coefficient and using the expression for the Wigner phase time. ![(a) Real (solid) and imaginary (dotted) parts of the refractive index of the NIM, (b) the normalized delay for a bulk sample. The parameters chosen are: $f_p = 12 $ GHz, $f_b = 6 $ GHz, $f_0 = 4 $ GHz and $\Gamma/\omega_0 =10^{-3}$.[]{data-label="fig:1"}](figure1){width="11cm"} The group delay of a pulse passing through a NIM FP cavity was studied in detail and an analytical expression for the delay was derived [@sdgprb]. It was shown that such a system can lead to large group delays at frequencies corresponding to the Airy resonances. A closer inspection of the expression for the Wigner phase time reveals that the delay has a very pronounced dependence on the frequency derivatives of the permittivity and the permeability. It is thus expected that neglect of material dispersion would lead to quite erroneous conclusions, even as regards the sign of the delay. In other words, one may wrongly predict superluminal transit, while it is actually subluminal. In order to have a quantitative assessment of the contributions of intrinsic and structural dispersion to the delay, we first study the dispersive properties of the bulk material given by (\[eq:1\]). Parameters were taken from the work of Smith [@kroll2000], namely $f_p = 12 $ GHz, $f_b = 6 $ GHz, $f_0 = 4 $ GHz and $\Gamma/\omega_0 =10^{-3}$, where $f_{p,b,0}=\omega_{p,b,0}/(2\pi)$. The frequency dependence of the real and imaginary parts of the refractive index is shown in figure \[fig:1\]a. In the bottom panel (figure \[fig:1\]b) we have plotted the time taken for propagation through a distance $d$ in the bulk sample. 
It is clear that over the specified range of frequencies the bulk material exhibits normal dispersion which leads to a delay of the pulse. If propagation is through a slab of such a ![(a) Intensity transmission coefficient $T$ (top) and the normalized delay $\tau/(d/c)$ (bottom) for a FP cavity as functions of frequency; (b) the same as functions of angle of incidence $\theta_i$ for $f=5$ GHz, $\Gamma/\omega_0 =10^{-3}$ and $d=10$ cm. The bottom right panel compares the normalized delays with (solid) and without (dashed) material dispersion.[]{data-label="fig:2"}](figure2a_2b "fig:"){width="12cm"} material, embedded in air, the FP resonances are imprinted on this background, leading to large delays at these resonances. Now we demonstrate how suppression of the material dispersion can lead to completely erroneous conclusions. Consider the same FP slab, but now with frequency-independent $ \epsilon = \epsilon_{f=5{\rm GHz}} $, $ \mu = \mu_{f=5{\rm GHz}}$. For reference we have chosen the material parameters at an intermediate frequency, namely, $f = 5 $ GHz. The Airy resonances of this FP cavity are shown in the top panel of figure \[fig:2\]a. The bottom panel shows the corresponding normalized delay. As can be seen from this figure, the structural dispersion of the slab alone leads, for the said parameters, to negative delay characteristics, which is just the opposite of the actual situation. The same results can be verified from the angle scan of the transmission and the normalized delay at a particular frequency of incident light. In figure \[fig:2\]b we show the dependence of the intensity transmission $T$ and the normalized phase time $\tau/(d/c)$ as functions of the angle of incidence $\theta_i$ at $f=5$ GHz. For each angle of incidence we evaluated the complex amplitude transmission coefficients at two infinitesimally close frequencies $\omega \pm \Delta\omega$ and extracted the corresponding phases.
The ratio of the differences in the phases to that of the frequencies leads to the phase time at $\omega$. Results for the cases with (solid line) and without (dashed line) intrinsic dispersion are shown in the lower panel of figure \[fig:2\]b. Material dispersion was suppressed by assuming $\epsilon(\omega \pm \Delta\omega) = \epsilon(\omega) $, $\mu(\omega \pm \Delta\omega) = \mu(\omega) $. Again it is clear that neglect of dispersion can lead to the incorrect result of negative delay. ![Schematics of the layered structure with the central NIM layer sandwiched between two spacer layers and high-index prisms. All other materials except for the NIM are assumed to be non-magnetic.[]{data-label="fig:3"}](figure3){width="7cm"} Delay in resonant tunnelling mediated by plasmon-like modes =========================================================== In this section we consider a structure which can lead to resonant tunnelling via the excitation of the surface (guided) modes. The structure is assumed to be symmetric ($\epsilon_i=\epsilon_f, ~ \epsilon_1=\epsilon_3, ~ d_1=d_3$) with the central layer made up of NIM (see figure \[fig:3\]). In order to establish a proper connection with current experimental activities, and also to test the feasibility of our proposal, we pick the NIM parameters $\epsilon(\omega)$ and $\mu(\omega)$ from the experiment of Dolling [@dolling2006]. As mentioned earlier, despite the huge volume of literature on novel metamaterials, very few works give the complete dispersion data (real and imaginary parts of both $\epsilon$ and $\mu$) for the NIM [@valentine2008; @zhang2005; @kildishev2006]. Most of these data are extracted from transmission studies. The work of Dolling reported a figure of merit (FOM) of about 3 at $\lambda \sim 1.4 \mu$m, where $Re(n)=-1$, much needed for perfect lensing applications. We used the data digitized from their results for $\epsilon$ and $\mu$ with two significant changes.
It is understood that a thin layer (120 nm in their experiment) of metamaterial is highly anisotropic. In our calculations we assumed the NIM material to be isotropic and homogeneous. Hopefully, 3D near-isotropic metamaterials will be realized in the near future. The second approximation concerns the losses in the metamaterial. We later comment on how the actual losses of currently available metamaterials can wash out all the interesting effects reported here. This underlines the pressing need for truly low-loss metamaterials. Instead of the actual experimentally observed losses, we assume a small loss of $i\gamma$ at the working frequency $\overline{\omega}$ for both permittivity and permeability. Thus at the specified frequency for the NIM we write $$\begin{aligned} \epsilon_2(\overline{\omega}) &= Re(\epsilon_d(\overline{\omega})) + i\gamma, \\ \mu_2(\overline{\omega}) &= Re(\mu_d(\overline{\omega})) + i\gamma, \label{eq:epsmugamma} \end{aligned}$$ where $\epsilon_d(\overline{\omega})$ and $\mu_d(\overline{\omega})$ are taken from the work of Dolling [@dolling2006]. The motivation for introducing the small damping is to understand, at least qualitatively, the effect of damping on the resonant tunnelling through a NIM guide. Henceforth, in this and the next section, we consider only the low-loss NIM guide with properties given by (\[eq:epsmugamma\]) excited by TE-polarized light. ![(a) Intensity transmission coefficient T (b) phase of transmission (in units of $\pi$) and (c) normalized delay as a function of the angle of incidence. The parameters used are $ \lambda = 1.425\mu m, \epsilon_i = \epsilon_f = 6.145, \epsilon_1 = \epsilon_3 = 2.25, d_1 = d_3 = 1 \mu m, d_2 = 2 \mu m$ for $\gamma = $ 0.0001 (dashed), 0.001 (solid) and 0.01 (dotted). (Figure \[fig:4\]a inset) Plot of $(k_2^2-k_1^2)d^2$ as a function of $\alpha_2 d$ as in the work of Wu [@kong2003].
(Figure \[fig:4\]c inset) Normalized delays with (solid lines, values reduced by a factor of 700) and without (dashed lines) material dispersion for $\gamma=0.001$. []{data-label="fig:4"}](figure4a_4c){width="12cm"} In order to have the plasmon-like modes of the NIM guide we choose the spacer layer refractive index $n_1$ larger than $|Re(n_2)|$ at the working wavelength. In particular, we choose the following system parameters: $ \lambda = 2\pi c/\overline{\omega} = 1.425\mu m, \epsilon_i = 6.145, \epsilon_1 = \epsilon_3 = 2.25, d_1 = d_3 = 1 \mu m, d_2 = 2 \mu m$. The thickness of the spacer layers was chosen so as to optimize the excitation of a given mode of a NIM guide. The results for the intensity transmission ($T$), phase of transmission ($\phi_t$) and corresponding normalized delay ($\tau/(d_T/c)$, $d_T=d_1+d_2+d_3$) as functions of the angle of incidence are shown in figure \[fig:4\] a, b and c, respectively. The dashed, solid and dotted curves are for three different values of $\gamma$, namely, $\gamma = $ 0.0001, 0.001 and 0.01, respectively. The nature of the peaks in figure \[fig:4\] a and c can be easily understood if one recalls the critical angles for the relevant interfaces at the specified wavelength. For the prism/silica interface the critical angle is about 37.24$^\circ$ while for the prism/NIM interface it is about 35.25$^\circ$. For $\theta_i>37.24^\circ$ waves are evanescent in silica as well as in the NIM slab. Thus the peak occurring at $\theta \sim 39^\circ$ can be associated with the surface-plasmon-like mode of the NIM guide. Such a mode is the dual of the TM-polarized surface plasmon polariton in a metal film [@kong2003]; it has been discussed in detail in [@kivshar2003; @kong2003] and even used for QED applications [@xu2009]. Following Wu [@kong2003] we now determine the symmetry of the mode occurring at $\theta_i=39.45^\circ$.
For this purpose we consider a symmetric NIM guide with width $d_2=2\mu m$ embedded in silica (with $\epsilon=2.25$). For such a guide with $\mu_1/\mu_2=-0.7487$ the symmetric (cosh-type) and anti-symmetric (sinh-type) mode dispersion curves are shown in the inset of figure \[fig:4\]a. From the parameters used in figure \[fig:4\]a, $(k_2^2-k_1^2)d^2=-15.7594$ (horizontal line in the inset of figure \[fig:4\]a). We obtain two values of $\alpha_2 d=(k_x^2-k_2^2)^{1/2} d$ corresponding to the symmetric (solid curve) and anti-symmetric (dashed curve) modes. For the symmetric mode $\alpha_2 d \sim 5.949$ (vertical line in the inset of figure \[fig:4\]a). The excitation angle corresponding to this value of $\alpha_2 d$ is 39.65$^\circ$ and is close to the value of $\theta_i$ from the RT data. Thus the resonance at $\theta_i=39.45^\circ$ in figure \[fig:4\]a is recognized as the symmetric plasmon-like mode. The peak close to $\theta_i=34.5^\circ$ corresponds to one of the FP modes of the layered medium, since waves are propagating both in silica as well as in the NIM layer. It is clear from figure \[fig:4\]b that the phase of the transmission coefficient undergoes qualitatively different (positive and negative) changes as one sweeps the angles through the FP or the plasmon-like resonances. The oscillatory (plasmonic) modes are associated with positive (negative) jumps in phase. However, both lead to large delays. It is also clear how increasing damping gradually erases the RT features. For values of NIM losses as in [@dolling2006], the transmission peak due to the ‘plasmonic’ mode gets washed out completely, though the enhanced delay properties survive. Of course, there is no use for a delayed signal if its peak amplitude is vanishingly small. FP modes and the associated delay still survive to some extent even for large losses.
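The graphical construction in the inset of figure \[fig:4\]a can be cross-checked by solving the mode condition directly. For the cosh-type (symmetric) TE mode with evanescent core fields, matching $E_y$ and $(1/\mu)\,\partial E_y/\partial x$ across the interfaces gives $(\alpha_2/\mu_2)\tanh(\alpha_2 d/2)=-\alpha_1/\mu_1$ with $\alpha_1^2=\alpha_2^2+(k_2^2-k_1^2)$; this is the standard slab-mode relation, not written out explicitly in the text. A simple bisection on this condition with the quoted values $\mu_1/\mu_2=-0.7487$ and $(k_2^2-k_1^2)d^2=-15.7594$ recovers $\alpha_2 d \simeq 5.95$ and the excitation angle $\simeq 39.65^\circ$:

```python
import math

# Quoted values: lambda = 1.425 um, d2 = 2 um, silica n1 = 1.5, prism
# eps_i = 6.145, mu1/mu2 = -0.7487, (k2^2 - k1^2) d^2 = -15.7594.
lam, d = 1.425, 2.0                      # um
C2 = -15.7594                            # (k2^2 - k1^2) d^2
mu1, mu2 = 1.0, 1.0/-0.7487              # silica is non-magnetic
k0d = 2*math.pi/lam*d
k1d = 1.5*k0d                            # k1 d in silica
k2d2 = k1d**2 + C2                       # (k2 d)^2

def g(u):
    """Symmetric (cosh-type) TE mode condition, u = alpha_2 d:
    (alpha2/mu2) tanh(alpha2 d/2) + alpha1/mu1 = 0."""
    a1d = math.sqrt(u*u + C2)            # alpha_1 d, real for u^2 > -C2
    return u*math.tanh(u/2)/mu2 + a1d/mu1

lo, hi = 4.5, 8.0                        # brackets the sign change of g
for _ in range(60):                      # bisection
    mid = 0.5*(lo + hi)
    if g(lo)*g(mid) <= 0:
        hi = mid
    else:
        lo = mid
u = 0.5*(lo + hi)                        # alpha_2 d of the symmetric mode

kxd = math.sqrt(k2d2 + u*u)              # kx d, since alpha2^2 = kx^2 - k2^2
theta = math.degrees(math.asin(kxd/(k0d*math.sqrt(6.145))))
print(u, theta)                          # ~5.949 and ~39.65 degrees
```

The root reproduces both the block-construction value $\alpha_2 d \sim 5.949$ and the $39.65^\circ$ excitation angle quoted above.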
One other important aspect that should be noted from figure \[fig:4\] is that the plasmon-like modes can lead to much larger delays than the FP modes, due to the large quality factors associated with such modes. Delay in resonant tunnelling mediated by guided modes ===================================================== In this section, we consider again the RT configuration (see figure \[fig:3\]), albeit with two changes. We choose a spacer layer with $\epsilon_1=\epsilon_3=1.0$, so that $n_1<|Re(n_2)|$ and guided modes can be excited. In other words, the spacer layer is chosen to be optically rarer than the NIM. The other change is in the width of the spacer layers, namely, $d_1=d_3=0.25\mu m$, in order to optimize the excitation of the modes. The critical angle for the prism/air interface is 23.79$^\circ$, while that for the prism/NIM interface (at $\lambda=1.425\mu m$) is 35.25$^\circ$. For angles greater than 23.79$^\circ$ and less than 35.25$^\circ$, the waves are evanescent in the spacer layers, while they are propagating in the NIM layer. Thus guided modes can be excited in the NIM layer only for this range of angles. Any peak observed in the transmission profile in this range can be recognized as due to the excitation of these guided modes. ![Same as in figure \[fig:4\] except that $\epsilon_1 = \epsilon_3 = 1, ~\mu_1=\mu_3=1, ~d_1 = d_3 = 0.25 \mu m$. Inset in figure \[fig:5\]a shows the effective index of the modes as a function of guide width. Inset in figure \[fig:5\]c shows the normalized delays with (solid lines, values reduced by a factor of 200) and without (dashed lines) material dispersion for $\gamma=0.001$. []{data-label="fig:5"}](figure5a_5c){width="15cm"} We show the intensity transmission $T$, phase of transmission $\phi_t$ and the normalized delay $\tau/(d_T/c)$ as functions of the angle of incidence $\theta_i$ in figures \[fig:5\]a, b and c, respectively. The dashed, solid and dotted curves correspond to $\gamma=$ 0.0001, 0.001 and 0.01, respectively.
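The critical angles quoted in this and the previous section, and the angular window for guided modes, follow from elementary phase matching: total internal reflection from the prism ($\epsilon_i=6.145$) into an index $n_t$ occurs beyond $\theta_c=\arcsin(n_t/\sqrt{\epsilon_i})$, and a mode of effective index $n_{\rm eff}=k_x/k_0$ is prism-coupled at $\sqrt{\epsilon_i}\sin\theta_i=n_{\rm eff}$. A quick numerical check (the NIM index magnitude $\approx 1.431$ at $\lambda=1.425\mu m$ is the value implied by the quoted $35.25^\circ$, not tabulated in the text):

```python
import math

n_prism = math.sqrt(6.145)

def critical_angle(n_t):
    """Total-internal-reflection angle (deg) from the prism into index n_t."""
    return math.degrees(math.asin(n_t/n_prism))

def coupling_angle(n_eff):
    """Prism angle (deg) at which a mode of effective index n_eff = kx/k0
    is phase matched: sqrt(eps_i) sin(theta_i) = n_eff."""
    return math.degrees(math.asin(n_eff/n_prism))

th_air = critical_angle(1.0)       # ~23.79 deg (prism/air)
th_sil = critical_angle(1.5)       # ~37.24 deg (prism/silica)
th_nim = critical_angle(1.431)     # ~35.25 deg (prism/NIM, assumed |Re n|)
print(th_air, th_sil, th_nim)

# Guided modes live where the spacers are evanescent but the NIM is not:
for th in (27.76, 33.51):          # resonances read off figure 5
    assert th_air < th < th_nim
```

Both transmission resonances of figure \[fig:5\] indeed fall inside the $(23.79^\circ, 35.25^\circ)$ guided-mode window.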
It is clear from figure \[fig:5\] that two guided modes corresponding to $\theta_i=$ 27.76$^\circ$ and 33.51$^\circ$, respectively, are supported in the NIM layer. In the inset of figure \[fig:5\]a we plot the effective index, $n_{\rm eff}=k_x/k_0 $, of the modes of a lossless NIM guide in air as a function of the guide width. Different curves are labeled by the corresponding mode numbers. Recall that for a NIM guide the $m=0$ mode does not exist and the lowest allowed mode ($m=1$) has a cutoff (also noted in [@kivshar2003; @kong2003]). From the inset we confirm that for $d_2=2\mu m$ (vertical dashed line in the inset), only two guided modes (marked with circles) are supported. The excitation angles ($\theta_i$) for these values of $n_{\rm eff}$ are 31.61$^\circ$ (for $m=1$) and 27.28$^\circ$ (for $m=2$), respectively, and they match the RT data reasonably well. The residual mismatch can be attributed to the fact that losses were ignored. The loading of the guide with the prism in the RT configuration is another important source of this mismatch. The plot of the phase of transmission (figure \[fig:5\]b) shows positive jumps at the location of the guided mode resonances (compare with figure \[fig:4\]b). The plot of normalized delay (figure \[fig:5\]c) shows positive delays for both the guided modes. The intensity of transmission gets considerably weakened with increased damping, though the delay characteristic is retained. In the inset of figure \[fig:5\]c we plot the normalized delays for $\gamma=0.001$ with (solid line, values reduced by a factor of 200) and without (dashed line) including the effect of the material dispersion. We note again that consideration of material dispersion removes the erroneous negative delay features (obtained by disregarding the material dispersion) of the NIM. Finally, a comparison of figures \[fig:5\]c and \[fig:4\]c reveals that the delay with the plasmonic mode is usually higher than that with the guided mode.
This is perhaps due to the tighter confinement of the field and the larger quality factor of the plasmonic mode. Effects of losses in the framework of a causal response ======================================================= The treatment in the previous sections (3 and 4) at a given frequency was simple, and losses were introduced just to demonstrate their devastating effects on resonant tunnelling features. The realistic treatment of losses over the whole spectral domain is much more complicated. In order to have a causal response, one must have susceptibilities analytic in the upper half complex plane, which is a direct consequence of the fact that response functions vanish for negative arguments. Moreover, the real and imaginary parts of the susceptibility are related by the Kramers-Kronig relation. Recent analysis of causality in the context of NIM dispersion offers much insight into the underlying phenomena (with or without additional gain) [@stockman2007; @kinsler2008prl; @nistad2008prl]. These studies are based on the analytic properties of $n^2(\omega)=\epsilon(\omega)\mu(\omega)$. ![The real (solid) and imaginary (dashed) parts of refractive index (top row), the normalized delay (middle row) and FOM (bottom row) as a function of $\lambda$. The left top panel shows the fitted curves for digitized data from Dolling [@dolling2006]. The right panel shows the corresponding curves for improved magnetic resonance ($f=10$).[]{data-label="fig:6"}](figure6a_6b){width="15cm"} A compensation or reduction of material losses narrows down the regime over which the left-handed behaviour is observed. On the other hand, as noted in the preceding sections, increasing loss washes out the interesting features associated with NIM guides. The attainment of a lossless NIM thus reaches an impasse. There should thus be a trade-off between the material loss and the regime of negative index behaviour for practical metamaterials.
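This trade-off can be made quantitative with the Lorentz-type permeability analyzed next, $\mu(\omega)=\mu_\infty - F\omega_0^2/(\omega^2-\omega_0^2+i\gamma_m\omega)$, using the parameters fitted to the Dolling data ($\mu_\infty=0.6$, $\lambda_0=1.459\mu m$, $\gamma_m/\omega_0=0.028$, $F=4.425\,\gamma_m/\omega_0$). The sketch below scales $\gamma_m$ down and $F$ up by a common factor $f$ and tracks a purely magnetic figure of merit $-Re(\mu)/Im(\mu)$ at the working wavelength; since the digitized $\epsilon(\omega)$ is not reproduced here, the full FOM of figure \[fig:6\] is not computed:

```python
MU_INF, GAM0, F0 = 0.6, 0.028, 4.425*0.028   # mu_inf, gamma_m/omega_0, F
LAM0 = 1.459                                  # resonance wavelength (um)

def mu(lam, f=1.0):
    """Lorentz permeability at wavelength lam (um); f scales the damping
    down and the oscillator strength up (f = 1 reproduces the fit)."""
    x = LAM0/lam                              # omega/omega_0
    return MU_INF - (F0*f)/(x*x - 1.0 + 1j*(GAM0/f)*x)

def magnetic_fom(lam, f=1.0):
    m = mu(lam, f)
    return -m.real/m.imag

lam = 1.425                                   # working wavelength (um)
print(mu(lam, 1.0))        # Re(mu) < 0 but very lossy for the fitted data
print(magnetic_fom(lam, 1.0), magnetic_fom(lam, 10.0))
```

At $\lambda=1.425\mu m$ the fitted response ($f=1$) has $Re(\mu)<0$ but a magnetic figure of merit of only about 1.2, which improves by an order of magnitude for $f=10$.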
Magnetic losses are known to play a dominant role in controlling the negative index behaviour of metamaterials. In this section we analyze a material with Lorentz-type magnetic permeability given by [@zhang2005oe] $$\mu (\omega) = \mu_\infty - \frac{F \omega_0^2}{ \omega^2 - \omega_0^2+ i\gamma_m \omega},\label{eq:mucausal}$$ where $\mu_\infty$, $\omega_0$, $F$ and $\gamma_m$ have their usual meanings. Note that the experimental data of Dolling [@dolling2006] can be fitted to (\[eq:mucausal\]) with the following values of the parameters: $\mu_\infty=0.6$, $\lambda_0=2\pi c/\omega_0$=1.459 $\mu m$, $\gamma_m/\omega_0=0.028$ and $F=4.425 (\gamma_m/\omega_0)$. We used the digitized data for the permittivity. The resulting dispersion for the said parameter values is shown in the top panel of figure \[fig:6\]a, while the middle and the bottom panels show the corresponding normalized delay in a bulk sample and the FOM, respectively. It was mentioned earlier that in a guide with such material parameters, the RT features are completely washed out. Hence we probe the case of a NIM with improved magnetic response, whereby we scale down the damping $\gamma_m$ by a factor $f_\gamma$ while simultaneously scaling up the oscillator strength $F$ by $f_o$. The corresponding improved response for $f_o=f_\gamma=10$ is shown in figure \[fig:6\]b. ![Intensity transmission coefficient $T$ (top row), phase of transmission $\phi_t$ in units of $\pi$ (middle row) and the time delays (bottom row) as functions of the angle of incidence $\theta_i$ for the RT structure supporting plasmon-like modes (a) and guided modes (b). Parameters chosen for panel (a) are $\epsilon_1=\epsilon_3=4$, $d_1 = d_3 = 1 \mu m$, $d_2 = 0.5 \mu m$ and for panel (b) are $\epsilon_1=\epsilon_3=1$, $d_1 = d_3 = 0.05 \mu m$, $d_2 = 1 \mu m$.
The common parameters are $\epsilon_i=\epsilon_f=6.145$ and $f=$1 (solid), 10 (dashed), 100 (dotted).[]{data-label="fig:7"}](figure7a_7b){width="15cm"} We studied the transmission and pulse delay through a RT geometry for $f_o=f_\gamma=10, 100$ and compared them with the material of Dolling [@dolling2006] ($f_o=f_\gamma=1$). Delay in RT mediated by plasmon-like modes was studied in a structure with $d_1=d_3=1\mu m$, $\epsilon_1=\epsilon_3=4$ and $d_2=0.5\mu m$ at $\lambda=1.425\mu m$, and the results are shown in figure \[fig:7\]a. The critical angle for the prism/NIM interface is $44.42^\circ$ and that for the prism/spacer-layer interface is $53.79^\circ$. We show that plasmon-like modes can be supported in this structure with large time delays for improved magnetic responses. These features disappear for the experimental data of Dolling [@dolling2006]. A study of time delays mediated by guided modes with parameters $d_1=d_3=0.05\mu m$, $\epsilon_1=\epsilon_3=1$ and $d_2=1.0\mu m$ at $\lambda=1.425\mu m$ (figure \[fig:7\]b) shows that the supported guided mode exhibits enhanced transmission and delay for improved magnetic responses ($f_o=f_\gamma=10,100$), whereas the data corresponding to Dolling [@dolling2006] ($f_o=f_\gamma=1$) barely shows any transmission or delay features. Thus it should be possible to have enhanced delay characteristics with materials having improved magnetic responses. Conclusions =========== We studied pulse delay through a layered medium containing a NIM layer. We considered three specific cases, namely, a generic NIM, a realistic, recently reported NIM with the assumption of low losses, and a NIM with improved magnetic response. We also assumed the NIM to be homogeneous. We showed that large delays in transmission, mediated by the modes of the NIM guide, are achievable by exploiting a resonant tunnelling geometry only for very low-loss structures. Both guided and plasmon-like modes were exploited to this end.
We probed the role of structural and intrinsic NIM dispersion in this delay and stressed the need to retain both for its correct assessment. We also commented on how the large damping of contemporary metamaterials can destroy the RT transmission features. We probed a NIM with better magnetic response to show the reemergence of these features. These calculations were carried out in the framework of a full causal theory. We thus reiterated Veselago’s observation about the importance and necessity of retaining intrinsic NIM dispersion in all relevant problems. We also emphasized the need for truly low-loss metamaterials for the realization of such compact delay devices. The authors are thankful to the Department of Science and Technology, Government of India, for supporting this work. SDG also thanks Girish S Agarwal for interesting discussions. References {#references .unnumbered} ========== [99]{} Veselago V G 1968 [*Sov. Phys. Usp.*]{} [**10**]{} 509 Sarychev A K and Shalaev V M 2007 [*Electrodynamics of Metamaterials*]{} World Scientific Singapore Shalaev V M 2007 [*Nature Photonics*]{} [**1**]{} 41 (and references therein) Ramakrishna S A and Grzegorczyk T M 2008 [*Physics and Applications of Negative Refractive Index Materials*]{} London: CRC press Pendry J B 2000 [*Phys. Rev. Lett.*]{} [**85**]{} 3966 Shelby R A, Smith D R, Nemat-Nasser S C and Schultz S 2001 [*Appl. Phys. Lett.*]{} [**78**]{} 489 Fang N and Zhang X 2003 [*Appl. Phys. Lett.*]{} [**82**]{} 161 Zheludev N I, Prosvirnin S L, Papasimakis N and Fedotov V A 2008 [*Nature Photonics*]{} [**2**]{} 351 Engheta N 2007 [*Science*]{} [**317**]{} 1698 Dutta Gupta S, Arun R and Agarwal G S 2004 [*Phys. Rev. B*]{} [**69**]{} 113104 Papasimakis N, Fedotov V A and Zheludev N I 2008 [*Phys. Rev. Lett.*]{} [**101**]{} 253903 Shadrivov I V, Sukhorukov A A and Kivshar Y S 2003 [*Phys. Rev. E*]{} [**67**]{} 057602 Yang Y, Xu J, Chen H, and Zhu S 2008 [*Phys. Rev.
Lett.*]{}, [**100**]{} 043601 Kästel J and Fleischhauer M 2005 [*Phys. Rev. A*]{} [**71**]{} 011804 Xu J P, Yang Y P, Lin Q and Zhu S Y 2009 [*Phys. Rev. A*]{} [**79**]{} 043812 Dutta Gupta S 1998 [*Progress in Optics (ed. Wolf E)*]{}, North Holland, Amsterdam, [**38**]{} 1 Tomita S, Yokoyama T, Yanagi H, Wood B, Pendry J B, Fujii M and Hayashi S 2008 [*Opt. Exp.*]{} [**16**]{} 9942 Dutta Gupta S 2009 [*Pramana*]{} [**72**]{} 303 Wu B, Grzegorczyk T M, Zhang Y and Kong J A 2003 [*J. Appl. Phys.*]{} [**93**]{} 9386 Jose J 2009 [*J. Phys. B: At. Mol. Opt. Phys.*]{} [**42**]{} 095401 Dolling G, Enkrich C, Wegener M, Soukoulis C M and Linden S 2006 [*Opt. Lett.*]{} [**31**]{} 1800 Stockman M I 2007 [*Phys. Rev. Lett.*]{} [**98**]{} 177404 Kinsler P and McCall M W 2008 [*Phys. Rev. Lett.*]{} [**101**]{} 167401 Smith D R and Kroll N 2000 [*Phys. Rev. Lett.*]{} [**85**]{} 2933 Wigner E P 1955 [*Phys. Rev.*]{} [**98**]{} 145 Valentine J, Zhang S, Zentgraf T, Ulin-Avila E, Genov D A, Bartal G and Zhang X 2008 [*Nature*]{} [**455**]{} 376 Zhang S, Fan W, Panoiu N C, Malloy K J, Osgood R M and Brueck S R J 2005 [*Phys. Rev. Lett.*]{} [**95**]{} 137404 Kildishev A V, Cai W, Chettiar U K, Yuan Hsiao-Kuan, Sarychev A K, Drachev V P, Shalaev V M 2006 [*JOSA B*]{} [**23**]{} 423 Nistad B and Skaar J 2008 [*Phys. Rev. E*]{} [**78**]{} 036603 Zhang S, Fan W, Malloy K J, Brueck S R J, Panoiu N C and Osgood R M 2005 [*Opt. Exp.*]{} [**13**]{} 4922
--- abstract: 'In this paper we present an algorithm for computing all algebraic intermediate subfields in a separably generated unirational field extension (which in particular includes the characteristic zero case). One of the main tools is the theory of Gröbner bases, see [@BW93]. Our algorithm also requires computing primitive elements and factoring over algebraic extensions. Moreover, the method can be extended to finitely generated ${\mathbb{K}}$-algebras.' address: 'Faculty of Science, University of Cantabria, E-39071 Santander, Spain' author: - Jaime Gutierrez - David Sevilla title: Computation of unirational fields --- Introduction ============ The goal of this paper is to study the problem of computing intermediate fields between a rational function field and a given subfield of it. This computational problem has many applications, not only in other areas of mathematics like Algebraic Geometry, but also in Computer Aided Geometric Design. The question of the structure of the lattice of such intermediate fields is of theoretical interest by itself; we will focus on the computational aspects, like deciding if there are proper intermediate fields and computing them in the affirmative case. In the univariate case, the problem can be stated as follows: given $f_1,\ldots,f_m\in{\mathbb{K}}(t)$, find a field ${\mathbb{F}}$ such that ${\mathbb{K}}(f_1,\ldots,f_m)\varsubsetneq{\mathbb{F}}\varsubsetneq{\mathbb{K}}(t)$. By Lüroth’s Theorem this is equivalent to the problem of decomposing the rational functions. Algorithms for decomposition of univariate rational functions can be found in [@Zip91] and [@AGR95]. In the multivariate case, the problem can be stated as follows: Let ${\mathbb{K}}$ be a field and ${\mathbb{K}}(x_1,\ldots,x_n)={\mathbb{K}}({\mathbf{x}})$ be the rational function field in the variables ${\mathbf{x}}=(x_1,\ldots,x_n)$.
Given rational functions $f_1,\ldots,f_m\in{\mathbb{K}}({\mathbf{x}})$, compute a proper unirational field ${\mathbb{F}}$ between ${\mathbb{K}}(f_1,\ldots,f_m)$ and ${\mathbb{K}}({\mathbf{x}})$, if it exists. Any unirational field is finitely generated over ${\mathbb{K}}$ (see [@Nag93]). Thus, by computing an intermediate field we mean that such a finite set of generators is to be calculated. Regarding algorithms for this problem, see [@MS99], where the authors generalize the method of [@AGR95] to several variables, by converting this problem into the calculation of a primary ideal decomposition. Primary ideal decompositions can be computed via Gröbner bases. The book [@BW93] by T. Becker and V. Weispfenning is an excellent reference guide to this important theory and its applications. It is not difficult to realize that the solution of the problem is trivial and uninteresting for most choices of $f_1,\ldots,f_m$, since it is easy to construct infinitely many intermediate fields when the transcendence degree of ${\mathbb{K}}(f_1,\ldots,f_m)$ over ${\mathbb{K}}$ is smaller than $n$. Due to this, we will focus on the following version of the problem. \[prob-alg\] Given functions $f_1,\ldots,f_m \in{\mathbb{K}}({\mathbf{x}})$, find all the fields ${\mathbb{F}}$ between ${\mathbb{K}}(f_1,\ldots,f_m)$ and ${\mathbb{K}}({\mathbf{x}})$ that are algebraic over ${\mathbb{K}}(f_1,\ldots,f_m)$. There are finitely many algebraic intermediate fields if the original extension is separable. The special case of Problem \[prob-alg\] when the transcendence degree of ${\mathbb{K}}(f_1,\ldots,f_m)/{\mathbb{K}}$ is 1 has been treated in [@GRS01]. In this case a generalization of Lüroth’s Theorem applies, so the problem is equivalent to the so-called uni-multivariate decomposition. The paper [@GRS02] provides a very efficient constructive proof of the theorem mentioned above and it also contains different decomposition algorithms for multivariate rational functions.
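To fix ideas in the univariate case, consider ${\mathbb{K}}(t^4)\subset{\mathbb{K}}(t)$: the minimal polynomial of $t$ over ${\mathbb{K}}(t^4)$ is $z^4-t^4$, and factoring it over ${\mathbb{K}}(t)$ and multiplying the block of linear factors $(z-t)(z+t)=z^2-t^2$ exhibits the generator $t^2$ of the intermediate field ${\mathbb{K}}(t^2)$, corresponding to the decomposition $t^4=(t^2)^2$. A sketch with SymPy (this toy example and the final subalgebra check are ours, not taken from the paper):

```python
from sympy import symbols, factor_list, expand, groebner, reduced

t, z, y = symbols('t z y')

# Minimal polynomial of alpha = t over K(t**4): p_alpha(z) = z**4 - t**4.
factors = [f for f, _ in factor_list(z**4 - t**4)[1]]
linear = [f for f in factors if f.as_poly(z).degree() == 1]

# Two linear factors, (z - t) and (z + t); their product is a block
# polynomial whose constant coefficient generates K(t**2).
h = expand(linear[0]*linear[1])
print(h)                         # z**2 - t**2 up to an overall sign

# Tag-variable check that t**4 lies in K[t**2], i.e. K(t**4) <= K(t**2) <= K(t):
G = groebner([y - t**2], t, y, order='lex')
_, rem = reduced(t**4, G.exprs, t, y, order='lex')
print(rem)                       # y**2: only the tag variable remains
```

The remainder $y^2$ rewrites $t^4$ as $(t^2)^2$, certifying the inclusion of fields.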
In some sense, Problem \[prob-alg\] can be seen as a generalization of the univariate rational function decomposition problem. In this paper we will combine several techniques of Computational Algebra to create an algorithm that finds all the intermediate fields that are algebraic over the smaller field. Moreover, our method can be extended to finitely generated ${\mathbb{K}}$-algebras, that is, the case where the ambient field is ${\mathbb{K}}(z_1,\ldots,z_n)={\mathbb{K}}({\mathbf{z}})$ for some $z_1,\ldots,z_n$ transcendental over ${\mathbb{K}}$ that need not be algebraically independent, and ${\mathbb{K}}({\mathbf{z}})$ is the quotient field of a polynomial ring, so that we have $${\mathbb{K}}({\mathbf{z}})=QF\left({\mathbb{K}}[x_1,\ldots,x_n]/I\right)$$ for some prime ideal $I\subset{\mathbb{K}}[x_1,\ldots,x_n]$ that will be given explicitly by means of a finite system of generators. Unsurprisingly, the algorithm will be much simpler when ${\mathbb{K}}({\mathbf{x}})$ is rational, that is, when $I=(0)$. Main Results ============ First, we can use Gröbner bases to compute and manipulate various elements in our extensions, see [@Swe93] and [@BW93]. We can compute transcendence and algebraic degrees of unirational fields, decide whether an element is transcendental or algebraic over a field, compute its minimum polynomial in the latter case, and decide membership. Moreover, we can compute separating transcendence bases in the separable case without, properly speaking, using Gröbner bases, see [@Ste00]. The next step is solving the problem when the given extension is algebraic.
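Before that, here is a small illustration of the tag-variable technique of [@Swe93]: membership in the subalgebra ${\mathbb{K}}[f_1,\ldots,f_m]$ (a simpler cousin of the field computations above) can be decided by reducing modulo a Gröbner basis of $(y_1-f_1,\ldots,y_m-f_m)$ for a lex order with the $x$-variables largest. The element lies in the subalgebra iff its remainder involves only the tag variables $y_i$, and the remainder then rewrites it in terms of the $f_i$. A sketch with SymPy (the concrete example is ours):

```python
from sympy import symbols, groebner, reduced

x1, x2, y1, y2 = symbols('x1 x2 y1 y2')
f1, f2 = x1 + x2, x1*x2            # elementary symmetric polynomials
h = x1**2 + x2**2                  # candidate element

# Tag-variable ideal (y1 - f1, y2 - f2); lex order, x-variables largest.
G = groebner([y1 - f1, y2 - f2], x1, x2, y1, y2, order='lex')
_, r = reduced(h, G.exprs, x1, x2, y1, y2, order='lex')
print(r)                           # y1**2 - 2*y2: only tag variables remain
```

Here the remainder $y_1^2-2y_2$ certifies $x_1^2+x_2^2=f_1^2-2f_2\in{\mathbb{K}}[f_1,f_2]$.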
We can rewrite the fields in the following way: - There exist rational functions $\hat\alpha_1,\ldots,\hat\alpha_n$ such that ${\mathbb{K}}(\hat\alpha_1,\ldots,\hat\alpha_n)/{\mathbb{K}}$ is a purely transcendental extension, with $${\mathbb{K}}(\hat\alpha_1,\dots,\hat\alpha_n)\subset{\mathbb{K}}(f_1,\dots,f_m)\subset{\mathbb{K}}(x_1,\dots,x_n).$$ - There exist $\hat\alpha_{n+1},f$ algebraic over ${\mathbb{K}}(\hat\alpha_1,\ldots,\hat\alpha_n)$ such that $$\begin{array}{ll} {\mathbb{K}}(f_1,\dots,f_m)= & {\mathbb{K}}(\hat\alpha_1,\ldots,\hat\alpha_n,\hat\alpha_{n+1}), \\ {\mathbb{K}}(x_1,\dots,x_n)= & {\mathbb{K}}(\hat\alpha_1,\dots,\hat\alpha_n,f). \end{array}$$ Also, for any intermediate field in the extension there is $h$ algebraic over ${\mathbb{K}}(\hat\alpha_1,\ldots,\hat\alpha_n)$ such that $${\mathbb{F}}={\mathbb{K}}(\hat\alpha_1,\ldots,\hat\alpha_n,h).$$ Thus, we can work in a simple algebraic extension. Let ${\mathbb{E}}={\mathbb{K}}(t_1,\ldots,t_n)$ be a purely transcendental field over ${\mathbb{K}}$, ${\mathbb{E}}[\alpha]/{\mathbb{E}}$ an algebraic separable extension. Then, there exists a bijection between the set of intermediate fields of ${\mathbb{E}}\subset{\mathbb{E}}[\alpha]$ and the set of subgroups of the Galois group $G$ that contain $G_\alpha$. Moreover, if ${\mathbb{E}}[\beta],{\mathbb{E}}[\gamma]\subset{\mathbb{E}}[\alpha]$ are intermediate fields, we can decide if ${\mathbb{E}}[\beta]\subset{\mathbb{E}}[\gamma]$. It turns out that by factoring the minimal polynomial of $\alpha$ over ${\mathbb{E}}[\alpha]$, we can compute the intermediate fields of the extension ${\mathbb{E}}[\alpha]/{\mathbb{E}}$. This is accomplished by means of decomposition blocks and, from the computational point of view, factorization of polynomials in algebraic extensions, see [@Tra76], [@YNT89], [@Rub01] and [@LM85]. \[alg-algebraic\] $ $ [**\[A\]**]{} Factor $p_\alpha(z)$ over ${\mathbb{E}}[\alpha]$.
[**\[B.1\]**]{} If $p_\alpha(z)$ has more than one linear factor: $$p_\alpha(z)=(z-\alpha)(z-p_2(\alpha))\cdots(z-p_r(\alpha)) p_{r+1}(z,\alpha)\cdots p_{r'}(z,\alpha)$$ - Compute a minimal subgroup $G_{\psi}$ of $<\{\sigma_i:\alpha\mapsto p_i(\alpha)\}>$. - Consider $h(z)=\prod_{\sigma\in G_{\psi}}(z-\sigma(\alpha))=a_uz^u+\cdots+a_0$. - Take $a_i$ such that ${\mathbb{E}}[a_i]$ is a proper intermediate field of ${\mathbb{E}}\subset{\mathbb{E}}[\alpha]$. [**\[B.2\]**]{} If $p_\alpha(z)=(z-\alpha)p_2(z,\alpha)\cdots p_{r'}(z,\alpha)$, with $p_i$ non-linear: - Consider a factor $P_2(z)=h(z, \alpha)(z-\alpha)$ of $p_\alpha(z)$. $$P_2=(z-\alpha)h(z,\alpha)=a_uz^u+\cdots+a_0.$$ - If ${\mathbb{E}}[a_i]= {\mathbb{E}}[\alpha]$ for all $i$, then take another factor. In order to solve the general problem, we will compute the algebraic closure of the given field in the ambient field. We will look for the minimum field ${\mathbb{F}}_0$ that contains all the intermediate algebraic fields over the given one. We adapt our data according to the algorithm in [@BV93] and [@Vas98]. - Let $h$ be the least common denominator of the rational functions $f_i\in{\mathbb{K}}({\mathbf{x}})$. - Let $\Phi:\ {\mathbb{K}}[y_1,\ldots,y_m]\to{\mathbb{K}}[x_1,\ldots,x_n,1/h]$ be defined by $\Phi(y_i)=f_i$ for each $i=1,\dots,m$. - Let ${\mathbb{D}}_1=\Phi({\mathbb{K}}[y_1,\ldots,y_m])={\mathbb{K}}[f_1,\ldots,f_m]$. We have that ${\mathbb{D}}_1={\mathbb{K}}[y_1,...,y_m]/{\operatorname{Ker}}(\Phi)$ is a finitely generated ${\mathbb{K}}$-algebra. Also, the field of fractions of ${\mathbb{D}}_1$ is ${\mathbb{K}}(f_1,...,f_m)$. - Let ${\mathbb{D}}_2={\mathbb{D}}_1[x_1,\ldots,x_n]={\mathbb{K}}[x_1,\ldots,x_n,1/h]$. The field of fractions of ${\mathbb{D}}_2$ is ${\mathbb{K}}({\mathbf{x}})$. - Let $t$ be a new variable and ${\mathbb{D}}={\mathbb{D}}_1[t,x_1,\ldots,x_n]\subset{\mathbb{D}}_2[t]$; this inclusion is a birational monomorphism.
Compute the integral closure $\overline{\mathbb{D}}$ of the extension ${\mathbb{D}}\subset{\mathbb{D}}_2[t]$ according to [@Vas98]. The integral closure of the extension ${\mathbb{D}}_1\subset{\mathbb{D}}_2$ is ${\mathbb{D}}_0=\overline{\mathbb{D}}\cap{\mathbb{D}}_2$. - Then ${\mathbb{F}}_0$ is the field of fractions of ${\mathbb{D}}_0$. Summarizing the results we have presented, we have the following algorithm to find intermediate unirational fields over a given field, if the extension is separable. \[alg-general\] $ $ <span style="font-variant:small-caps;">Input</span>: $f_1,\ldots,f_m\in{\mathbb{K}}({\mathbf{x}})$. <span style="font-variant:small-caps;">Output</span>: rational functions $h_1,\ldots,h_r$ such that $${\mathbb{K}}(f_1,\ldots,f_m)\varsubsetneq{\mathbb{K}}(h_1,\ldots,h_r)\varsubsetneq{\mathbb{K}}({\mathbf{x}}).$$ <span style="font-variant:small-caps;">A</span>. Compute the algebraic closure of ${\mathbb{K}}(f_1,\ldots,f_m)$ relative to ${\mathbb{K}}({\mathbf{x}})$. <span style="font-variant:small-caps;">B</span>. Find a separating basis of ${\mathbb{K}}(f_1,\ldots,f_m)$. <span style="font-variant:small-caps;">C</span>. Rewrite the fields to obtain a simple algebraic extension. <span style="font-variant:small-caps;">D</span>. Factor the minimum polynomial obtained in the algebraic extension. <span style="font-variant:small-caps;">E</span>. Compute the decomposition blocks that correspond to the factors found before. <span style="font-variant:small-caps;">F</span>. If such a block exists, compute an intermediate field. <span style="font-variant:small-caps;">G</span>. Recover the generators of the intermediate field in terms of the variables ${\mathbf{x}}$. Something that is worth mentioning is the fact that all the computations can also be performed if the ambient field is not a rational field but one of type $QF\left({\mathbb{K}}[x_1,\ldots,x_n]/I\right)$ for some prime ideal $I$, the given extension being separable. 
However, in this more general setting the theoretical and practical complexity increases greatly, since the representations of the elements are larger and all checks of the form $f=0$ become membership tests $f\in\mathcal{B}_{{\mathbb{K}}({\mathbf{x}})/{\mathbb{K}}}$. Conclusions =========== We have presented algorithms for resolving several issues related to rational function fields. Our approach combines several useful computational algebra tools. Many interesting questions remain unresolved. Unfortunately, we do not know whether the computed intermediate field is rational or not; the reason is that the algorithm always produces an intermediate field generated by one more element than the transcendence degree. It would be interesting to investigate under which circumstances our algorithm can produce an intermediate subfield generated by as many elements as the transcendence degree. From a more practical point of view, we would like to have either a good algorithm or a good implementation to compute a factorization of a polynomial over an algebraic extension. Concerning applications, we regard the future interrelation of our techniques with the factorization of morphisms and regular maps between affine and projective algebraic sets. [AGR95]{} C. Alonso, J. Gutierrez, and T. Recio. A rational function decomposition algorithm by near-separated polynomials. , 19(6):527–544, 1995. J. Brennan and W. Vasconcelos. Effective computation of the integral closure of a morphism. , 86(2):125–134, 1993. T. Becker and V. Weispfenning. . Springer-Verlag, New York, 1993. J. Gutiérrez, R. Rubio, and D. Sevilla. Unirational fields of transcendence degree one and functional decomposition. pages 167–174, 2001. J. Gutiérrez, R. Rubio, and D. Sevilla. On multivariate rational function decomposition. , 33(5):545–562, 2002. S. Landau and G. L. Miller. Solvability by radicals is in polynomial time. , 30(2):179–208, 1985. J. Müller-Quade and R. Steinwandt. Basic algorithms for rational function fields. , 27(2):143–170, 1999. M. Nagata. .
Translations of Mathematical Monographs, 125. American Mathematical Society, Providence, RI, 1993. R. Rubio. . Ph.D. thesis, Dep. of Mathematics, University of Cantabria, 2001. R. Steinwandt. On computing a separating transcendence basis. , 34(4):3–6, 2000. M. Sweedler. Using Gröbner bases to determine the algebraic and transcendental nature of field extensions: return of the killer tag variables. , 673:66–75, 1993. B. Trager. Algebraic factoring and rational function integration. pages 219–228, 1976. W. Vasconcelos. . Springer-Verlag, 1998. K. Yokoyama, M. Noro, and T. Takeshima. Computing primitive elements of extension fields. , 8(6):553–580, 1989. R. Zippel. Rational function decomposition. pages 1–6, 1991. [Jaime Gutierrez]{} [jaime.gutierrez@unican.es]{}[http://personales.unican.es/gutierrj/]{} has been an Associate Professor of Mathematics at the University of Cantabria, Spain, since 1991. His main interests are Computational Algebra, Coding Theory and Cryptography. [David Sevilla]{}[david.sevilla@unican.es]{}[http://personales.unican.es/sevillad/]{} received his Ph.D. in Mathematics in March 2004 and is currently researching functional decomposition and other topics in computational algebra at the University of Cantabria, Spain.
--- abstract: 'We consider the compressive sensing of a sparse or compressible signal ${\bf x} \in {\mathbb R}^M$. We explicitly construct a class of measurement matrices, referred to as the low density frames, and develop decoding algorithms that produce an accurate estimate $\hat{\bf x}$ even in the presence of additive noise. Low density frames are sparse matrices and have small storage requirements. Our decoding algorithms for these frames have $O(M)$ complexity. Simulation results are provided, demonstrating that our approach significantly outperforms state-of-the-art recovery algorithms for numerous cases of interest. In particular, for Gaussian sparse signals and Gaussian noise, we are within 2 dB of the theoretical lower bound in most cases.' author: - 'Mehmet Akçakaya,  Jinsoo Park and Vahid Tarokh [^1]' title: Compressive Sensing Using Low Density Frames --- Low density frames, compressive sensing, sum product algorithm, expectation maximization, Gaussian scale mixtures Introduction ============ Let ${\bf x} = (x_1, \dots, x_M) \in {\mathbb R}^M$ with $||{\bf x}||_0 = |\{x_i: x_i \neq 0\}| = L$. ${\bf x}$ is said to be sparse if $L \ll M$. Consider the equation $$\label{noisy_cs_eq} {\bf y = Ax + n},$$ where ${\bf A}$ is an $N \times M$ measurement matrix and ${\bf n} \in {\mathbb R}^N$ is a noise vector. When $N < M$, ${\bf y}$ is called the compressively sensed version of ${\bf x}$ with measurement matrix ${\bf A}$. In this paper, we are interested in coming up with a good estimate $\hat{\bf x}$ for a sparse vector ${\bf x}$ from the observed vector ${\bf y}$ and the measurement matrix ${\bf A}$. We refer to the case ${\bf n = 0}$ as *noiseless compressive sensing*. This is the only case when ${\bf x}$ can be perfectly recovered.
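A toy instance of this setup, with an exhaustive minimum-$\ell_0$ decoder, makes the definitions concrete. The sizes and the decoder below are our own illustration, not an algorithm from this paper, and the search is exponential in the sparsity:

```python
import itertools
import numpy as np

def l0_decode(A, y, L_max, tol=1e-9):
    """Exhaustive minimum-l0 decoder: try all supports of size 0..L_max and
    return the sparsest x with ||A x - y|| < tol.  Exponential in L_max --
    for tiny illustrative problems only."""
    M = A.shape[1]
    for L in range(L_max + 1):
        for S in itertools.combinations(range(M), L):
            S = list(S)
            coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0] if S else np.zeros(0)
            if np.linalg.norm(A[:, S] @ coef - y) < tol:
                x = np.zeros(M)
                x[S] = coef
                return x
    return None

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))      # N = 4 measurements, M = 6 unknowns
x_true = np.zeros(6)
x_true[1], x_true[4] = 1.5, -2.0     # L = 2, so N = 2L
y = A @ x_true                       # noiseless measurements
x_hat = l0_decode(A, y, L_max=2)
```

For a generic Gaussian `A`, any $2L$ columns are linearly independent, so the sparsest consistent solution is the true one.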
In particular, it can be shown [@Candes-Tao; @Needell3] that if ${\bf A}$ has the property that any $N$ of its columns are linearly independent, then a decoder can recover ${\bf x}$ uniquely from $N = 2L$ samples by solving the $\ell_0$ minimization problem $$\label{ell0_eq} \min ||{\bf x}||_0 \:\:\: \textrm{ s. t. } \:\:\: {\bf y = Ax}.$$ However, solving this $\ell_0$ minimization problem for general ${\bf A}$ is NP-hard [@Tropp]. An alternative solution method proposed in the literature is the $\ell_1$ regularization approach, where $$\label{ell1_eq} \min ||{\bf x}||_1 \:\:\: \textrm{ s. t. } \:\:\: {\bf y = Ax},$$ is solved instead. Criteria have been established in the literature as to when the solution of (\[ell1\_eq\]) coincides with that of (\[ell0\_eq\]) in the noiseless case for various classes of measurement matrices [@Candes-Tao; @Donoho2]. In an important contribution, for ${\bf A}$ belonging to the classes of Gaussian and partial Fourier ensembles, Candès and Tao showed [@Candes-Tao] that this recovery problem can be solved for $L = O(M)$ with $N=O(L)$ as long as the observations are not contaminated with (additive) noise. It can be shown that there is a relationship between the solution to problem (\[noisy\_cs\_eq\]) and the minimum Hamming distance problem in coding theory [@AkTar; @AkTar-isit; @Baron1]. This approach was further exploited in [@Xu]. Using this connection, we constructed ensembles of measurement matrices[^2] and associated decoding algorithms that solve the $\ell_0$ minimization problem with complexity $O(MN)$ for $L = O(M)$ with $N=O(L)$ in the noiseless case [@AkTar; @AkTar-isit]. For problem (\[noisy\_cs\_eq\]) with non-zero ${\bf n}$, referred to as *noisy compressive sensing*, the $\ell_1$ regularization approach of (\[ell1\_eq\]) can also be applied. For a measurement matrix ${\bf A}$ that satisfies a property called the restricted isometry property (RIP), the quadratic program $$\min ||{\bf x}||_1 \:\:\: \textrm{ s. t.
} \:\:\: ||{\bf Ax - y}||_2 \leq \epsilon,$$ can be solved for a parameter $\epsilon$ related to $||{\bf n}||_2$, and an estimate $\hat{\bf x}_{\textrm{QP}}$ can be obtained such that $||\hat{\bf x}_{\textrm{QP}} - {\bf x}||_2 \leq C_1\: \epsilon,$ where $C_1$ is a constant that depends on ${\bf A}$ [@Candes-Romberg-Tao]. If ${\bf n} \sim {\cal N}(0, \sigma^2 {\bf I}_N)$, another approach is the Dantzig Selector $$\min ||{\bf x}||_1 \:\:\: \textrm{ s. t. } \:\:\: ||{\bf A^{*}(Ax - y)}||_{\infty} \leq \gamma,$$ where $\gamma$ is a function of $\sigma$ and $M$. This gives an estimate $\hat{\bf x}_{\textrm{DS}}$ such that ${\mathbb E}_{\bf n}||\hat{\bf x}_{\textrm{DS}} - {\bf x}||_2^2 \leq C_2 (\log M) \sum_i \min(x_i^2, \sigma^2),$ where $C_2$ is a constant that depends on ${\bf A}$ [@Dantzig]. Both these methods may not be easily implemented in real time with the limitations of today’s hardware. To improve the running time of $\ell_1$ methods, some authors have investigated using sparse matrices for ${\bf A}$ [@Berinde2]. Using the expansion properties of the graphs represented by such matrices, it was shown in [@Berinde2] that it is possible to obtain an estimate $\hat{\bf x}_{\textrm{E}}$ such that $||\hat{\bf x}_{\textrm{E}} - {\bf x}||_1 \leq C_3 ||{\bf n}||_1$, where $C_3$ is a constant that depends on ${\bf A}$. Another strand of work studies recovery algorithms based on the matching pursuit algorithm [@Gilbert]. Recently, variants of this algorithm, e.g. Subspace Pursuit [@Dai] and CoSaMP [@Needell3], have been proposed. Both algorithms provably work for measurement matrices satisfying RIP, and guarantee perfect reconstruction in the noiseless setting for $N = O(L \log(M/L))$ as the $\ell_1$ recovery methods do. For the noisy problem, they also offer similar guarantees to $\ell_1$ methods. 
These algorithms have complexity $O({\cal L} \log R)$, where ${\cal L}$ is the complexity of matrix-vector multiplication ($O(MN)$ for Gaussian matrices, $O(N\log N)$ for partial Fourier ensembles) and $R$ is a precision parameter bounded above by $||{\bf x}||_2$ (which is $O(N)$ for a fixed signal-to-noise ratio). In [@Berinde3], Sparse Matching Pursuit (SMP) was proposed for sparse ${\bf A}$, and this algorithm has $O(M \log(M/L))$ complexity. Yet another direction in compressive sensing is the use of the Bayesian approach. In [@Carin], the idea of the relevance vector machine (RVM) [@Tipping] has been applied to compressive sensing. Although simulation results indicate that the algorithm has good performance, it has complexity $O(MN^2)$. In this paper, we study the construction of measurement matrices that can be stored and manipulated efficiently in high dimensions, and fast decoding algorithms that generate estimates with small $\ell_2$ distortion. The ensemble of measurement matrices is a generalization of LDPC codes, and we refer to them as *low density frames* (LDFs). For our decoding algorithms, we combine various ideas from coding theory, statistical learning theory and estimation theory. Simulation results are provided indicating excellent distortion performance at high levels of sparsity and for high levels of noise. The outline of this paper is given next. In Section \[sec:LDF\], we introduce low density frames and study their basic properties. In Section \[sec:suprem\], we introduce various concepts used in our algorithms and describe the decoding algorithms. In Section \[sec:sims\], we provide extensive simulation results for a number of different testing criteria. Finally, in Section \[sec:conc\], we present our conclusions and provide directions for future research.
Low Density Frames {#sec:LDF} ================== Let ${\mathcal{F}} = \{ {\boldsymbol{\phi}}_1, {\boldsymbol{\phi}}_2, \cdots, {\boldsymbol{\phi}}_M \}$ be a frame consisting of $M \ge N $ non-zero vectors which span ${\mathbb R}^N$. Let ${\boldsymbol{\phi}}_i = (\phi_{1,i}, \cdots, \phi_{N,i})$ for $i=1,2, \cdots, M$. This frame can be represented in matrix form as an $N \times M$ matrix $$\begin{aligned} {\bf F} = \left( \begin{array}{cccc} \phi_{1,1} & \phi_{1,2} & \cdots & \phi_{1,M} \\ \phi_{2,1} & \phi_{2,2} & \cdots &\phi_{2,M} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{N,1} & \phi_{N,2} & \cdots & \phi_{N,M} \end{array} \right).\end{aligned}$$ A [*low density frame*]{} (LDF) ${\cal F}$ is defined by a matrix ${\bf F}$ in which the vast majority of the elements of each column and each row are zero. Formally, we define a $(d_v, d_c)$-[*regular*]{} LDF as a matrix ${\bf F}$ that has $d_c$ non-zero elements in each row and $d_v$ non-zero elements in each column. Clearly $M d_v = N d_c$. We also note that the redundancy of the frame is $r = M/N = d_c/d_v$. We will restrict ourselves to [*binary*]{} regular LDFs, where the non-zero elements of ${\bf F}$ are ones. The density $\rho$ of a frame ${\bf F}$ is the ratio of the number of non-zero entries of ${\bf F}$ to the total number of entries of ${\bf F}$. In this paper, we consider regular LDFs for which $\rho = (M d_v)/(MN) = d_v / N \ll 1$. As with LDPC codes, it is natural to represent LDFs using bipartite graphs. Furthermore, there is a well-established literature on inference in graphical models. Some of these methods can be used as a basis for recovery algorithms in the context of compressive sensing. To this end, we next summarize two important ideas from graphical models, namely factor graphs and the sum-product algorithm, and show how LDFs can be viewed as factor graphs.
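A concrete way to build such a matrix is Gallager's construction for regular LDPC parity-check matrices: stack $d_v$ blocks of $N/d_v$ rows, where the first block is a "staircase" (row $j$ has ones in columns $jd_c,\ldots,(j+1)d_c-1$) and each remaining block is a random column permutation of it. The sketch below is our own illustration, at toy sizes where the density is not actually small:

```python
import random

def gallager_ldf(N, d_v, d_c, seed=0):
    """Binary (d_v, d_c)-regular frame via Gallager's LDPC construction.
    Each of the d_v blocks gives every column exactly one 1 and every block
    row exactly d_c ones, so the stacked matrix is (d_v, d_c)-regular."""
    assert N % d_v == 0
    rows_per_block = N // d_v
    M = rows_per_block * d_c          # so that M * d_v == N * d_c
    rng = random.Random(seed)
    F = [[0] * M for _ in range(N)]
    for b in range(d_v):
        perm = list(range(M))
        if b > 0:
            rng.shuffle(perm)
        for j in range(M):
            # within block b, column perm[j] gets its single 1 in local row j // d_c
            F[b * rows_per_block + j // d_c][perm[j]] = 1
    return F

F = gallager_ldf(N=6, d_v=3, d_c=6)   # a 6 x 12 (3,6)-regular frame, r = 2
```

Because the blocks use disjoint rows and each column carries one 1 per block, the row and column degrees are exactly $d_c$ and $d_v$ by construction.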
Factor Graphs ------------- Factor graphs are used to represent factorizations of functions of several variables [@Bishop; @FactorGraphs]. Let $f({\bf w})$ be a function of several variables that can be factored as $$\label{fac1} f({\bf w}) = \prod_{s} f_s ({\bf w}_s).$$ In this factorization each *factor* $f_s$ is a function only of ${\bf w}_s$, a subset of the *variables* ${\bf w}$. A *factor graph* depicting (\[fac1\]) consists of variable nodes represented by circles, factor nodes represented by bold squares, and undirected edges connecting each factor node to all the variable nodes involved in that factor. ![Factor graph representing the function in Equation \[basfac\].[]{data-label="factor_basic"}](factor_basic_pn.PNG){width="1.2in"} As an example, the factor graph representing $$\label{basfac} f({\bf w}) = f_a (w_1, w_2, w_3) f_b(w_2, w_3, w_4) f_c(w_1, w_4)$$ is depicted in Figure \[factor\_basic\]. Sum-Product Algorithm --------------------- The natural inference algorithm for factor graphs is the sum-product algorithm [@Bishop; @Gallager]. This algorithm is an exact inference algorithm for tree-structured graphs (i.e. graphs with no cycles), and is usually described over discrete alphabets. However, the ideas also apply to continuous random variables with the sums replaced by integrals. In doing so, the computational cost of implementation increases; this issue will be addressed later. Suppose the goal is to find the marginal density $p(w)$ for a particular variable $w$. In particular, we have $$p(w) = \sum_{{\bf w} \setminus w} p({\bf w}).$$ One treats $w$ as the root node of a tree, and looks at the subtrees connected to $w$ via factor nodes. Using this approach, the joint distribution can be written as $$\label{factor1} p({\bf w}) = \prod_{s \in ne(w)} F_s(w, W_s),$$ where $ne(w)$ is the neighborhood of $w$, i.e.
the set of factor nodes that are connected to $w$, and $W_s$ is the set of variable nodes in the subtree connected to the factor node $f_s$ in $ne(w)$ [@Bishop]. $F_s(w, W_s)$ represents the product of the factors in the subtree associated with $f_s$. Interchanging the summation and the products yields $$p(w) = \prod_{s \in ne(w)} \mu_{f_s \to w} (w),$$ where $\mu_{f_s \to w} (w)$ is the message sent from factor node $f_s$ to variable node $w$. One can show [@Bishop] that $$\mu_{f_s \to w} (w) = \sum_{{\bf w}_s \setminus w} f_s(w, {\bf w}_s) \prod_{m \in ne(f_s) \setminus w} \mu_{w_m \to f_s} (w_m),$$ where ${\bf w}_s$ are all variable nodes connected to the factor node $f_s$, including $w$, and $ne(f_s)$ are the set of variable nodes connected to the factor node $f_s$. One can also show [@Bishop] $$\mu_{w_m \to f_s} (w_m) = \prod_{l \in ne(w_m)\setminus f_s} \mu_{f_l \to w_m} (w_m).$$ Thus there are two types of messages, one type going from factor nodes to variable nodes denoted $\mu_{f \to w}$ and the other going from variable nodes to factor nodes denoted $\mu_{w \to f}$. The message propagation starts from the leaves of the factor graph. A leaf variable node sends an identity message $\mu_{w \to f}(w) = 1$ to its parent, whereas a leaf factor node $f$ sends $\mu_{f \to w}(w) = f(w)$, a description of $f$ to its parent. These expressions for messages give an efficient way of calculating the marginal probability distribution. We note that in writing out the factorization in (\[factor1\]), it is essential that the graph has a tree structure so that the factors in the joint probability distribution $p({\bf w})$ can be partitioned into groups, each of which is associated with a single factor node in $ne(w)$. The algorithm is easily modified to calculate the marginal for every variable node in the graph [@Bishop]. This modification results in only twice as many calculations as calculating a single marginal. 
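On a tree these message updates reproduce the exact marginals. A minimal sketch (a toy example of our own) on the chain $w_1 - f_a - w_2 - f_b - w_3$ with binary variables compares the message-based marginal of the root $w_2$ with brute-force enumeration:

```python
import itertools

# Positive factor tables for f_a(w1, w2) and f_b(w2, w3); values arbitrary.
f_a = {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.6}
f_b = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.8}

# Leaf variables send the identity message 1, so the factor-to-root messages
# are mu_{f_a -> w2}(w2) = sum_{w1} f_a(w1, w2), and similarly for f_b.
mu_a = {w2: sum(f_a[(w1, w2)] for w1 in (0, 1)) for w2 in (0, 1)}
mu_b = {w2: sum(f_b[(w2, w3)] for w3 in (0, 1)) for w2 in (0, 1)}
p_msg = {w2: mu_a[w2] * mu_b[w2] for w2 in (0, 1)}
Z = sum(p_msg.values())
p_msg = {w2: v / Z for w2, v in p_msg.items()}

# Brute-force marginal of w2 for comparison.
p_bf = {0: 0.0, 1: 0.0}
for w1, w2, w3 in itertools.product((0, 1), repeat=3):
    p_bf[w2] += f_a[(w1, w2)] * f_b[(w2, w3)]
Zbf = sum(p_bf.values())
p_bf = {w2: v / Zbf for w2, v in p_bf.items()}
```

The two marginals agree exactly, since the chain is a tree.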
A more interesting case is when there are observed variables in the graph, ${\bf v}$. In this case the sum-product algorithm could be used to calculate posterior marginals $p(w_i | {\bf v} = \hat{\bf v})$. Graphical Representation of Low Density Frames ---------------------------------------------- The main connection between the $\ell_0$ minimization problem and coding theory involves the description of the underlying code [@AkTar], ${\cal V}$ of ${\bf F}$, where $${\cal V} = \{{\bf d} \in {\mathbb R}^M: {\bf Fd = 0}\}.$$ One can view ${\cal V}$ as the set of vectors whose product with each row of ${\bf F}$ “checks” to $0$. In the works of Tanner, it was noted that this relationship between the checks and the codewords of a code can be represented by a bipartite graph [@Tanner]. This bipartite graph consists of two disjoint sets of vertices, $V$ and $C$, where $V$ contains the variable nodes and $C$ contains the factor nodes representing checks that codewords need to satisfy. Thus we have $|V| = M$ and $|C| = N$. Furthermore node $j$ in $V$ will be connected to node $i$ in $C$ if and only if the $(i,j)^{\textrm{th}}$ element of ${\bf F}$ is non-zero. Thus the number of edges of the graph is equal to the number of non-zero elements in the measurement matrix ${\bf F}$. For an LDF, this leads to a sparse bipartite graph. ![A frame ${\bf F}$ and its graphical representation.[]{data-label="basic_ldpc"}](basic_ldpc_pn.PNG){width="3.2in"} A simple example of this graphical representation is depicted in Figure \[basic\_ldpc\]. For representation of LDFs, it is convenient to use a factor node, depicted by a $\boxplus$, called a parity check node. This node has the property that the variable nodes connected to it should sum to zero. We also note that for the purposes of decoding, it is more convenient to use syndromes [@Mackay-book] that represent the measurement vector, ${\bf r}$. 
This is done by connecting a variable node representing the $j^{\textrm{th}}$ component of ${\bf r}$ to the $j^{\textrm{th}}$ check node. In this case, the parity check node has the property that the variable nodes connected to it sum to $r_j$. Thus the graph now represents the set $\{{\bf d} \in {\mathbb R}^M: {\bf Fd = r}\}$, which is a coset of the underlying code of the frame. It is important to note that the graph representing an LDF will have cycles. Without the tree structure, the sum-product algorithm will only be an approximate inference algorithm. However, it has been empirically shown that for sparse graphs this approximate algorithm works very well [@Mackay; @Urbanke; @Wiberg]. Sum Product Algorithm with Expectation Maximization {#sec:suprem} =================================================== It is well known in the coding theory literature that the standard decoding algorithm for codes on graphs is the sum-product algorithm (SPA) [@Gallager; @FactorGraphs; @Mackay]. Given a set of observations, this algorithm can be used to approximate the posterior marginal distributions. In fact, when there is no noise, variants of this algorithm [@Sipser] have been successfully adapted to compressive sensing [@Baron1; @Xu]. However, when we are interested in the practical case of noisy observations, these algorithms can no longer be applied in a straightforward manner. Some authors have tried to circumvent this difficulty by using a two-point Gaussian mixture approach [@Baron2]; however, the number of Gaussian components in the mixtures can grow exponentially unless some approximation is applied, and using such approximations degrades the performance of the LDF approach.
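The blow-up is easy to see: at a check node, the convolution of a $K_1$- and a $K_2$-component Gaussian mixture has $K_1 K_2$ components, so repeatedly combining messages multiplies the component count. A small sketch (illustrative numbers of our own choosing, with a two-point "spike and slab" mixture):

```python
import itertools

# Gaussian mixtures as lists of (weight, mean, variance) triples.
def convolve(mix1, mix2):
    """Convolution of two Gaussian mixtures: one component per pair of
    components, with weights multiplied and means/variances added."""
    return [(w1 * w2, m1 + m2, v1 + v2)
            for (w1, m1, v1), (w2, m2, v2) in itertools.product(mix1, mix2)]

two_point = [(0.9, 0.0, 0.01), (0.1, 0.0, 1.0)]   # spike + slab
mix = two_point
for _ in range(4):          # combine messages from 4 more edges
    mix = convolve(mix, two_point)
```

After combining five two-component messages the result has $2^5 = 32$ components; pruning or merging components is what degrades performance.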
In this paper, we consider Gaussian Scale Mixture (GSM) priors with Jeffreys’ non-informative hyperprior to obtain an algorithm that provides estimates for the noisy compressive sensing problem $${\bf r= Fx + n},$$ as well as the noiseless problem. Throughout the paper we assume that $${\bf n} \sim {\cal N}(0, \sigma^2 {\bf I}_N).$$ However, simulation results (not included in this paper) indicate that the algorithms still work well even for non-Gaussian noise. We define the signal-to-noise ratio (SNR) as $$\textrm{SNR} = 10 \log_{10} \frac{||{\bf Fx}||^2}{{\mathbb E}_{\bf n}||{\bf n}||^2} = 10 \log_{10} \frac{||{\bf Fx}||^2}{\sigma^2 N}.$$ Gaussian Scale Mixtures {#sec:GSM} ----------------------- The main difficulty in using the sum-product algorithm (SPA) in the compressive sensing setting is that the variables of interest are continuous. Nonetheless, the SPA can be employed naturally when the underlying continuous random variables are Gaussian [@Weiss]. Since any Gaussian pdf ${\cal N}(x |a, A)$ can be determined by its mean $a$ and variance $A$, these constitute the messages in this setting. At the variable nodes, the product of Gaussian probability density functions (pdfs) will result in a (scaled) Gaussian pdf, and at the check nodes, the convolution of Gaussian pdfs will also result in a Gaussian pdf, i.e., $${\cal N}(x | a_1, A_1) \ast {\cal N}(x | a_2, A_2) \propto {\cal N}(x | a_1+a_2 , A_1+A_2),$$ and $${\cal N}(x | a_1, A_1) \cdot {\cal N}(x | a_2, A_2) \propto {\cal N}(x | b, B),$$ where $\propto$ denotes normalization up to a constant, and $$B = (A_1^{-1} + A_2^{-1})^{-1},$$ $$b = B (A_1^{-1} a_1 + A_2^{-1} a_2).$$ We note that all the underlying operations for SPA preserve the Gaussian structure. It is well-known that the Gaussian pdf is not “sparsity-enhancing”.
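Both closure rules can be checked numerically. The sketch below (our own check, pure Python) verifies that the pointwise product of two Gaussian pdfs is a constant multiple of ${\cal N}(x\,|\,b,B)$, and that their convolution, computed by quadrature, equals ${\cal N}(x\,|\,a_1+a_2,A_1+A_2)$:

```python
import math

def npdf(x, a, A):
    return math.exp(-(x - a) ** 2 / (2 * A)) / math.sqrt(2 * math.pi * A)

a1, A1, a2, A2 = 0.3, 2.0, -1.0, 0.5   # arbitrary means and variances

# Product rule: N(x|a1,A1) * N(x|a2,A2) = const * N(x|b,B).
B = 1.0 / (1.0 / A1 + 1.0 / A2)
b = B * (a1 / A1 + a2 / A2)
ratios = [npdf(x, a1, A1) * npdf(x, a2, A2) / npdf(x, b, B)
          for x in (-2.0, -0.5, 0.0, 1.0, 2.5)]   # same constant at every x

# Convolution rule, checked by trapezoidal quadrature over t.
def conv_at(x, step=1e-3, half_width=12.0):
    n = int(2 * half_width / step)
    total = 0.0
    for i in range(n + 1):
        t = a1 - half_width + i * step
        w = 0.5 if i in (0, n) else 1.0
        total += w * npdf(t, a1, A1) * npdf(x - t, a2, A2)
    return total * step

lhs = conv_at(0.7)
rhs = npdf(0.7, a1 + a2, A1 + A2)
```

The constant in the product rule is itself a Gaussian in the means, which is why Gaussian message passing stays closed-form.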
Because the Gaussian prior does not promote sparsity, some authors propose the use of the Laplacian prior $$\label{Laplacian} p({\bf x}) = \prod_i p_{x_i}(x_i) = \prod_i \frac{\lambda}{2} \exp(- \lambda |x_i|).$$ Clearly, with this prior and Gaussian noise ${\bf n}$, $$\label{laplace-lasso} p({\bf x}|{\bf y}) \propto p({\bf y}|{\bf x}) p({\bf x}) \propto \exp\big(- ||{\bf y - Ax}||_2^2 - \lambda' ||{\bf x}||_1\big),$$ and maximization of $p({\bf x}|{\bf y})$ is equivalent to minimizing $$||{\bf y - Ax}||_2^2 + \lambda' ||{\bf x}||_1,$$ which is the objective function for the LASSO algorithm [@Tropp2; @Wainwright2]. However, a straightforward implementation of this algorithm may not be computationally feasible. In this paper, we consider the family of Gaussian Scale Mixture (GSM) densities [@Andrews], given by $$x = \sqrt{\beta} u,$$ where $u$ is a zero-mean Gaussian and $\sqrt{\beta}$ is a positive scalar random variable. Hence $$p_{x | \beta} (x|\beta) = {\cal N}(x | 0, \beta),$$ and $$p_x(x) = \int_0^{\infty} p_{x | \beta} (x|\beta) p_{\beta} (\beta) d\beta.$$ This family of densities is symmetric, zero-mean, and heavier-tailed than a Gaussian; such densities have been successfully used in image processing [@Dias; @Nowak2; @Portilla] and learning theory [@Tipping]. In order to completely specify our model, we need to choose a pdf for $p_{\beta}(\beta)$. In this paper, we use $$p_{\beta}(\beta) \propto \sqrt{\det(I(\beta))}, \quad I(\beta) = {\mathbb E}\Bigg(-\frac{\partial^2 \log p_{x|\beta}(x|\beta)}{\partial \beta^2} \bigg| \beta \Bigg)$$ where $I(\beta)$ is the Fisher information. This is referred to as Jeffreys’ prior, which can be shown to be a scale-invariant prior suitable for sparse estimation [@Robert]. In our model, the prior is given by $$p_{\beta_i} (\beta_i) = \frac{1}{\beta_i},$$ which has no parameters to optimize. We note that this is an improper density, i.e. it cannot be normalized.
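With this hyperprior the induced marginal can be evaluated in closed form, $\int_0^\infty {\cal N}(x\,|\,0,\beta)\,\beta^{-1}\,d\beta = 1/|x|$ (a computation of ours). The sketch below confirms it by pure-Python quadrature after the substitution $\beta = e^t$:

```python
import math

def gsm_density(x, t_lo=-40.0, t_hi=40.0, step=1e-3):
    """p_x(x) = integral over beta of N(x|0,beta) / beta, via trapezoidal
    quadrature with beta = exp(t); the integrand (including dbeta = e^t dt)
    becomes exp(-t/2) * exp(-x^2 exp(-t) / 2) / sqrt(2 pi)."""
    n = int((t_hi - t_lo) / step)
    total = 0.0
    for i in range(n + 1):
        t = t_lo + i * step
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-0.5 * x * x * math.exp(-t)) * math.exp(-0.5 * t)
    return total * step / math.sqrt(2 * math.pi)

vals = {x: gsm_density(x) for x in (0.5, 1.0, 2.0)}   # ~ 2.0, 1.0, 0.5
```

The result is an improper, very heavy-tailed density, matching the behavior discussed in the text.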
Such improper priors are used frequently in Bayesian statistics, since only the relative weight of the prior determines the a-posteriori density [@Robert]. This density also has a singularity at the origin. This fact is usually ignored as long as it does not create computational problems [@Portilla]. As an alternative, one might set the prior to $0$ in a small interval $\beta \in [0,\beta_{min})$. We also note that with this choice for $p_{\beta_i} (\beta_i)$, $p_{x_i} (x_i) \propto 1 / |x_i|$, which is a very heavy-tailed density. ![Contour plots for a Gaussian distribution (left), a GSM with $\beta_1 = \beta_2$ distributed according to Jeffreys’ prior (middle), a GSM with $\beta_1$ and $\beta_2$ i.i.d. with Jeffreys’ prior (right).[]{data-label="indep_betas"}](contourspl2.png){width="3.5in"} To enhance sparsity in each coordinate, it is important to have independent $\beta_i$ for all $i$ [@Tipping2]. As depicted in the middle subplot of Figure \[indep\_betas\], compared to a Gaussian distribution, a GSM with $\beta_i$ distributed according to Jeffreys’ prior has a much sharper peak at the origin even when $\beta_1 = \beta_2$. However, the subplot on the right demonstrates that if the $\beta_i$s are indeed independent, the GSM will be highly concentrated not only around the origin, but along the coordinate axes as well, which is a desirable property if we have no further information about the locations of the sparse coefficients of ${\bf x}$. In our model, we will assume that $$p({\bf x}, {\bm \beta}) = \prod_{i=1}^M p(x_i | \beta_i) \prod_{i=1}^M p(\beta_i)$$ in order to enhance sparsity in all coordinates. This independence assumption is natural and commonly used in the literature [@Nowak2; @Tropp2; @Wainwright2]. Expectation Maximization ------------------------ The expectation maximization (EM) algorithm is a method for finding maximum-likelihood (ML) estimates of parameters in a model with observed and hidden variables [@Moon].
Let ${\bf y}$ be the observed data and let ${\bf z}$ be the hidden data. Let the probability density function of $({\bf y,z})$ be $f({\bf y,z}|{\bm \theta})$, parametrized by the vector ${\bm \theta}$. The EM algorithm iteratively improves on an initial estimate ${\bm \theta}^{(0)}$ using a two-step procedure. In the expectation step (E-step), we calculate $$Q({\bm \theta} | {\bm \theta}^{(k)}) = {\mathbb E}_{\bf z} \Big(\log f({\bf y,z}|{\bm \theta}) | {\bf y}, {\bm \theta}^{(k)} \Big)$$ given an estimate ${\bm \theta}^{(k)}$ from the previous iteration. It is important to note that the two arguments of the $Q$ function play different roles. The second argument is the conditioning argument for the expectation and is fixed during the E-step. In the second step, called the maximization step or M-step, a new estimate $${\bm \theta}^{(k+1)} = \arg \max_{\bm \theta} Q({\bm \theta} | {\bm \theta}^{(k)})$$ is calculated. It can be shown that the estimates monotonically increase the likelihood with respect to the observed data ${\bf y}$ [@Moon], $$f({\bf y} | {\bm \theta}^{(k+1)}) \geq f({\bf y} | {\bm \theta}^{(k)}).$$ When ${\bm \theta}$ is itself a random variable, the M-step maximizes $\big(Q({\bm \theta} | {\bm \theta}^{(k)}) + \log f({\bm \theta})\big)$, and the EM algorithm can be used to find a maximum a-posteriori (MAP) estimate of ${\bm \theta}$ [@Krishnan]. SuPrEM Algorithm I ------------------ ![The factor graph for a (3,6)-regular LDF with the appropriate hyperpriors.[]{data-label="suprem_FG"}](factorgraph_suprem3_pn.PNG){width="3.5in"} The factor graph for decoding purposes is depicted in Figure \[suprem\_FG\]. Here, ${\bf r}$ is the vector of observed variables, ${\bf x}$ is the vector of hidden variables and ${\bm \beta}$ is the vector of parameters. We next propose the *Sum Product with Expectation Maximization (SuPrEM)* Algorithm I.
At every iteration $t$, this algorithm uses a combination of the Sum-Product Algorithm (SPA) and EM algorithm to generate estimates for the hyperpriors $\{\beta_k^{(t)}\}$, as well as a point estimate $\{\hat{x}_k^{(t)} \}$. In the EM stage of the algorithm, $Q({\bm \beta} | {\bm \beta}^{(t)})$ for the **E-step** is given by $$\begin{aligned} Q({\bm \beta} &| {\bm \beta}^{(t)}) = {\mathbb E}_{\bf x} \bigg( \log p({\bf x, y}, {\bm \beta}) \big| {\bf y}, {\bm \beta}^{(t)} \bigg) \nonumber\\ &= {\mathbb E}_{\bf x} \bigg( \log \big( p({\bf y}|{\bf x}) p({\bf x}|{\bm \beta}) p({\bm \beta}) \big) \big| {\bf y}, {\bm \beta}^{(t)} \bigg) \nonumber\\ &= {\mathbb E}_{\bf x} \bigg( \log p({\bf y}|{\bf x})\big| {\bf y}, {\bm \beta}^{(t)} \bigg) + {\mathbb E}_{\bf x} \bigg( \log p({\bf x}, {\bm \beta}) \big| {\bf y}, {\bm \beta}^{(t)} \bigg) \nonumber\\ &= C_1 + \sum_{i=1}^M {\mathbb E}_{\bf x} \bigg( \log p(x_i ,\beta_i) \big| {\bf y}, {\bm \beta}^{(t)} \bigg) \nonumber\\ &= C_1 + \sum_{i=1}^M {\mathbb E}_{x_i} \bigg( \log p(x_i ,\beta_i) \big| {\bf y}, {\bm \beta}^{(t)} \bigg),\end{aligned}$$ where $C_1 = {\mathbb E}_{\bf x} \big( \log p({\bf y}|{\bf x})\big| {\bf y}, {\bm \beta}^{(t)} \big)$ is a term independent of ${\bm \beta}$. Let $Q(\beta_i|{\bm \beta}^{(t)}) = {\mathbb E}_{x_i} \big( \log p(x_i ,\beta_i)| {\bf y}, {\bm \beta}^{(t)} \big)$. We have $$Q({\bm \beta} | {\bm \beta}^{(t)}) = C_1 + \sum_{i=1}^M Q(\beta_i|{\bm \beta}^{(t)})$$ Since in our setting, the underlying variables are Gaussian, the density $p(x_i|{\bf y}, {\bm \beta}^{(t)})$ produced by the SPA is also Gaussian, with mean $\mu_i$ and variance $\nu_i$. 
One can explicitly write out $Q(\beta_i|{\bm \beta}^{(t)})$ as $$\begin{aligned} Q(\beta_i|{\bm \beta}^{(t)}) &= {\mathbb E}_{x_i}\bigg(\log p(x_i, \beta_i) \big| {\bf y}, {\bm \beta}^{(t)} \bigg) \nonumber \\ &= {\mathbb E}_{x_i}\Big(\log \big(\frac{1}{\sqrt{2\pi \beta_i}} \exp(-\frac{x_i^2}{2\beta_i}) \: \frac{1}{\beta_i}\big) \big| {\bf y}, {\bm \beta}^{(t)}\Big) \nonumber \\ &= C_2 - \frac{3}{2} \log \beta_i - \frac{1}{2\beta_i} {\mathbb E}_{x_i}(x_i^2 | {\bf y},{\bm \beta}^{(t)}) \nonumber \\ &= C_2 - \frac{3}{2} \log \beta_i - \frac{1}{2\beta_i} (\mu_i^2 + \nu_i),\end{aligned}$$ where $C_2$ is independent of $\beta_i$. For the **M-step**, we find $${\bm \beta}^{(t+1)} = \arg \max_{{\bm \beta}} Q({\bm \beta} | {\bm \beta}^{(t)}).$$ Clearly $Q({\bm \beta} | {\bm \beta}^{(t)})$ can be maximized by maximizing each $Q(\beta_i|{\bm \beta}^{(t)})$ separately. Hence we have the simple local update rule $$\beta_i^{(t+1)} = \arg \max_{\beta_i} Q(\beta_i|{\bm \beta}^{(t)}) = \frac{\mu_i^2 + \nu_i}{3}.$$ We summarize SuPrEM I in Algorithm \[alg:suprem\]. The inputs to the algorithm contain a stopping criterion ${\cal T}$ and a message-passing schedule ${\cal S}$. The stopping criterion does not really affect the behavior of the algorithm, and there are a few alternatives for a reasonable criterion, which are discussed in Section \[sec:sims\]. It turns out that the message-passing schedule is rather important for achieving maximum performance. To this end, we develop a message-passing schedule that attains good performance, and we describe it in detail in Appendix I. For all our simulations, we use this fixed schedule. Simulation results indicate that with this fixed schedule, the algorithm is robust in various different scenarios. The overall complexity of SuPrEM is $O(M)$ for a fixed number of iterations.
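As a sanity check on the update rule: dropping the constant $C_2$, $Q(\beta_i|{\bm\beta}^{(t)})$ is maximized at $(\mu_i^2+\nu_i)/3$, which a grid search confirms (the values of $\mu$ and $\nu$ below are arbitrary):

```python
import math

mu, nu = 1.2, 0.4
s = mu * mu + nu

def Q(beta):
    # Q(beta) up to the additive constant C_2
    return -1.5 * math.log(beta) - s / (2.0 * beta)

beta_star = s / 3.0                                   # closed-form M-step
beta_grid = max((i * 1e-4 for i in range(1, 100_000)), key=Q)
```

Setting $dQ/d\beta = -\tfrac{3}{2\beta} + \tfrac{\mu^2+\nu}{2\beta^2} = 0$ gives the same $\beta = (\mu^2+\nu)/3$ analytically.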
We also note that in the presence of noise, the output of the algorithm will not be exactly sparse and a sparse estimate can be constructed using soft-thresholding techniques such as those described in [@Nowak2]. = ***Inputs***: The observed vector ${\bf r}$, the measurement matrix ${\bf F}$,\ the noise level $\sigma^2$, a stopping criterion ${\cal T}$, and a message-\ passing schedule ${\cal S}$.\ ***1. Initialization:*** Let ${\beta}_k^{(0)} = |({\bf F}^T {\bf r})_k|^2/d_v^2$. Initial outgoing\ messages from variable node $x_k$ is $(0, {\beta}_k^{(0)})$.\ ***2. Check Nodes:*** For $i=1,2,\dots,N$\ $\quad$ Let $\{i_1, i_2, \dots, i_{d_c}\}$ be the indices of the variable nodes\ connected to the $i^{\textrm{th}}$ check node $r_i$. Let the message coming\ from variable node $x_{i_j}$ to the check node $r_i$ at $t^{\textrm{th}}$ iteration\ be $(\mu_{i_j}^{(t)}, \nu^{(t)}_{i_j})$ for $j = 1, \dots, d_c$. Then the outgoing message\ from check node $r_i$ to variable node $x_{i_j}$ is\ $\quad \quad \quad (r_i - \sum_{k = 1, k \neq j}^{d_c} \mu_{i_k}^{(t)}, \sum_{k = 1, k \neq j}^{d_c} \nu_{i_k}^{(t)} + \sigma^2)$.\ $\quad$ The messages are sent according to the schedule ${\cal S}$.\ ***3. Variable Nodes:*** For $k=1,2,\dots, M$\ $\quad$ Let $\{k_1, k_2, \dots, k_{d_v}\}$ be the indices of the check nodes\ connected to the $k^{\textrm{th}}$ variable node $x_k$. 
Let the incoming\ message from the check node $r_{k_j}$ to the variable node $x_k$ at\ the $t^{\textrm{th}}$ iteration be $(\mu_{k_j}^{(t)}, \nu_{k_j}^{(t)})$ for $j = 1, \dots, d_v$.\ $\quad$ **a.** *EM update:* Let\ $V_k^{(t)} = \bigg(\sum_{j=1}^{d_v} \frac{1}{\nu_{k_j}^{(t)}} + \frac{1}{\beta_k^{(t-1)}} \bigg)^{-1}$, $\mu_k^{(t)} = V_k^{(t)} \bigg(\sum_{j=1}^{d_v} \frac{\mu_{k_j}^{(t)}}{\nu_{k_j}^{(t)}} \bigg).$\ $\quad$ Then the EM update is $\beta_k^{(t)} = \frac{(\mu_k^{(t)})^2 + V_k^{(t)}}{3}$.\ $\quad$ **b.** *Message updates:* The outgoing message from variable\ node $x_k$ to check node $r_{k_i}$ at the $(t+1)^{\textrm{th}}$ iteration is given\ by $(\mu_{k_i}^{(t+1)},\nu_{k_i}^{(t+1)}),$ where\ $\quad \quad \quad \quad \:\:\nu_{k_i}^{(t+1)} = \bigg(\sum_{j=1, j\neq i}^{d_v} \frac{1}{\nu_{k_j}^{(t)}} + \frac{1}{\beta_k^{(t)}} \bigg)^{-1}$\ and\ $\quad \quad \quad \quad \mu_{k_i}^{(t+1)} = \nu_{k_i}^{(t+1)} \bigg(\sum_{j=1, j \neq i}^{d_v} \frac{\mu_{k_j}^{(t)}}{\nu_{k_j}^{(t)}} \bigg).$\ $\quad$ The messages are sent according to the schedule ${\cal S}$.\ ***4. Iterations:*** Repeat (2) and (3) until stopping criterion ${\cal T}$ is\ reached.\ ***5. Decisions:*** For the $k^{\textrm{th}}$ variable node $x_k$, let the incoming\ messages be $(\mu_{k_j}^{({\cal T})}, \nu_{k_j}^{({\cal T})})$ for $j = 1, \dots, d_v$. Let\ $\quad \quad \quad \quad \quad \hat{V}_k = \bigg(\sum_{j=1}^{d_v} \frac{1}{\nu_{k_j}^{({\cal T})}} + \frac{1}{\beta_k^{({\cal T})}} \bigg)^{-1}$\ and\ $\quad \quad \quad \quad \quad \hat{x}_k = \hat{V}_k \bigg(\sum_{j=1}^{d_v} \frac{\mu_{k_j}^{({\cal T})}}{\nu_{k_j}^{({\cal T})}} \bigg)$.\ ***Output:*** The estimate is $\hat{{\bf x}} = (\hat{x}_1, \hat{x}_2, \dots, \hat{x}_M)^T$. SuPrEM Algorithm II ------------------- When the ratio $L/N$ is relatively large, SuPrEM I does not perform well, in particular for high SNRs, since it does not enforce strict sparsity.
Thus we propose SuPrEM Algorithm II, which enforces sparsity at various stages of the algorithm and sends messages between the nodes of the underlying graph accordingly. To this end, we keep a set of candidate variable nodes ${\cal O}$ that are likely to have non-zero values, and modify the messages from the variable nodes that do not belong to a specified subset of ${\cal O}$ denoted by ${\cal O}_1$. Similar ideas have been used in developing state-of-the-art recovery algorithms for compressive sensing, such as Subspace Pursuit [@Dai] and CoSaMP [@Needell3]. The full description is given in Algorithm \[alg:suprem2\]. = ***Inputs***: The observed vector ${\bf r}$, the measurement matrix ${\bf F}$,\ the sparsity level $L$, a stopping criterion ${\cal T}$, the noise level\ $\sigma^2$ (optional), and a message-passing schedule ${\cal S}$.\ ***1. Initialization:*** Let ${\beta}_k^{(0)} = |({\bf F}^T {\bf r})_k|^2/d_v^2$ and let ${\cal O}_1 = \emptyset$.\ The initial outgoing message from variable node $x_k$ is $(0, {\beta}_k^{(0)})$.\ ***2. Check Nodes:*** Same as in Algorithm I.\ ***3. Variable Nodes:*** Same as in Algorithm I.\ ***4. Sparsification:***\ **a.** After the $\beta_k$s have been updated, find the indices of the $L$\ largest $\beta_k$s. Let these indices be ${\cal O}_2$.\ **b.** Merge ${\cal O}_1$ and ${\cal O}_2$, i.e., let ${\cal O} = {\cal O}_1 \cup {\cal O}_2$.\ **c.** For all indices $k \in {\cal O}$ make a decision on $\hat{x}_k$ (as in Step\ 5 of Algorithm I). For all indices $k \notin {\cal O}$, let $\hat{x}_k = 0$.\ **d.** Identify the indices corresponding to the $L$ largest (in\ absolute value) coefficients of $\hat{\bf x}$. Update ${\cal O}_1$ to be this set of\ $L$ indices.\ **e.** The variable vertices $k \in {\cal O}_1$ send out their messages as\ decided in Step 3 of Algorithm I. The variable vertices\ $k \notin {\cal O}_1$ send out their messages with $0$ mean and the\ variance that was decided in Step 3 of Algorithm I.\ ***5.
Decisions:*** Make decisions only on the vertices in ${\cal O}$. Once\ these are calculated, keep the $L$ indices with the largest\ $|\hat{x}_k|, k \in {\cal O}$. Set all other indices to 0.\ ***6. Iterations:*** Repeat (2), (3), (4) and (5) until stopping\ criterion ${\cal T}$ is reached.\ ***Output:*** The estimate is $\hat{{\bf x}} = (\hat{x}_1, \hat{x}_2, \dots, \hat{x}_M)^T$. The main modification to SuPrEM I is the addition of a sparsification step. Intuitively, $\beta_k^{(t)}$ measures the reliability of the hypothesis $\hat{x}_k^{(t)} \neq 0$. Throughout the algorithm we maintain a list of variable nodes ${\cal O}_1$ that correspond to the largest $L$ coefficients of $\hat{\bf x}^{(t)}$ at iteration $t$. We also keep a list of variable nodes ${\cal O}_2$ corresponding to the $L$ largest elements of ${\bm \beta}^{(t)}$, i.e., those with the largest reliabilities of the hypothesis $\hat{x}_k^{(t)} \neq 0$. In the sparsification stage, these two sets are merged, ${\cal O} = {\cal O}_1 \cup {\cal O}_2$. The addition and deletion of elements from ${\cal O}$ allow refinements to be made with each iteration. We note that $L \leq |{\cal O}| \leq 2L$ at any given iteration. Decisions are made on the elements of ${\cal O}$, and ${\cal O}_1$ is updated. Finally, for variable nodes not in ${\cal O}_1$, the mean value of the messages is forced to be 0, but the variance (i.e., the uncertainty about the estimate itself) is kept. By modifying the messages this way, we enforce sparsity not only at the final stage, but also throughout the algorithm. We note that the noise level $\sigma^2$ is an optional input to the algorithm. Our simulations indicate that the algorithm also works without this knowledge. However, if this extra statistical information is available, it is easily incorporated into the algorithm in a natural way and results in a performance increase. SuPrEM II has complexity $O(M)$.
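One pass of the sparsification step can be sketched as follows; the `np.argpartition` call performs the required selection of the $L$ largest entries without a full sort, and all names are illustrative:

```python
import numpy as np

def sparsify(beta, x_hat, O1, L):
    """One sparsification pass (Step 4): merge the L most reliable
    coefficients (largest beta) with the previous support estimate O1,
    zero everything outside the merged set, then keep the L largest |x|."""
    # Linear-time selection of the L largest beta values (no full sort).
    O2 = set(np.argpartition(beta, -L)[-L:])
    O = O1 | O2                          # L <= |O| <= 2L
    x_sparse = np.zeros_like(x_hat)
    idx = np.fromiter(O, dtype=int)
    x_sparse[idx] = x_hat[idx]
    # New support estimate: the L largest coefficients in absolute value.
    O1_new = set(np.argpartition(np.abs(x_sparse), -L)[-L:])
    return x_sparse, O1_new
```

For instance, with `beta = [0.1, 5.0, 0.2, 4.0, 0.3]`, `x_hat = [1.0, 0.5, 2.0, 0.1, 0.05]`, previous support `{2}`, and `L = 2`, the merged set is `{1, 2, 3}` and the new support becomes `{1, 2}`.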
The only significant operation different from those in SuPrEM I is the determination of the largest $L$ elements of ${\bm \beta}$ and $\hat{\bf x}$. This can be done with $O(M)$ complexity, as described in [@whitebook] (Chapter 9). A more straightforward implementation of this stage might sort the relevant coefficients, which would result in a higher complexity of $O(M\log M)$ for the overall algorithm. Reweighted Algorithms {#reweigh} --------------------- For high $L/N$ ratios, simulation results show that SuPrEM I and SuPrEM II still perform well. However, more iterations are needed to achieve very low distortion levels, which may be undesirable. Thus we propose a modification to SuPrEM I and SuPrEM II that uses estimates generated within the first few iterations to speed up convergence. In compressive sensing, employing prior estimates to improve the final solution has been used for $\ell_1$ approximation [@Candes-Wakin], but this increases the running time by a factor equal to the number of reweighting steps. Next, we motivate our reweighting approach. In our algorithms, the initial choice of $\{ \beta_k^{(0)} = |({\bf F}^T {\bf r})_k|^2/d_v^2 \}$ is based on the intuition that $\beta_k$ must be proportional to $|x_k|^2$. By providing a better initial estimate for $\{\beta_k^{(0)}\}$, the rate of convergence may be improved. The algorithm is initiated with ${\bm \beta}^{(0)}$ as above and is run for $T_{r_1}$ iterations. At the end of this stage, we re-initialize ${\bm \beta}^{(0)'}$ to be $$\beta_k^{(0)'} = \big|\hat{x}_k^{(T_{r_1})}\big|^2 + \big|({\bf F}^T ({\bf r - F\hat{x}}^{(T_{r_1})}))_k\big|^2/d_v^2,$$ and the algorithm is run for $T_{r_2}$ iterations. This process is repeated until convergence, or at most ${\cal R}$ times. We note that $\sum_{k=1}^{\cal R} T_{r_k} = T$, where $T$ is the original number of fixed iterations. Thus the total number of iterations remains unchanged when we use reweighting.
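The re-initialization rule above can be sketched directly. Here ${\bf F}$ is a dense NumPy array for clarity, although an LDF would be stored and applied as a sparse matrix in practice:

```python
import numpy as np

def reweighted_beta(F, r, x_hat, d_v):
    """Re-initialize the prior variances from the current estimate:
    beta_k = |x_hat_k|^2 + |(F^T (r - F x_hat))_k|^2 / d_v^2."""
    residual_backproj = F.T @ (r - F @ x_hat)
    return np.abs(x_hat) ** 2 + np.abs(residual_backproj) ** 2 / d_v ** 2
```

The first term anchors $\beta_k$ to the current magnitude estimate, while the second term reintroduces energy from the back-projected residual so that missed coefficients can re-enter the support.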
Simulation Details {#sec:sims} ================== Simulation Setup ---------------- In our simulations we used LDFs with parameters $(3,6)$, $(3,12)$ and $(3,24)$ for $M/N = 2, 4, 8$ and $M = 10000$. We constructed these frames using the progressive edge growth algorithm [@peg], avoiding cycles of length 4 when possible [^3]. Simulations will be presented for SNR$= 12,24,36$ dB, as well as the noiseless case. For various choices of $L$ and SNR, we ran 1000 Monte-Carlo simulations per setting, where ${\bf x}$ is generated as a signal with $L$ non-zero elements drawn from a Gaussian distribution. The support of ${\bf x}$ is picked uniformly at random. Once ${\bf x}$ is generated, it is normalized such that $||{\bf Fx}||_2 = \sqrt{N}$. Thus SNR$=10 \log_{10}\frac{1}{\sigma^2}$. Let ${\cal G}$ be the genie decoder that has full information about supp$({\bf x}) = \{i: x_i \neq 0\}$. Let the output of this decoder be $\hat{\bf x}_{genie} = {\cal G}({\bf r})$, obtained by solving the least squares problem involving ${\bf r}$ and the matrix formed by the columns of ${\bf F}$ specified by supp$({\bf x})$. We define the following genie distortion measure: $$\bar{d}_g ({\bf x},\hat{\bf x}_{genie}) = \frac{||{\bf x} - \hat{\bf x}_{genie}||_2^2}{||{\bf x}||_2^2}.$$ This distortion measure is invariant to the scaling of ${\bf x}$ for a fixed SNR. For any other recovery algorithm that outputs an estimate $\hat{\bf x}$, we let $$\bar{d}_e ({\bf x},\hat{\bf x}_{e}) = \frac{||{\bf x} - \hat{\bf x}_{e}||_2^2}{||{\bf x}||_2^2},$$ where the subscript $e$ denotes the estimation procedure. We will be interested in the performance of an estimation procedure with respect to the genie decoder.
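As a reference point, the genie decoder and its distortion measure can be sketched as follows (NumPy, illustrative names):

```python
import numpy as np

def genie_decode(F, r, support):
    """Least-squares estimate restricted to the known support of x."""
    x_hat = np.zeros(F.shape[1])
    cols = np.asarray(sorted(support))
    # Solve min ||r - F[:, cols] z||_2 and embed z at the support positions.
    z, *_ = np.linalg.lstsq(F[:, cols], r, rcond=None)
    x_hat[cols] = z
    return x_hat

def genie_distortion(x, x_hat):
    """Normalized squared error, invariant to the scaling of x."""
    return np.linalg.norm(x - x_hat) ** 2 / np.linalg.norm(x) ** 2
```

In the noiseless case the restricted least-squares problem recovers ${\bf x}$ exactly, so the genie distortion is zero; with noise, it gives the oracle baseline against which ${\cal D}_{e/g}$ is measured.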
To this end, we define $${\cal D}_{e / g} ({\bf x},\hat{\bf x}_{e}, \hat{\bf x}_{genie}) = \frac{||{\bf x} - \hat{\bf x}_{e}||_2^2}{||{\bf x} - \hat{\bf x}_{genie}||_2^2} = \frac{\bar{d}_e({\bf x},\hat{\bf x}_{e})}{\bar{d}_g({\bf x},\hat{\bf x}_{genie})}.$$ We will be interested in this quantity averaged over $K$ Monte-Carlo simulations, and converted to dB. The closer this quantity is to 0 dB, the closer the performance of the estimation procedure is to that of the genie decoder. In other cases, such as the noiseless case, we will be interested in the empirical probability of recovery. For $K$ Monte-Carlo simulations, this is given by $$P_{rec} = \frac1K \sum_{k=1}^K {\mathbb I}({\bf x} \sim \hat{\bf x}_e),$$ where ${\mathbb I}(\cdot)$ is the indicator function for $(\cdot)$ (1 if $(\cdot)$ is true, 0 otherwise). We define the relation ${\bf x} \sim \hat{\bf x}_e$ to be true only if supp$({\bf x}) = $ supp$(\hat{\bf x}_e)$, unless otherwise specified. A number of different stopping criteria can be used for ${\cal T}$: 1) $\hat{{\bf x}}$ converges, 2) the minimum value of $\{||{\bf r - F\hat{x}}^{(t)}||_2\}_t$ does not change for $T^d$ iterations, 3) a fixed number of iterations $T$ is reached. In our simulations we use criterion 2 with $T^d = 30$ and $T = 500$. These values were chosen to ensure that the algorithms did not stop prematurely. The message-passing schedule ${\cal S}$ is described in detail in Appendix \[schedules\]. Finally, for the reweighted algorithm we use 10 reweightings with $T_{r_1} = \dots = T_{r_{10}} = T/10$. Simulation Results ------------------ Simulation results are presented in Figure \[gaussian\_results\] for exactly sparse signals. ![image](gaussian7.PNG){width="7.2in"} For comparison to our algorithms, we include results for CoSaMP [@Needell3] and $\ell_1$-based methods [@Candes-Romberg2; @Candes-Romberg-Tao; @Candes-Tao; @Donoho2; @gpsr].
For these algorithms we used partial Fourier matrices as measurement matrices. The choice of these matrices is based on their small storage requirements (in comparison to Gaussian matrices), while still satisfying the restricted isometry property. For CoSaMP, we used 100 iterations of the algorithm (and 150 iterations of Richardson’s iteration for calculating least squares solutions). For $\ell_1$-based methods, we used the [L1MAGIC]{} package in the noiseless case. In the noisy case, we used both [L1MAGIC]{} and the [GPSR]{} package (with Barzilai-Borwein Gradient Projection with continuation and debiasing). Since these two methods perform approximately the same, we include the results for [GPSR]{} here. In the implementation of GPSR we fine-tune the value of $\tau$ and observe that $\tau = 0.001 ||{\bf F}^T{\bf r}||_{\infty}$ gives the best performance. Since the outputs of $\ell_1$-based methods and SuPrEM I are not sparse, we threshold $\hat{\bf x}$ to its $L$ largest coefficients and postulate these are the locations of the sparse coefficients. For all methods, we solve the least squares problem involving ${\bf r}$ and the matrix formed by the columns of ${\bf F}$ specified by the final estimate for the locations of the sparse coefficients. For partial Fourier matrices we use Richardson’s iteration to calculate this vector, whereas for LDFs we use the LSQR algorithm, which also has $O(M)$ complexity [@lsqr]. Discussion of The Results ------------------------- The simulation results indicate that the SuPrEM algorithms outperform the other state-of-the-art algorithms. In the low SNR regime (SNR = 12 dB), the SuPrEM algorithms and the $\ell_1$ methods have similar performance. In the moderate and high SNR regimes, the SuPrEM algorithms significantly outperform the other algorithms, both in terms of distortion and in terms of the maximum sparsity at which they can operate.
Furthermore, for different values of $N$, the maximum sparsity scales as $L = O(N/ \log(M/N))$, which is the same scaling as that of the other methods. As discussed previously, the performance of SuPrEM I degrades as the sparsity and SNR increase. We also observe that the reweighted SuPrEM II algorithm outperforms the regular SuPrEM II algorithm, even though the maximum number of iterations is the same. Finally, for the noiseless problem, the SuPrEM algorithms can recover signals with a higher number of non-zero elements than the other methods. In this case, the reweighted algorithm performs the best, and converges faster. We also note that the results presented for CoSaMP and $\ell_1$-based methods in the noiseless case are optimistic, since we declare success in recovery if $\bar{d}_e ({\bf x},\hat{\bf x}_{e}) <10^{-6}$. We needed to introduce this measure because these algorithms tend to miss a small portion of the support of ${\bf x}$ containing elements of small magnitude. We also note that for both partial Fourier matrices and LDFs, the quantity $\bar{d}_g ({\bf x},\hat{\bf x}_{genie})$ is almost the same for a fixed $L$ and SNR. This means that ${\cal D}_{e / g} ({\bf x},\hat{\bf x}_{e}, \hat{\bf x}_{genie})$ provides an objective performance criterion in terms of relative mean-square error with respect to the genie bound, as well as in terms of the absolute distortion error $\bar{d}_e ({\bf x},\hat{\bf x}_{e})$. Simulation Results for Natural Images ------------------------------------- To test compressible signals, we used real-world signals rather than artificially generated ones. In particular, we compressively sensed the [db2]{} wavelet coefficients of the $256 \times 256$ (raw) peppers image using $N = 17000$ measurements. We then used various recovery algorithms to recover the wavelet coefficients, and applied the inverse wavelet transform to recover the original image.
![image](comparison_new2.PNG){width="7.2in"} For the SuPrEM algorithms, we used a $(3, 12)$ LDF with $M = 68000$ (the wavelet coefficient vector was padded with zeros to match the dimension). We set $L = 8000$ (the maximum sparsity the algorithm converged at) for SuPrEM II. We first ran the algorithm with $\sigma = 0$. We also accounted for noise, estimated the per-measurement noise to be $\sigma = 0.1\frac{||{\bf r}||_2}{\sqrt{N}}$, and ran the algorithm again[^4]. We ran our algorithms for just 50 iterations. For the reweighted SuPrEM II algorithm, we let $\sigma = 0$ and reweighted every 5 steps of the algorithm, for a total of 10 reweightings. For SMP, we used the [SMP]{} package [@Berinde3]. We used a matrix generated by this package, and $L = 8000$. For the remaining methods, we used partial Fourier matrices whose rows were chosen randomly. For $\ell_1$ with equality constraints, we used the [L1MAGIC]{} package. For LASSO, we used the [GPSR]{} package with $\tau = 0.001 ||{\bf F}^T{\bf r}||_{\infty}$, as described previously, and we thresholded the output to $L=8000$ sparse coefficients and solved the appropriate least squares problem to obtain the final estimate. For CoSaMP and Subspace Pursuit, we used 100 iterations of the algorithm (and 150 iterations of Richardson’s iteration for calculating the least squares solutions). For these algorithms, we used $L = 3000$ for CoSaMP and $L = 3500$ for Subspace Pursuit. These are slightly lower than the maximum sparsities they converged at ($L = 3500$ and $L=4000$, respectively), but the values we used resulted in better visual quality and PSNR values. The results are depicted in Figure \[pepper\_comparison\]. The PSNR values for the methods are as follows: 23.41 dB for SuPrEM II, 23.83 dB for SuPrEM II (with non-zero $\sigma^2$), 24.79 dB for SuPrEM II (reweighted), 20.18 dB for CoSaMP, 19.51 dB for SMP, 21.62 dB for $\ell_1$, 23.61 dB for LASSO, and 21.27 dB for Subspace Pursuit.
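The PSNR values above follow the standard definition; a minimal sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((np.asarray(original, float)
                   - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform pixel error of 16 gray levels gives roughly 24 dB, comparable to the best values in the table above.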
Among the algorithms that assume no knowledge of noise, SuPrEM II outperforms the others both in terms of PSNR and in terms of visual quality. The two algorithms that accommodate noise, SuPrEM II (in this case SuPrEM I also produces a similar output) and LASSO, have similar PSNR values. Finally, the reweighted SuPrEM II also assumes no knowledge of noise, and outperforms all other methods by about 1 dB as well as in terms of visual quality, without requiring more running time. Further Results --------------- We studied the effect of changing the degree distributions. For a given $M/N$ ratio, we need to keep the ratio $d_c/d_v$ fixed; however, the values themselves can be varied. Thus we compared the performance of $d_v = 3$ LDFs to $d_v = 5$ LDFs, and observed that the latter actually performed slightly better. However, a higher $d_v$ means more operations are required. We also observed that the number of iterations required for convergence was slightly higher. Thus we chose to use $d_v = 3$ LDFs, which allowed faster decoding. We also note that increasing $d_v$ too much (while keeping $M/N$ fixed) results in performance deterioration, since the graph becomes less sparse and we run into shorter cycles, which affect the performance of SPA. We also tested the performance of our constructions and algorithms at $M = 100000$. Interestingly, with $L/M$ and $N/M$ fixed, the performance improves as $M \to \infty$ for Gaussian sparse signals, for a fixed maximum number of 500 iterations. This is in line with intuitions drawn from Shannon theory [@AkTar2]. Another interesting observation is that the number of iterations remains unchanged in this setting. In general, we observed that the number of iterations required for convergence is only a function of $L/M$ and does not change with $M$. Conclusion {#sec:conc} ========== In this paper, we constructed an ensemble of measurement matrices with small storage requirements.
We denoted the members of this ensemble Low Density Frames (LDFs). For these frames, we provided sparse reconstruction algorithms that have $O(M)$ complexity and that are Bayesian in nature. We evaluated the performance of this ensemble of matrices and their decoding algorithms, and compared their performance to other state-of-the-art recovery algorithms and their associated measurement matrices. We observed that in various cases of interest, the SuPrEM algorithms with LDFs outperformed the other algorithms with partial Fourier matrices. In particular, for Gaussian sparse signals and Gaussian noise, we are within 2 dB of the theoretical lower bound in most cases. There are various interesting research problems in this area. One is to find a deterministic message-passing schedule that performs as well as (or better than) our probabilistic message-passing schedule and that is amenable to analysis. Another open problem is to analyze the performance of the iterative decoding algorithms for the LDFs theoretically, which may in turn lead to useful design tools (like Density Evolution [@Urbanke]) that might help with the construction of LDFs with irregular degree distributions. Adaptive measurements using the soft information available about the estimates, as well as online decoding (similar to Raptor Codes [@Raptor]), are another open research area. Finally, if further information is available about the statistical properties of a class of signals (such as block-sparse signals or images represented on wavelet trees as in [@modelbased]), the decoding algorithms may be changed accordingly to improve performance. Details On The Message-Passing Schedule {#schedules} ======================================= A message-passing schedule determines the order of messages passed between variable and check nodes of a factor graph. Traditionally, with LDPC codes, the so-called “flooding” schedule is used.
In this schedule, at each iteration, all the variable nodes pass messages to their neighboring check nodes. Subsequently, all the check nodes pass messages to their neighboring variable nodes. For a cycle-free graph, SPA with a flooding schedule correctly computes a-posteriori probabilities [@Bishop; @Banihashemi-bunch]. An alternative is the “serial” schedule, where we go through each variable node serially and compute the messages to the neighboring nodes. The order in which we go through the variable nodes could be lexicographic, random, or based on reliabilities. In this section, we propose the following schedule based on intuition derived from our simulations and on results from LDPC codes [@Banihashemi-prob; @Banihashemi-bunch]: For the first iteration, all the check nodes send messages to variable nodes and vice-versa in a flooding schedule. After this iteration, each check node is “on” or “off” with probability $\frac12$. If a check node is off, it marks the edges connected to itself as “inactive”, and sends back the messages it received to the variable nodes. If a check node is on, it marks the edges connected to itself as “active” and computes a new message. At the variable nodes, when calculating the new $\beta$, we only use the information coming from active edges. That is, for $k=1,2,\dots, M$, let $\{k_1, k_2, \dots, k_{d_v}\}$ be the indices of the check nodes connected to the $k^{\textrm{th}}$ variable node $x_k$. Let the incoming message from the check node $r_{k_j}$ to the variable node $x_k$ at the $t^{\textrm{th}}$ iteration be $(\mu_{k_j}^{(t)}, \nu_{k_j}^{(t)})$ for $j = 1, \dots, d_v$.
We will have $$\lambda_k^{(t)} = \bigg(\sum_{(k, k_j) \textrm{ is an active edge}} \frac{1}{\nu_{k_j}^{(t)}} + \frac{1}{\beta_k^{(t-1)}} \bigg)^{-1},$$ $$\mu_k^{(t)} = \lambda_k^{(t)} \Bigg(\sum_{(k, k_j) \textrm{ is an active edge}} \frac{\mu_{k_j}^{(t)}}{\nu_{k_j}^{(t)}} \Bigg),$$ and $$\beta_k^{(t)} = \frac{(\mu_k^{(t)})^2 + \lambda_k^{(t)}}{3}.$$ Thus when there is no active edge, we do not perform a $\beta$ update. For the special case when there is only one active edge $(k, k_j)$, we let $\mu_k^{(t)} = \mu_{k_j}^{(t)}$. This is because the intrinsic information is more valuable, and the estimate $\beta_k^{(t-1)}$ tends to be less reliable. When we calculate the point estimate, we use all the information at the node, including both the reliable and unreliable edges, i.e. $$\hat{V}_k^{(t)} = \bigg(\sum_{j=1}^{d_v} \frac{1}{\nu_{k_j}^{(t)}} + \frac{1}{\beta_k^{(t)}} \bigg)^{-1},$$ $$\hat{x}_k^{(t)} = \hat{V}_k^{(t)} \bigg(\sum_{j=1}^{d_v} \frac{\mu_{k_j}^{(t)}}{\nu_{k_j}^{(t)}} \bigg).$$ It is noteworthy that the flooding and serial schedules tend to converge to local minima, and they do not perform as well as the schedule we have proposed. [99]{} M. Akçakaya and V. Tarokh, “A Frame Construction and A Universal Distortion Bound for Sparse Representations,” [*IEEE Trans. Sig. Proc.*]{}, vol. 56, pp. 2443-2455, June 2008. M. Akçakaya and V. Tarokh, “On Sparsity, Redundancy and Quality of Frame Representations,” IEEE Int. Symposium on Information Theory (ISIT), Nice, France, June 2007. M. Akçakaya and V. Tarokh, “Shannon theoretic limits on noisy compressive sampling,” arXiv:0711.0366v1 \[cs.IT\], Nov. 2007. D. Andrews and C. Mallows, “Scale mixtures of normal distributions,” [*J. R. Stat. Soc.*]{}, vol. 36, pp. 99-102, 1974. R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde, “Model-based compressive sensing,” arXiv:0808.3572v2, Sept. 2008. R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J.
Strauss, “Combining geometry and combinatorics: A unified approach to sparse signal recovery,” preprint, 2008. R. Berinde, P. Indyk, and M. Ruzić, “Practical near-optimal sparse recovery in the $\ell_1$ norm,” Proc. Allerton Conference on Communication, Control, and Computing, Monticello, IL, September 2008. J. M. Bioucas-Dias, “Bayesian Wavelet-Based Image Deconvolution: A GEM Algorithm Exploiting a Class of Heavy-Tailed Priors,” [*IEEE Trans. Image Proc.*]{}, vol. 15, pp. 937-951, Apr. 2006. C. M. Bishop, [*Pattern Recognition and Machine Learning*]{}, First Edition, Springer, New York, NY, 2006. E. J. Candès, J. Romberg, “Practical signal recovery from random projections,” presented at the [*Wavelet Appl. Signal Image Process. XI, SPIE Conf.*]{}, San Diego, CA, 2005. E. J. Candès, J. Romberg, T. Tao, “Stable signal recovery for incomplete and inaccurate measurements,” [*Commun. Pure Appl. Math.*]{}, vol. 59, pp. 1207-1223, Aug. 2006. E. J. Candès and T. Tao, “The Dantzig selector: statistical estimation when p is much larger than n,” Annals of Statistics, 35, pp. 2313-2351, Dec. 2007. E. J. Candès, T. Tao, “Decoding by Linear Programming,” [*IEEE Trans. Inf. Theory*]{}, vol. 51, pp. 4203-4215, Dec. 2005. E. J. Candès, M. Wakin and S. Boyd, “Enhancing sparsity by reweighted $\ell_1$ minimization,” J. Fourier Anal. Appl., vol. 14, pp. 877-905, 2008. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, [*Introduction to Algorithms*]{}, Second Edition, MIT Press, Cambridge, MA, 2001. D. L. Donoho, “Compressed Sensing,” [*IEEE Trans. Inf. Theory*]{}, vol. 52, pp. 1289-1306, April 2006. W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing: Closing the gap between performance and complexity,” arXiv:0803.0811v2 \[cs.NA\], March 2008. M. A. T. Figueiredo and R. Nowak, “Wavelet-based image estimation: An empirical bayes approach using Jeffreys’ noninformative prior,” IEEE Trans. Image Proc., vol. 10, pp. 1322-1331, Sep. 2001. M. A. T. Figueiredo, R. D. Nowak and S. J.
Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” *IEEE Journal of Selected Topics in Signal Processing*, vol. 1, pp. 586-598, Dec. 2007. A. K. Fletcher, S. Rangan and V. K. Goyal, “Necessary and Sufficient Conditions on Sparsity Pattern Recovery,” arXiv:0804.1839v1 \[cs.IT\], Apr. 2008. R. G. Gallager, [*Low-Density Parity-Check Codes*]{}, MIT Press, Cambridge, MA, 1963. X.-Y. Hu, E. Eleftheriou and D. M. Arnold, “Regular and irregular progressive edge-growth tanner graphs,” [*IEEE Trans. Inf. Theory*]{}, vol. 51, pp. 386-398, Jan. 2005. S. Ji, Y. Xue and L. Carin, “Bayesian compressive sensing,” [*IEEE Trans. on Sig. Proc.*]{}, vol. 56, pp. 2346-2356, June 2008. F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor Graphs and the Sum-Product Algorithm,” [*IEEE Trans. Inf. Theory*]{}, vol. 47, pp. 498-519, Feb. 2001. D. J. C. MacKay, “Good error correcting codes based on very sparse matrices,” [*IEEE Trans. Inf. Theory*]{}, vol. 45, pp. 399-431, Mar. 1999. D. J. C. MacKay, [*Information Theory, Inference, and Learning Algorithms*]{}, First Edition, Cambridge University Press, Cambridge, UK, 2002. Y. Mao and A. H. Banihashemi, “Decoding Low-Density Parity-Check Codes With Probabilistic Scheduling,” [*IEEE Comm. Letters*]{}, vol. 5, pp. 414-416, Oct. 2001. G. J. McLachlan and T. Krishnan, [*The EM Algorithm and Extensions*]{}, First Edition, John Wiley & Sons, New York, NY, 1997. T. K. Moon, “The EM algorithm in signal processing,” [*IEEE Sig. Proc. Mag.*]{}, vol. 13, pp. 47-60, Nov. 1996. D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” arXiv:0803.2392v2 \[math.NA\], Apr. 2008. C. C. Paige and M. A. Saunders, “LSQR: Sparse Linear Equations and Least Squares Problems,” *ACM Transactions on Mathematical Software (TOMS)*, vol. 8, pp.195-209, June 1982. J. Portilla, V. Strela, M. J. Wainwright and E. P. 
Simoncelli, “Image Denoising Using Scale Mixtures of Gaussians in the Wavelet Domain,” [*IEEE Trans. Image Proc.*]{}, vol. 12, pp. 1338-1351, Nov. 2003. T. J. Richardson and R.L. Urbanke, “The capacity of low-density parity-check codes under message passing decoding,” [*IEEE Trans. Inf. Theory*]{}, vol. 47, no. 2, pp. 599-618, Feb. 2001. C. Robert, [*The Bayesian Choice: A Decision Theoretic Motivation*]{}, First Edition, New York, NY, Springer-Verlag, 1994. S. Sarvotham, D. Baron, and R. Baraniuk, “Sudocodes - Fast Measurement and Reconstruction of Sparse Signals,” [*Proc. IEEE Int. Symp. on Inf. Theory (ISIT)*]{}, Seattle, WA, July 2006. S. Sarvotham, D. Baron, and R. Baraniuk, “Compressed Sensing Reconstruction via Belief Propagation,” preprint, 2006. A. Shokrollahi, “Raptor codes,” [*IEEE Trans. Inf. Theory*]{}, vol. 52, pp. 2551-2567, June 2006. M. Sipser and D. A. Spielman,“Expander codes,” [*IEEE Trans. Inf. Theory*]{}, vol. 42, pp. 1710-1722, Nov. 1996. R. M. Tanner, “ A Recursive Approach to Low Complexity Codes,” [*IEEE Trans. Inf. Theory*]{}, vol. 27, pp. 533-547, Sept. 1981. M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001. M. E. Tipping, “Bayesian inference: An introduction to principles and practice in machine learning,” in O. Bousquet, U. von Luxburg, and G. Rätsch (Eds.), Advanced Lectures on Machine Learning, pp. 41-62, Springer, 2004. J. A. Tropp, “Topics in Sparse Approximation”, Ph.D. dissertation, Computational and Applied Mathematics, UT-Austin, August 2004. J. A. Tropp, “Just relax: Convex programming methods for identifying sparse signals”, [*IEEE Trans. Inf. Theory*]{}, vol. 51, no. 3, pp. 1030-1051, Mar. 2006. J. A. Tropp, A. C. Gilbert, “Signal recovery from partial information via Orthogonal Matching Pursuit”, [*IEEE Trans. Inf. Theory*]{}, vol. 53, pp.4655-4666, Dec. 2007. M. J. 
Wainwright, “Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting,” Technical Report, UC Berkeley, Department of Statistics, Jan. 2007. M. J. Wainwright, “Sharp thresholds for noisy and high-dimensional recovery of sparsity using ${\ell}_1$-constrained quadratic programming,” Technical report, UC Berkeley, Department of Statistics, May 2006. Y. Weiss and W. T. Freeman, “Correctness of belief propagation in Gaussian graphical models of arbitrary topology,” [*Proc. Adv. Neural Inform. Processing Syst.*]{}, vol. 12, Dec. 1999. N. Wiberg, “Codes and decoding on general graphs,” Ph.D. dissertation, Linköping University, Sweden, 1996. H. Xiao and A. H. Banihashemi, “Graph-Based Message-Passing Schedules for Decoding LDPC Codes,” [*IEEE Trans. on Comm.*]{}, vol. 52, pp. 2098-2105, Dec. 2004. W. Xu and B. Hassibi, “Efficient compressive sensing with deterministic guarantees using expander graphs,” [*Proc. IEEE Inf. Theory Workshop*]{}, Lake Tahoe, CA, Sept. 2007. [^1]: M. Akçakaya, J. Park and V. Tarokh are with the School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, 02138. (e-mails: {akcakaya, vahid}@seas.harvard.edu, park10@fas.harvard.edu) [^2]: We use the terms “frame” and “measurement matrix” interchangeably throughout the rest of the paper. [^3]: We also tested LDFs with 4 cycles and this does not seem to have an adverse effect on the average distortion in the presence of noise. [^4]: With this value of $\sigma$, SuPrEM I also provides a similar performance. However since the output in this case is very similar to that of SuPrEM II, we do not include it in the figure.
--- abstract: 'This paper outlines the steps taken toward pre-processing the 55,134 images of the MORPH-II non-commercial dataset. Following the introduction, section two begins with an overview of each step in the pre-processing pipeline. Section three expands upon each stage of the process, detailing the calculations made and the OpenCV functionality paired with each step. The last portion of this paper discusses potential improvements to this pre-processing pipeline that became apparent in retrospect.' address: 'Department of Mathematics and Statistics, The University of North Carolina Wilmington, Wilmington, NC 28403, USA' author: - 'B. Yip, R. Towner, T. Kling, C. Chen, and Y. Wang' bibliography: - 'references.bib' title: 'Image Pre-processing Using OpenCV Library on MORPH-II Face Database' --- Introduction ============ The MORPH database is one of the largest publicly available longitudinal face databases [@ricanek2006morph]. Since its first release in 2006, it has been cited by over 500 publications. Multiple versions of MORPH have been released, but for our face image analysis study, we use the 2008 MORPH-II non-commercial release. The MORPH-II dataset includes 55,134 mugshots with longitudinal spans taken between 2003 and late 2007. For each image, the following metadata is included: subject ID number, picture number, date of birth, date of arrest, race, gender, age, time since last arrest, and image filename. Because of its size, longitudinal span, and inclusion of relevant metadata, the MORPH-II dataset is widely utilized in the field of computer vision and pattern recognition, including a variety of race, gender, and age face imaging tasks.
[0.35]{} ![Different types of variation present in the MORPH-II dataset.[]{data-label="variety"}](figures/variety_headtilt "fig:"){width="\textwidth"} \[variety\_headtilt\] [0.35]{} ![Different types of variation present in the MORPH-II dataset.[]{data-label="variety"}](figures/variety_distance "fig:"){width="\textwidth"} \[variety\_distance\] [0.35]{} ![Different types of variation present in the MORPH-II dataset.[]{data-label="variety"}](figures/variety_illumination "fig:"){width="\textwidth"} \[variety\_illumination\] [0.35]{} ![Different types of variation present in the MORPH-II dataset.[]{data-label="variety"}](figures/variety_female "fig:"){width="\textwidth"} \[variety\_female\] However, despite the fairly standard format of police photography, many of the images vary greatly in terms of head-tilt, camera distance, and illumination. A great number of the images also contain large, empty backgrounds or excess occlusion that add a corresponding amount of noise to the data. This longitudinal dataset had an average of approximately four images per subject, and some appearances varied greatly from one image to the next. Preliminary results showed that women's images had greater overall variation due to changes in makeup and hairstyle. Figure \[variety\] showcases some of the variety found during initial examination of the image dataset. Consequently, the pre-processing step is crucial for image analysis on the MORPH-II dataset. For our purposes, we utilized the Open Source Computer Vision 2 (OpenCV) library in Python to extract the face from each mugshot using the image vectors [@opencv_library]. The stages of this process are outlined in section 2. Procedural Overview =================== This section provides a global description of the six stages of our pre-processing algorithm. The premise is to minimize image noise by using bounding boxes around the necessary region of interest (ROI).
Both Figures \[pipeline1\] and \[pipeline2\] are visual representations of each step and what is accomplished, from different perspectives. ![Stages of the pre-processing pipeline with successful face and eye detection[]{data-label="pipeline1"}](a "fig:"){width=".6\textwidth"}\ **Initial face and eyes detection.** ![Stages of the pre-processing pipeline with successful face and eye detection[]{data-label="pipeline1"}](b "fig:"){width=".6\textwidth"}\ **Rotation.** ![Stages of the pre-processing pipeline with successful face and eye detection[]{data-label="pipeline1"}](c "fig:"){width=".6\textwidth"}\ **Face and eye re-detection.** ![Stages of the pre-processing pipeline with successful face and eye detection[]{data-label="pipeline1"}](d "fig:"){width=".6\textwidth"}\ **Cropping and scaling.** ![Stages of the pre-processing pipeline with successful face and eye detection[]{data-label="pipeline1"}](e "fig:"){width=".6\textwidth"}\ **Pre-processed image.** **Note:** While the intermediate stages of the process in Figure \[pipeline1\] are shown with color images, all computer vision tasks were done only on the grayscale versions of each image (converted with OpenCV).\ \[ImageProcessPipeline\] [0.25]{} ![Face preprocessing pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic1 "fig:"){width="\textwidth"} \[\] [0.25]{} ![Face preprocessing pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic2 "fig:"){width="\textwidth"} \[\] [0.25]{} ![Face preprocessing pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic3 "fig:"){width="\textwidth"} \[\] [0.25]{} ![Face preprocessing pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic4 "fig:"){width="\textwidth"} \[\] [0.25]{} ![Face preprocessing pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic5 "fig:"){width="\textwidth"} \[\] [0.25]{} ![Face preprocessing
pipeline with successful face and eye detection.[]{data-label="pipeline2"}](figures/pic7 "fig:"){width="\textwidth"} \[\] Grayscale --------- Research in computer vision has shown that converting images to grayscale increases the accuracy of locating the necessary facial features, as it reduces the effect of illumination variance on the images. This conversion was also our first use of the OpenCV library. For each image, we utilized the OpenCV function $cv2.cvtColor(src, channel)$, where $src$ is the input image and $channel$ represents the color channel for the output image. In our case we used $cv2.COLOR\_BGR2GRAY$. This results in the new image pixel value, $Y$, where $Y = 0.299 \cdot R + 0.587 \cdot G + 0.114 \cdot B$, and $R$, $G$, $B$ represent Red, Green and Blue respectively. The values of $R$, $G$, $B$ are taken from the original image pixel. Effects are shown in Figure \[pipeline2\](B). Face Detection -------------- The initial face detection step located and marked the position of a face within an image, which can be seen in Figure \[pipeline2\](C). This eliminates backgrounds and hairstyles, which are image properties that are not useful for computer vision tasks. This procedure increased the accuracy of future detection steps. If a face was not successfully detected, the image was stored in a face not found (fnf) folder for manual detection later on. Both face and eye detection steps were accomplished using Haar-feature based cascade classifiers from OpenCV. The function used was $cv2.CascadeClassifier.detectMultiScale(src, sf, mn)$, where $src$ is the input image, $sf$ is the scale factor at each image scale, and $mn$ is the minimum number of neighbors each candidate face rectangle must retain. The pre-trained cascade .xml files can be obtained from the OpenCV GitHub repository.
For our purposes, we only had to adjust the parameters of the OpenCV detection function to locate the face in each image. Eye detection, however, required additional steps. Eye Detection ------------- We implemented a very similar algorithm for locating the eyes, illustrated in Figure \[pipeline2\](D). After the face is detected, the eyes are located within the region of interest (ROI) determined by the face (i.e. within the bounding box for the face), and the eye centers are computed as the centers of the bounding boxes for each eye. The new domain and range of the image matrix come from the bounding box around a face that was found successfully. We then marked bounding boxes around each eye with the same Haar cascade function from OpenCV in section 2.2, with different parameter values to account for the smaller scope. In many cases, wrinkles, shadows, and other facial blemishes were detected as eyes. To account for this, we implemented two conditions to eliminate as many of the incorrect eye detections as possible: if 1) the angle between eye centers was greater than fifteen degrees, or 2) the interocular distance (the number of pixels between eye centers) was less than one fifth of the image width, the located features were discarded. Following the test of these conditions, a while loop conditioned on successful eye detection was used to refine the parameters of the detection function when eyes were not found. If this too proved unsuccessful, the image was stored for manual detection. When both eyes were successfully found (Figure \[pipeline2\](D)), we captured the coordinate locations of the right $(x_r,y_r)$ and left $(x_l, y_l)$ eye centers by calculating the center location of the new bounding boxes. These eye centers were crucial for future steps. Rotation -------- Given successful eye-detection, the image is rotated based on the angle between the eye centers, as illustrated in Figure \[pipeline2\](E). Rotating the image began with the eye centers $(x_r,y_r)$ and $(x_l, y_l)$.
We added a conditional to ensure the left eye stored in $(x_l, y_l)$ was actually the left eye of the subject. The displacement between the eyes, $(x_r-x_l, y_r-y_l)$, was obtained by subtracting the left eye center from the right. This displacement was treated as a point in the complex plane so that the numpy angle function could return the rotation angle $\theta$. $\theta$ was then a parameter used in the getRotationMatrix2D OpenCV function to create the necessary transformation matrix, M: cv2.getRotationMatrix2D(center, theta, scale), where [*center*]{} is the center of rotation in the source image and [*scale*]{} is the scale factor. $$\begin{bmatrix} \alpha & \beta & (1-\alpha)\cdot center.x - \beta \cdot center.y \\ -\beta & \alpha & \beta \cdot center.x + (1-\alpha) \cdot center.y \\ \end{bmatrix},$$ where $\alpha = scale \cdot \cos\theta$ and $\beta = scale \cdot \sin\theta$. After the transformation matrix is calculated, it is applied to each pixel in the source image to produce the rotated image. The OpenCV function warpAffine reconstructs the source image by producing the desired rotated image, [*dst*]{}: cv2.warpAffine(src, M, (columns, rows)) $$dst(x,y)=src(M_{11}x+M_{12}y+M_{13},\; M_{21}x+M_{22}y+M_{23}).$$ Face and Eye Re-detection ------------------------- Following rotation, the face and eyes are re-detected as above, as illustrated in Figures \[pipeline1\] and \[pipeline2\](E). Images with unsuccessfully detected faces were stored in a “fnf\_r” (face not found, rotated image) folder for manual detection later on. Images with undetected eyes were stored similarly, in a “enf\_r” (eyes not found, rotated image) folder. Cropping and Scaling -------------------- After the eye centers were successfully re-located in the rotated image, a new bounding box for the face was determined based on the interocular distance. This step ultimately found the white bounding box in Figure \[pipeline2\](F) for the cropped image based on the interocular distance.
Eye centers were relocated in the rotated image using the Haar cascade from section 2.3. The new eye center coordinates gave the interocular distance used for defining the height and width of the new bounding box. Finding the upper left corner of the box began with subtracting one interocular distance from the x-value midpoint of the eyes. The y-coordinate was calculated by adding four fifths of the interocular distance to the current eye height $(x=eyemidpoint-interoc, y=eyeheight+0.8\cdot interoc)$. \[Note: 0.8 was chosen as the best option for capturing an appropriate region of the face\]. The final bounding box was a slice of the rotated image with proportions 2:2.35 times the interocular distance. The image was then cropped according to this frame and scaled down to 70 pixels tall by 60 pixels wide, using the following OpenCV function: cv2.resize(cropped_img, (60, 70)) If an image could not be reduced to this size (i.e., an earlier step had failed), it was stored in another folder for manual detection. Manual Pre-processing --------------------- The images in which the face or eyes were not successfully detected were handled by manually clicking the eye centers of each subject. These new eye centers were then used for rotating the image, and the rest of the pipeline followed suit. The images below in Figure \[errant detection\] are examples of problem images that included errant detection. ![Examples of problem images and errant detection[]{data-label="errant detection"}](bad1){width=".5\textwidth"} ![Examples of problem images and errant detection[]{data-label="errant detection"}](bad2){width=".5\textwidth"} ![Examples of problem images and errant detection[]{data-label="errant detection"}](bad3){width=".5\textwidth"} In Retrospect ------------- When performing the manual eye-detection, the images were rotated based on the eye centers.
However, we avoided re-clicking the eyes in the rotated image by simply applying the rotation matrix to the coordinates of the eye centers in the unrotated image. Had this been recognized in the original code, the face and eye re-detection step could have been skipped entirely (effectively merging stages **2.3** and **2.4**). This would likely have drastically reduced run time and decreased the number of images necessitating manual eye detection. Conclusion ========== We have described a six-stage pipeline for pre-processing the 55,134 mugshots of the MORPH-II non-commercial dataset: grayscale conversion, face detection, eye detection, rotation, face and eye re-detection, and cropping and scaling to a uniform 70-by-60-pixel face image, with manual eye-clicking serving as a fallback whenever automatic detection failed. Acknowledgments =============== This material is based in part upon work supported by the National Science Foundation under Grant Number DMS-1659288. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
--- abstract: 'Symbiotic stars are long-period interacting binary systems in which an evolved red giant star transfers material to its much hotter compact companion. Such a composition places them among the most variable stars. In addition to periodic variations due to the binary motion, they often show irregular changes due to nova-like eruptions of the hot component. In some systems the cool giant is a pulsating Mira-type star, usually surrounded by a variable dust shell. Here, I present results of optical and IR monitoring of symbiotic systems as well as future prospects for such studies.' author: - 'Joanna Miko[ł]{}ajewska' title: 'Optical and Near-IR Monitoring of Symbiotic Binary Systems' --- Introduction ============ Most stars in the Universe are binaries. Among them, symbiotic stars are interacting binaries in which an evolved giant transfers material to a hot and compact companion. In a typical configuration, a symbiotic binary contains an M III giant and a white dwarf accreting material lost in the cool giant wind. The wind is ionised by the hotter of the binary components, giving rise to the symbiotic nebula (cf. Miko[ł]{}ajewska 1997). Based on the near-IR colours, two distinct classes of symbiotic stars were defined (Allen 1983): the S-type (stellar) with normal red giants, and the D-type (dusty) with Mira primaries surrounded by a warm dust shell. The distinction between S and D types seems to be one of orbital separation: the binary must have enough space for the red giant, and yet allow it to transfer sufficient mass to its companion. In fact, all symbiotic systems with known orbital periods – of the order of a few years – belong to the S-type, while the orbital periods of D-type systems are generally not known, probably because they are longer than the periods covered by existing observations (cf. Belczy[ń]{}ski et al. 2000).
Symbiotic stars are thus interacting binaries with the longest orbital periods and the largest component separations, and their study is essential to understanding the evolution and interactions of detached and semi-detached binary stars. They are also among the intrinsically brightest stars, which makes them excellent observational targets both in our Galaxy and in nearby galaxies, even for relatively small telescopes. In the following, I will present results of optical and infrared monitoring of symbiotic stars as well as future prospects for such studies. Variable phenomena in symbiotic stars ===================================== The composition of a typical symbiotic binary, specifically the presence of an evolved giant and its accreting companion, places symbiotic stars among the most variable stars. They can fluctuate in several different ways, which can be revealed and studied by patient monitoring of their light curves and radial velocity changes. Namely, binary motion can be manifested by eclipses of the hot component by the giant, modulation of the giant’s light due to the reflection effect (with the orbital period) and due to tidal distortion (with $P_{\rm orb}/2$), as well as radial velocity changes. The cool giant can also show intrinsic variability, in particular radial pulsations (all D-type and some S-type systems) and semi-regular variations (S-type) with timescales of the order of months and years, as well as long-term light variations due to variable obscuration by circumstellar dust (most D-type systems), solar-type cycles, spots, etc. The effects of mass accretion onto the hot component also involve different variable phenomena. The hot component in the vast majority of symbiotic systems seems in fact to be a luminous ($\sim 1000\, \rm L_{\sun}$) and hot ($\sim 10^5\, \rm K$) white dwarf powered by thermonuclear burning of the material accreted from its companion’s wind.
Depending on the accretion rate, these systems can either be in a steady burning configuration or undergo hydrogen shell flashes. In many cases such flashes can last for decades due to the low mass of the white dwarf (Miko[ł]{}ajewska 1997). In addition, the hot components in many systems show activity with timescales of a few years which cannot be simply accounted for by the thermonuclear models. A possible and promising explanation involves fluctuations in mass transfer and/or accretion instabilities. Below, I present examples of light curves for well-studied, though not yet completely understood, symbiotic binaries: RX Pup, CI Cyg and CH Cyg, which are representative of the wealth of variable phenomena observed in these systems. RX Puppis: a possible recurrent nova with a symbiotic Mira companion -------------------------------------------------------------------- RX Pup is a symbiotic binary composed of a long-period Mira variable pulsating with $P \approx 578$ days, surrounded by a thick dust shell, and a hot $\sim 0.8\, \rm M_{\odot}$ white dwarf companion. The binary separation could be as large as $a \geq 50$ a.u. (corresponding to $P_{\rm orb} \geq 200$ yr), as suggested by the permanent presence of a dust shell around the Mira component (Miko[ł]{}ajewska et al. 1999). In particular, the Mira is never stripped of its dust envelope, and even during relatively unobscured phases the star resembles the high-mass-loss galactic Miras with thick dust shells. In general, the binary component separations in D-type systems must be larger than the dust formation radius. Assuming a typical dust formation radius of $\ga 5 \times R_{\rm Mira}$, and a Mira radius $R_{\rm Mira} \sim 1 \div 3\, {\rm au}$ (e.g. Haniff, Scholz & Tuthill 1995), the minimum binary separation is $a \ga 20\, {\rm au}$, and the corresponding binary period is $P_{\rm orb} \ga 50$ yr, for [*any*]{} D-type system.
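These separation–period estimates follow from Kepler's third law in solar units, $P_{\rm orb}^2 = (a/{\rm au})^3 / (M_{\rm tot}/{\rm M_{\odot}})$; assuming an illustrative total binary mass of $M_{\rm tot} \simeq 3\, \rm M_{\odot}$ (a value not given in the text), the quoted separations reproduce the quoted periods:

$$P_{\rm orb}(a = 20\,{\rm au}) \simeq \sqrt{\frac{20^3}{3}}\ {\rm yr} \simeq 52\ {\rm yr},
\qquad
P_{\rm orb}(a = 50\,{\rm au}) \simeq \sqrt{\frac{50^3}{3}}\ {\rm yr} \simeq 204\ {\rm yr}.$$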
Recent analysis of multifrequency observations shows that most, if not all, photometric and spectroscopic activity of RX Pup in the UV, optical and radio ranges is due to activity of the hot component, while the Mira variable and its circumstellar environment are responsible for practically all changes in the infrared range (Miko[ł]{}ajewska et al. 1999, and Fig. 1). In particular, RX Pup underwent a nova-like eruption during the last three decades. The hot component contracted in radius at nearly constant luminosity from 1975 to 1986, and was the source of a strong stellar wind, which prevented it from accreting material lost in the Mira wind. Around 1988/9, the hot component turned over in the HR diagram, and by 1991 its luminosity had faded by a factor of $\sim 30$ with respect to the maximum plateau value (see the very deep minimum in the visual light curve in Fig. 1) and the hot wind had practically ceased. By 1995 the nova remnant had started to accrete material from the wind, as indicated by a general increase of the optical flux. The earliest observational records from the 1890s suggest that another nova-like eruption of RX Pup occurred around 1894. The near-IR light curves show significant long-term variations in addition to the Mira pulsation (Fig. 1). The long-term changes are best visible in the $\langle J \rangle$ light curve after removal of the Mira pulsation (middle panel in Fig. 1). Miko[ł]{}ajewska et al. (1999) found large changes in the reddening towards the Mira accompanied by fading of the near-IR flux. However, the reddening towards the hot component and emission line regions remained practically constant and was generally less than that towards the Mira. These changes do not seem related to the orbital configuration nor to the hot component activity. Similar dust obscuration events seem to occur in many well covered symbiotic Miras (e.g. Whitelock 1998), as well as in single Miras (e.g.
Mattei 1997, Whitelock 1998), and they are best explained as intrinsic changes in the circumstellar environment of the Mira variable, possibly due to intensive and variable mass loss. The last increase in extinction towards the Mira in RX Pup has been accompanied by large changes in the degree of polarization in the optical and red spectral ranges. This confirms that these long-term variations are driven by changes in the properties of the dust grains, such as a variable quantity of dust and a variable particle size distribution, due to dust grain formation and growth (Miko[ł]{}ajewska 2001). CI Cygni: a tidally distorted giant with a disc-accreting secondary ------------------------------------------------------------------- Although most symbiotic binaries seem to interact by wind-driven mass loss, a few of them may contain a Roche-lobe filling giant. They also show activity on time scales of the order of years that can be related to the presence of accretion discs. Among them, CI Cyg is one of the best studied. Kenyon et al. (1991) demonstrated that it consists of an M5 II asymptotic branch giant, $M_{\rm g} \sim 1.5\, \rm M_{\sun}$, and a $\sim 0.5\, \rm M_{\sun}$ hot companion separated by 2.2 au. They also argued that the hot companion is a disc-accreting main sequence star. However, quiescent IUE data from the early 1990s can also be accounted for by a hot and luminous stellar source powered by thermonuclear burning, which makes the case for CI Cyg as an accreting MS star less clear. The outburst light curve of CI Cyg (Fig. 2), in addition to deep eclipses of the hot component by the red giant, shows $0.5-1.0$ mag oscillations with a period $\sim 0.9\, P_{\rm orb}$. Some other S-type systems show similar secondary periodicities, best visible in their outburst light curves, in all cases $10-15\,\%$ shorter than the orbital periods. The nature of this secondary periodicity is unknown. Recently, Galis et al.
(1999) suggested that in the case of AG Dra, it is due to radial pulsations of the giant, and the outbursts are driven by resonances between the pulsations and binary motion. On the other hand, the secondary periodicities are best visible in the optical light where the contribution from the giant, especially during the outburst, is very low or negligible. There is also a striking similarity between these variations and the superhumps of the SU UMa class of CVs, which may indicate that they are rather related to the presence of accretion discs (Miko[ł]{}ajewska & Kenyon 1992). During the outburst and its decline, eclipses in the $UBV$ continuum and optical H[ ii]{}, He[i]{} and He[ii]{} emission lines were narrow with well-defined eclipse contacts, whereas at quiescence very broad minima and continuous, nearly sinusoidal changes are observed. In addition, the quiescent $VRI$ light curves show a modulation with $P_{\rm orb}/2$, as expected for an ellipsoidal light curve. The amplitude of this modulation, $\Delta I \sim 0.15$, is consistent with the system inclination, $i \sim 73^{\circ}$, and the mass ratio, $M_{\rm g}/M_{\rm h} \sim 3$, derived by Kenyon et al. (1991). The transition from narrow eclipses to sinusoidal variations was accompanied by large spectral changes and the appearance of radio emission with a spectral distribution that cannot be simply accounted for by any of the popular models for symbiotic stars (Miko[ł]{}ajewska & Ivison 2001). CH Cygni: triple or binary system with a magnetic white dwarf ------------------------------------------------------------- The record for the complexity of variable phenomena found in a single symbiotic object may be held by CH Cyg, the symbiotic system with the longest ($P_{\rm orb} \sim 15.5$ yr) measured orbital period (Miko[ł]{}ajewski, Tomov, & Miko[ł]{}ajewska 1987; Hinkle et al. 1993), whose light curves are presented in Fig. 3.
Both the light curves and the radial velocity curves show multiple periodicities: a $\sim 100^{\rm d}$ photometric period, best visible in the $VRI$ light curves, has been attributed to radial pulsation of the giant (Miko[ł]{}ajewski, Miko[ł]{}ajewska, & Khudyakova 1992), while the nature of the secondary period of $\sim 756^{\rm d}$, also present in the radial velocity curve, is not clear (Hinkle et al. 1993; Munari et al. 1996). There is a controversy about whether the system is triple or binary, and whether the symbiotic pair is the inner binary or the white dwarf is on the longer orbit. The near-IR light curves also show long-term variations similar to the dust obscuration phenomena found in symbiotic Miras (cf. Munari et al. 1996). The hot component also shows very spectacular activity. In particular, we deal with irregular outbursts accompanied by fast, massive outflows and jets, rapid brightness variations on a time scale of the order of minutes, and other peculiarities which cannot be explained in the frame of the classical models proposed for symbiotic stars. Miko[ł]{}ajewski et al. (1990) proposed that this peculiar activity is powered by unstable accretion onto a magnetic white dwarf secondary. Present state-of-the-art and future prospects ============================================= A recently published catalogue of symbiotic stars includes 188 symbiotic stars as well as 30 objects suspected of being symbiotic (Belczy[ń]{}ski et al. 2000). Among them, 173 are in our Galaxy, 14 in the Magellanic Clouds and 1 in Draco-1. They are excellent targets for small telescopes, especially for long-term monitoring of their complex photometric and spectroscopic variability. Although we have $\sim 120$ S-type symbiotic systems with $V \la 15^{\rm m}$, photometric orbital periods have been measured for only 30 objects (18 of them are eclipsing). 21 systems also have known spectroscopic orbits, and mass ratios have been estimated for 8 of them.
The ellipsoidal light variations, characteristic of tidally distorted stars, have rarely been observed. Thus far, only four systems, T CrB, CI Cyg, BD$-21^{\circ}3873$, and possibly EG And, seem to show such changes. The general absence of tidally distorted giants in symbiotic binaries may, however, be due to the lack of systematic searches for ellipsoidal variations in the red and near-IR ranges, where the cool giants dominate the continuum light. On the other hand, tidal interactions are certainly important in symbiotic systems, as suggested by the practically circular orbits of most ($\sim 80$ %) systems with known orbital solutions, and specifically of all those showing the multiple CI Cyg-type outburst activity. We do not know the orbital period for any of the extragalactic symbiotic stars, although 8 of them belong to the S-type and, with $V\sim 15-17$ mag, are bright enough for optical monitoring even with a relatively small telescope. Similarly, among 33 galactic D-type systems ($K \la 8^{\rm m}$), pulsation periods have been observed – and thus the Mira presence confirmed – for only 12 systems. Pulsation periods are also unknown for the few extragalactic D-type systems ($K \sim 10 - 13^{\rm m}$). Optical and near-IR monitoring of symbiotic stars is essential not only to understand variable phenomena in symbiotic stars and, more generally, long-period interacting binaries, but also to study such phenomena in several other astrophysical environments (giant stars, planetary nebulae, novae, supernovae, supersoft X-ray sources, hot stars and even AGNs). Studies of the symbiotic Miras are important for understanding the evolution and interaction of detached low-mass binaries. For example, there is ample observational evidence for systematic differences between the symbiotic Miras and average single galactic Miras.
In particular, their average pulsation periods are longer, their colours redder and their mass-loss rates higher than the typical periods, colours, and mass-loss rates of single Miras (cf. Miko[ł]{}ajewska 1999). It is an interesting question which of these differences are related to the binary nature of symbiotic Miras, and how. Symbiotic Miras are often associated with extended radio and/or optically resolved nebulae. These nebulae usually have a very complex structure, often with bipolar lobes and jet-like features. There are also many important questions posed by the active S-type systems. What powers the multiple outburst activity in CI Cyg and other similar systems? How many of these contain tidally distorted giants? What is the nature of the secondary periodicity, $\la 0.8-0.9\,P_{\rm orb}$, visible at outburst in some of them? Can the secondary periodicities be considered as evidence for the presence of an accretion disc? Such periodicities have not been found in any symbiotic nova during either the optical maximum or the constant luminosity phase (the plateau portion of the white dwarf cooling tracks), including the best studied case – AG Peg (Kenyon et al. 1993), and their presence indicates that the outbursts in CI Cyg and other similar systems are not powered by thermonuclear reactions. The timescales and relative amplitudes of these eruptions are very similar to the timescales and amplitudes of the hot component luminosity variations (high and low states) in symbiotic recurrent novae (e.g. T CrB, RS Oph, RX Pup) between their nova eruptions (Anupama & Miko[ł]{}ajewska 1999; Miko[ł]{}ajewska et al. 1999) and in other accretion-powered systems (CH Cyg, Mira A+B).
It is possible that the main difference between CI Cyg, Z And, AX Per, and other related symbiotic systems with multiple eruption activity and the accretion-powered systems (symbiotic recurrent novae and CH Cyg) is that the hot component in the former burns the accreted hydrogen more or less stably, whereas in the latter it does not. Systematic optical and near-infrared monitoring with small telescopes can provide an answer to these and many other questions. I gratefully acknowledge Maciej Miko[ł]{}ajewski and Toma Tomov for providing Figure 3. I would also like to thank the LOC for their support. This research was partly funded by KBN Research Grant No. 5P03D01920. Allen, D.A., 1983, MNRAS, 204, 113 Anupama, G.C., Miko[ł]{}ajewska, J., 1999, A&A, 344, 177 Belczy[ń]{}ski, K., Miko[ł]{}ajewska, J., Munari, U., Ivison, R.J., Friedjung, M., 2000, A&AS, 146, 407 Belyakina, T.S., 1979, Izv. KAO, 59, 133 Belyakina, T.S., 1984, Izv. KAO, 68, 108 Belyakina, T.S., 1991, Izv. KAO, 83, 118 Belyakina, T.S., 1992, Izv. KAO, 83, 118 Haniff, C.A., Scholz, M., Tuthill, P.G., 1995, MNRAS, 276, 640 Hinkle, K.H., Fekel, F.C., Johnson, D.S., Scharlach, W.W.G., 1993, AJ, 105, 1074 Galis, R., Hric, L., Friedjung, M., Petrik, K., 1999, A&A, 348, 533 Kenyon, S.J., Oliversen, N.A., Miko[ł]{}ajewska, J., Miko[ł]{}ajewski, M., Stencel, R.E., Garcia, M.R., Anderson, C.M., 1991, AJ, 101, 637 Kenyon, S.J., Miko[ł]{}ajewska, J., Miko[ł]{}ajewski, M., Polidan, R.S., Slovak, M.H., 1993, AJ, 106, 1573 Khudyakova, T.N., 1989, PhD Thesis, Leningrad University Mattei, J.A., 1997, JAAVSO, 25, 57 Meinunger, L., 1981, MVS, 9, 67 Miko[ł]{}ajewska, J. (ed.), 1997, Physical Processes in Symbiotic Binaries and Related Systems, Copernicus Foundation for Polish Astronomy, Warsaw Miko[ł]{}ajewska, J., 1999, in Stecklum, B., Guenther, E., Klose, S., eds, Optical and Infrared Spectroscopy of Circumstellar Matter, ASP Conf. Ser., vol.
188, 291 Miko[ł]{}ajewska, J., Ivison, R.J., 2001, MNRAS, in press/astro-ph/0101526 Miko[ł]{}ajewska, J., Kenyon, S.J., 1992, AJ, 103, 579 Miko[ł]{}ajewska, J., Brandi, E., Hack, W., Whitelock, P.A., Barba, R., Garcia, L., Marang, F., 1999, MNRAS, 305, 190 Miko[ł]{}ajewska, J., Brandi, Garcia, L., Ferrer, O., W., Whitelock, P.A., Marang, F., 2001, in Szczerba R. et al., eds, Post-AGB Objects as a Phase of Stellar Evolution, Kluwer, in press/astro-ph/0103495 Miko[ł]{}ajewski, M., Miko[ł]{}ajewska, J., Khudyakova, T.N., 1992, A&A, 254, 127 Miko[ł]{}ajewski, M., Tomov, T., Miko[ł]{}ajewska, J., 1987, Ap&SS, 131, 733 Miko[ł]{}ajewski, M., Miko[ł]{}ajewska, J., Tomov, T., Kulesza, B., Szczerba, R., Wikierski, B., 1990, Acta Astr., 40, 129 Munari, U., Yudin, B.F., Kholotilov, E., Tomov, T., 1996, A&A, 311, 484 Whitelock, P.A., 1998, in Takeuri, M., Sasselov, D., eds, Pulsating Stars – Recent Developments in Theory and Observation, Universal Academy Press, Tokyo, 31
--- abstract: 'It has been known for some time that the exchange-correlation potential in time-dependent density functional theory is an intrinsically nonlocal functional of the density as soon as one goes beyond the adiabatic approximation.  In this paper we show that a much more severe nonlocality problem, with a completely different physical origin, plagues the exchange-correlation potentials in time-dependent [*spin*]{}-density functional theory.  We show how the use of the [*spin current density*]{} as a basic variable solves this problem, and we provide an explicit local expression for the exchange-correlation fields as functionals of the spin currents.' author: - 'Z. Qian, A. Constantinescu, and G. Vignale' title: | Failure of the local density approximation in time-dependent\ spin density functional theory --- For many years the local density approximation (LDA) has provided the much needed handle on the difficult problem of approximating the density dependence of the exchange-correlation (xc) potential – the single particle potential that incorporates the many-body effects in the Kohn-Sham equation for the ground state density [@DreizlerGross]. In LDA, the xc potential $V_{xc}(\vec r)$  is simply a function of the local density $n(\vec r)$.  This approximation is not unreasonable as long as the functional derivative of $V_{xc}(\vec r)$ with respect to $n(\vec r')$ – the so called [*exchange-correlation kernel*]{} $f_{xc}(\vec r, \vec r') \equiv \frac {\delta V_{xc}(\vec r)}{\delta n(\vec r')}$ – is  a sufficiently short-ranged function of the distance $|\vec r - \vec r'|$ [@footnote1]. However, much recent work [@Vignale1; @Gonze1; @Martin; @Vanderbilt; @Tokatly] has demonstrated that the  requirement of short-rangedness is not always fulfilled in physical systems, and when this happens the local density approximation is flawed.  
This does not mean that a local description of  exchange and correlation is absolutely impossible, only that such a description cannot be achieved in terms of the particle density. For example, in the density-functional theory of crystalline insulators, it has been found [@Gonze1; @Martin; @Vanderbilt] that the xc potential has an “ultranonlocal" dependence on the density, due to the fact that the Fourier transform of the xc kernel $f_{xc}(\vec k, \vec k)$ diverges as $1/k^2$ for $k \to 0$ in these systems. But, the ultranonlocality disappears if one reformulates the theory in terms of the electric polarization $\vec P(\vec r)$ and the exchange-correlation electric field $\vec E_{xc} (\vec r )$ associated with it. A similar phenomenon was discovered in the time-dependent density functional theory (TDDFT)[@Grossdobsonpetersilka] following the realization that the frequency-dependent LDA[@GK] fails to satisfy Kohn’s theorem[@Dobson; @Kohnstheorem].  The pathology was traced to a singularity of the form $\frac{\vec k \cdot \vec k'}{k^2}$ in the xc kernel $f_{xc}(\vec k, \vec k',\omega)$ for $k \to 0$ at  finite $\vec k'$ and $\omega$.  The  ensuing nonlocality problem was solved by upgrading  to time-dependent [*current*]{}-density functional theory (TDCDFT),  where the basic variable is the current density, and its conjugate field is a vector potential[@Vignale1].  TDCDFT has since been applied to the calculation of the optical spectra of solids [@deBoeij] and the polarizability of long polymer chains [@Faassen] with considerable success. In this Letter we show that the nonlocality problem occurs in an [*aggravated form*]{} in the time-dependent spin density functional theory or, more generally, in the time-dependent DFT of any multi-component system.   The novel features of  the spin-dependent problem stem from the fact that the xc kernel presents a divergence even in the homogeneous electron liquid.  
More precisely, it can be shown that the Fourier transform of the spin-dependent exchange-correlation kernel $f_{xc,\sigma \sigma'}(r-r',t-t') \equiv \frac {\delta V_{xc,\sigma}(\vec r,t)}{\delta n_{\sigma'}(\vec r',t')}$ in a homogeneous electron liquid has the long-wavelength expansion $$\label{fxcexpansion}  f_{xc,\sigma \sigma'}(k,\omega) \stackrel{k \to 0} {\to} \frac{A(\omega) }{ k^2} \frac{ \sigma \sigma'  n^2}{ 4n_\sigma n_{\sigma'}} +B_{\sigma \sigma'}(\omega) + O(k^2)~,$$ where $A(\omega)$ and $B_{\sigma \sigma'}(\omega)$ are complex functions of frequency, $n_\sigma$ is the density of $\sigma$-spin electrons ($\sigma = +1$ for $\uparrow$-spin  and $\sigma = -1$ for $\downarrow$-spin), and $n=n_\uparrow+n_\downarrow$ is the total density. Since the xc potential  created by a small density variation $\delta n_\sigma(\vec k, \omega)$ is given by the formula $$\label{Vxc} V_{xc,\sigma}(\vec k, \omega) = \sum_{\sigma '}f_{xc,\sigma \sigma'}(k,\omega) \delta n_{\sigma '}(\vec k, \omega)~,$$ we see that Eq. (\[fxcexpansion\]) rules out the possibility of a local connection between $V_{xc,\sigma}({\vec r}, t)$ and $\delta n_{\sigma '}({\vec r}', t')$. The existence of the long-wavelength singularity in $f_{xc, \sigma \sigma'} (k , \omega)$ has been known for some time.  It was first pointed out by Goodman and Sjölander [@Goodman] that the third moment sum rule for the spin-density response function implies such a singularity. Approximate formulae for $f_{xc,-} (k, \omega) = f_{xc, \uparrow \uparrow} (k , \omega) - f_{xc, \uparrow \downarrow} (k , \omega)$ exhibiting the singularity were proposed in [@Liu] and  [@Richardson].  More recently, D’Amico and Vignale ([@DAmico]) have shown that, at low frequency and finite temperature, the singularity is related to the friction that arises between up- and down-spin currents when they have different average velocities (the so-called spin-drag effect). By contrast, the implications of Eq. 
(\[fxcexpansion\]) for spin density functional theory have not been explored so far. This is understandable. The singularity (\[fxcexpansion\]) arises only at finite frequency ($A(0)=0$) and therefore does not affect the [*static*]{} spin DFT. Furthermore, the singularity does not show up as long as one is interested only in the density response of spin-compensated systems, since, in that case, the relevant combination of xc kernels is $\sum_{\sigma \sigma'}n_\sigma n_{\sigma '} f_{xc,\sigma \sigma'}$, which is non-singular. It is only in the time-dependent spin DFT [@Vosko] that the issue of the singularity becomes really critical, not only to the calculation of the spin response, but even to the calculation of just the density response[@footnote2]. In this Letter we propose a resolution of the nonlocality problem based on the use of the spin components of the current density $\vec j_{\uparrow}(\vec r,\omega)$ and $\vec j_{\downarrow}(\vec r,\omega)$ as basic variables. We provide an explicit expression for the spin-dependent exchange-correlation field $\vec E_{xc,\sigma}(\vec r,\omega)$ as a local linear functional of the currents $\vec j_\sigma$. The general method for upgrading from the density to the current-density formulation is described in detail in Ref. [@ullrichlong], so we mention only the essential steps here. We introduce a spin-dependent xc vector potential $\vec A_{xc,\sigma}(\vec k, \omega)$ (whose time derivative, $i \omega \vec A_{xc,\sigma}(\vec k, \omega)=\vec E_{xc,\sigma}(\vec k,\omega)$, is the xc electric field), and notice that it is linearly related to the currents in the following manner: $$\label{defaxc} A_{xc,\sigma}^{\alpha}(\vec k, \omega) ~=~  \frac{k^2 }{ \omega^2} \sum_{\sigma '}f_{xc,\sigma \sigma '}^{\alpha} (\vec k, \omega)j_{\sigma '}^{\alpha}(\vec k, \omega)~,$$ where the superscript $\alpha$ denotes the longitudinal ($\alpha = L$) or transverse ($\alpha = T$) component of a vector relative to the direction of $\vec k$.
It is not difficult to see that the [*longitudinal*]{} xc kernel defined in this manner coincides with the xc kernel introduced in Eq. (\[fxcexpansion\]). The extra factor $\frac{k^2 }{ \omega^2}$ in Eq. (\[defaxc\]) exactly cancels the small-$k$ singularity of $f_{xc}$, and leads to a theory that admits a local approximation. The imaginary part of the current xc kernel $f_{xc,\sigma \sigma '}^{\alpha}(k,\omega)$ is expressed in terms of a causal response function as follows: $$\begin{aligned} \label{Imfxc} && \Im m  f_{xc,\sigma \sigma '}^{\alpha}(k,\omega) = \frac{1 }{ V   n_\sigma n_{\sigma '} k^2} \Im m \langle \langle  \hat F_\sigma^\alpha(\vec k); \hat F_{\sigma'}^\alpha(-\vec k) \rangle\rangle_\omega ~, \nonumber \\\end{aligned}$$ where $\langle \langle \hat A;\hat B\rangle \rangle_\omega \equiv - \frac{ i}{ \hbar} \int_0^{\infty}\langle [\hat A(t),\hat B]  \rangle e^{i \omega t}dt$ is the linear response function associated with the operators $\hat A$ and $\hat B$; $\hat F_\sigma^\alpha(\vec k) = -\frac{i m}{\hbar}[\hat H, {\hat j}_{\sigma }^{\alpha} (\vec k)]$ is the time derivative of the Fourier transform of the current-density operator $\hat {\vec j}_\sigma (\vec k)$, $\hat H$ is the Hamiltonian, and $V$ is the volume. Once the imaginary part of $f_{xc,\sigma \sigma '}^{\alpha}(k,\omega)$ is known, its real part is determined by the Kramers-Kronig dispersion relation $$\begin{aligned} \label{refxc} \Re e  f_{xc,\sigma \sigma '}^{\alpha}(k,\omega)&=& f_{xc,\sigma \sigma '}^{\alpha}(k,\infty)\nonumber  \\ &-& \frac{2 }{ \pi}{\cal P} \int_0^\infty d \omega ' \frac{\omega ' \Im m f_{xc,\sigma \sigma '}^{\alpha}(k,\omega ') } { \omega^2 - {\omega '}^2}~,\nonumber \\\end{aligned}$$ where ${\cal P}$ denotes the principal part of the integral, and the infinite-frequency limit of $f_{xc,\sigma \sigma '}$ is determined by the [*third moment sum rule*]{}.
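Although not part of the original Letter, the dispersion relation above lends itself to a compact numerical illustration. The sketch below is a toy implementation under stated assumptions: a uniform frequency grid, a truncated upper integration limit, and a crude principal-value prescription that simply drops the singular grid point (the function name and tolerances are ours, not the paper's).

```python
import numpy as np

def kramers_kronig_real(omega, im_f, f_inf=0.0):
    """Reconstruct Re f(w) on a uniform grid from tabulated Im f(w') via
    Re f(w) = f(inf) - (2/pi) P int_0^inf dw' w' Im f(w') / (w^2 - w'^2).
    The principal value is approximated by dropping the singular grid point,
    which works because the uniform grid is symmetric about it."""
    dw = omega[1] - omega[0]
    re_f = np.empty_like(omega)
    for i, w in enumerate(omega):
        with np.errstate(divide="ignore", invalid="ignore"):
            integrand = omega * im_f / (w**2 - omega**2)
        integrand[i] = 0.0  # drop the singular point (crude P.V. prescription)
        re_f[i] = f_inf - (2.0 / np.pi) * np.sum(integrand) * dw
    return re_f
```

As a consistency check, feeding in the imaginary part of a damped-oscillator response $1/(\omega_0^2-\omega^2-i\gamma\omega)$, whose real part is known in closed form and which vanishes at infinite frequency, recovers that real part to within a few percent on a sufficiently fine grid.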
In a three-dimensional electron liquid, this sum rule gives $$\label{thirdmoment} f_{xc,\sigma \sigma '}^{\alpha}(k,\infty) \stackrel{k \to 0}{\to} -\frac{4 \pi e^2 }{ 3 k^2}\left [g_{\uparrow \downarrow}(0)-1 \right]\sigma \sigma ' + \frac{1}{n_{\sigma}} \left [ a^\alpha  t_{c \sigma} \delta_{\sigma \sigma'} + b^\alpha \epsilon_{pot, \sigma \sigma'} \right ]$$ where $a^L=2$, $a^T=2/3$,  $b^L=4/15$, and $b^T=-2/15$. Here $g_{\uparrow \downarrow}(0)$ is the pair correlation function for antiparallel spin electrons at zero separation,  $t_{c \sigma}$ is the average correlation kinetic energy of the $\sigma$-spin component, and $ \epsilon_{pot, \sigma \sigma'} \equiv \frac{n_\sigma}{2} \int d \vec r \frac {e^2}{r} [g_{\sigma \sigma'}(r)-1]$ is the potential energy associated with the interaction between $\sigma$- and $\sigma'$-spin electrons. Note that the result for the longitudinal case was first obtained in Ref.[@Goodman]. It is evident from the above equations that both  the longitudinal and the transverse kernels exhibit  $\frac{1 }{ k^2}$ singularities, which are “cured" by the $\frac{k^2 }{ \omega^2}$ factor of Eq. (\[defaxc\]). In particular, substituting  the  small-$k$ expansion $\hat F_{\sigma}^\alpha (\vec k) = \hat F_\sigma^\alpha(0)+ O(\vec k)$ in Eq. 
(\[Imfxc\]),  where $\hat F_\sigma^\alpha(0)$ is the operator of the total force acting on $\sigma$-spin electrons, and noting that terms of first order in $\vec k$ vanish by inversion symmetry, we see that the xc-kernels have the small-$k$ expansion $$\label{fxccurrentexpansion}  f_{xc,\sigma \sigma'}^{\alpha}(k,\omega) \stackrel{k \to 0}{\to} \frac {A(\omega) }{ k^2}\frac{ \sigma \sigma'  n^2}{ 4n_\sigma n_{\sigma'}} +B_{\sigma \sigma'}^\alpha(\omega) + O(k^2)~,$$ where $$\label{imaxc} \Im m A(\omega) = - \frac{4}{V n^2} \Im m \langle \langle \hat F^\alpha_\uparrow;\hat F^\alpha_\downarrow\rangle \rangle_\omega~,$$ and $$\begin{aligned} \label{reaxc} \Re e  A(\omega) &=& -\frac{16 \pi e^2 }{ 3}\left [g_{\uparrow \downarrow}(0)-1 \right] \nonumber \\ &-& \frac {2 }{ \pi}{\cal P} \int_0^\infty d \omega ' \frac {\omega ' \Im m A(\omega ') }{ \omega^2 - {\omega '}^2}~.\end{aligned}$$ The factor $\sigma \sigma'$ in Eq. (\[fxccurrentexpansion\]) arises from the fact that the total force $\hat F_\uparrow + \hat F_\downarrow$ vanishes by translational invariance, so that $\langle \langle \hat F_\sigma;\hat F_{\sigma'}\rangle \rangle_\omega = - \sigma \sigma ' \langle \langle \hat F_\uparrow; \hat F_{\downarrow}\rangle \rangle_\omega$. Notice also that $A(\omega)$ is independent of the direction $\alpha$ - longitudinal or transverse. The microscopic expression for $B^\alpha_{\sigma \sigma'}$ is more complicated: a simple approximation for this quantity will be presented below. Substituting the expansion (\[fxccurrentexpansion\]) in Eq. 
(\[defaxc\]), calculations similar to those described in  [@ullrichlong] lead us to the following [*local*]{} approximation for the xc field in terms of the spin currents $$\begin{aligned} \label{localexc} -e \vec E_{xc,\sigma}(\omega) &=& -\vec \nabla V_{xc,\sigma}^{LDA} +\frac{1}{n_{\sigma} }\vec \nabla \cdot {\bf \stackrel{\leftrightarrow}{\sigma}}_{xc,\sigma}(\omega) \nonumber \\ &+& \frac{i n^2 A (\omega)}{4 \omega} \sum_{\sigma'} \frac{\sigma \sigma'}{ n_\sigma n_{\sigma'}} \vec j_{\sigma'}~.\end{aligned}$$ Here the $\vec r$ dependence has been left implicit, and the xc stress tensor ${\bf \stackrel{\leftrightarrow}{\sigma}}_{xc}(\omega)$, as well as $A(\omega)$, is a function of the local spin densities, as discussed below. Eq. (\[localexc\]) is the central result of this paper.  The first two terms on the right are well known: they are, respectively,  the adiabatic LDA contribution and the visco-elastic force term, where the  stress tensor $\sigma_{xc,\sigma}(\omega)$ is related to $B_{xc,\sigma \sigma'}$ by obvious extensions of the formulae reported in  [@ullrichlong]. 
The expression for the xc stress tensor is $$\begin{aligned} \sigma_{xc, \sigma, ij} &=& \sum_{\sigma'} \biggl [ \eta_{xc, \sigma \sigma'} \biggl ( \frac{\partial u_{\sigma', i}}{\partial r_j} + \frac{\partial u_{\sigma', j}}{\partial r_i} - \frac{2}{3} \vec \nabla \cdot {\vec u}_{\sigma'} \delta_{ij} \biggr ) \nonumber \\ &+& \zeta_{xc, \sigma \sigma'} \vec \nabla \cdot {\vec u}_{\sigma '} \delta_{ij} \biggr ]\end{aligned}$$ where $\vec u_{\sigma} = \vec j_{\sigma} / n_\sigma $, and $$\begin{aligned} \eta_{xc, \sigma \sigma '} = - \frac{n_\sigma n_{\sigma '}}{i \omega} B_{\sigma \sigma '}^T (\omega )~,\end{aligned}$$ $$\begin{aligned} \zeta_{xc, \sigma \sigma '} = - \frac{ n_\sigma n_{\sigma '}} {i \omega } [ B_{\sigma \sigma '}^{L} (\omega) - \frac{4}{3} B_{ \sigma \sigma '}^{T} (\omega) - \epsilon_{xc, \sigma \sigma '}'' ]~,\end{aligned}$$ where $\epsilon_{xc, \sigma \sigma '}'' = \frac{\partial^2 \epsilon_{xc}}{\partial n_\sigma \partial n_{\sigma'}}$. The last term in Eq. (\[localexc\]) is new, and comes directly from the $\frac{1}{k^2}$ singularity of Eq. (\[fxccurrentexpansion\]). The essential feature of the new term is that it produces damping of the spin current proportional to the relative velocity between up- and down-spin electrons.  This makes it readily distinguishable from the usual viscous friction contained in the second term,  which is proportional to the [*derivatives*]{} of the velocity field. \[fig1\] ![Imaginary part of $A(\omega)$ evaluated from Eq. (\[ImA\]) with the correction factor given in Eq. (\[factor\]).  The values of $a(r_s, 0)$ are 1.92, 3.36, and 7.49 at $r_s=1,2$, and $4$ respectively.](imag.ps "fig:"){width="7cm"} \[fig2\] ![Real part of $A(\omega)$ calculated from Eq. 
(\[reaxc\]).](real.ps "fig:"){width="7cm"} The physical reason for the difference is that, whenever up and down spin currents travel with different average velocities, they exert [*friction*]{} on each other:  the “spin drag coefficient" is $\gamma (\omega ) = \frac{i n^3 A(\omega )}{4 \omega m n_\uparrow n_\downarrow}$. Of course, like all the quantities considered here, $\gamma(\omega)$ is complex and frequency-dependent, and, in the limit of zero frequency, its real part can be shown to be related to the spin diffusion constant $D_s$ by the Einstein relation $D_s = \frac {n}{m \chi_s \gamma(0)}$, where $\chi_s$ is the static, macroscopic spin susceptibility. Unfortunately, an exact calculation of $A(\omega)$ from the microscopic expressions (\[imaxc\]) and (\[reaxc\]) is beyond the reach of present-day many-body techniques. However, we can obtain a rather good approximation with the help of the following exact results:  (i) For $\omega \to 0$, $\Im m A(\omega) \propto \omega^3$ and $\Re e A(\omega) \propto \omega^2$; (ii) For large $\omega$, $\Im m A(\omega) \to -\frac{16 \pi e^2}{3} \frac {n_\uparrow n_\downarrow}{n^2} \frac{\alpha r_s}{\sqrt{\bar \omega}} \frac{1}{(1+\zeta)^{1/3}}$ and $\Re e A(\omega) \to -\frac{16 \pi e^2}{3} \frac {n_\uparrow n_\downarrow}{n^2} [g_{\uparrow \downarrow}(0)-1]$. Here $\bar \omega = \frac{\omega}{2 E_{F \uparrow}}$, where $ E_{F \uparrow}$ is the Fermi energy for majority spin electrons and $\zeta = \frac{n_{\uparrow}-n_{\downarrow}}{n}$ measures the degree of spin polarization, and $\alpha=(4/9 \pi)^{1/3}$ [@footnote3]. Note that $g_{\uparrow \downarrow}(0)$ is accurately known from the work of Gori-Giorgi and Perdew [@Perdew]. The high and low frequency limits  of $\Re e A(\omega)$ are both obtained from the third moment sum rule.  
In particular, the vanishing of $\Re e A(0)$ follows from the fact that $\frac{2}{\pi}\int_0^\infty \frac {\Im m A(\omega')}{\omega'}\,d\omega'$ is equal to (minus) the first moment of the current-current response function, which, by gauge invariance and the continuity equation, coincides with the third moment of the density-density response function, i.e., $-A(\infty)$. The $\omega^3$ behavior of $\Im m A(\omega)$ at low frequency is easily obtained from the approximate zero-temperature formula [@DAmico] $$\begin{aligned} \label{ImA} \Im mA(\omega)  \simeq  - \frac {4}{3 n^2 V}\sum_{\vec q}v_{\vec q}^2 q^2 \int_0^\omega \frac{d \omega'}{\pi} \left [\Im m \chi_{\uparrow \uparrow} (q,\omega - \omega')\Im m \chi_{\downarrow \downarrow}(q,\omega') - \Im m \chi_{\uparrow \downarrow}(q,\omega - \omega') \Im m \chi_{\downarrow \uparrow}(q,\omega') \right ]~,\end{aligned}$$ which is exact in the limits of high density and high frequency. Here $v_{\vec q}=\frac{4 \pi e^2}{q^2}$, and $\chi_{\sigma \sigma'}(q,\omega)$ are the spin density response functions of the homogeneous liquid. We have evaluated $\chi_{\sigma \sigma'}$ in the generalized random phase approximation $$\chi^{-1}_{\sigma \sigma'}(q,\omega)=\chi^{-1}_{0\sigma}(q,\omega) \delta_{\sigma \sigma'}-v_{\vec q}[1-G_{\sigma \sigma'}(q)]~,$$ where $\chi_{0\sigma}(q,\omega)$ is the Lindhard function and $G_{\sigma \sigma'}(q)$ are local field corrections [@Iwamoto]. At typical metallic densities we multiply $\Im m A(\omega)$ by an empirical factor $$\label{factor} g(\omega)=\frac{1 + \sqrt{\bar \omega}}{a(r_s, \zeta) + \sqrt{\bar \omega}}~,$$ designed to satisfy the sum rule $\Re e A(0) = 0$ without altering the high-frequency behavior. Notice that $a(r_s, \zeta) \to 1 $ for $r_s \to 0 $. The results evaluated with this procedure are shown in Figs. 1 and 2. Finally, we briefly remark on the calculation of the regular part $B^\alpha_{\sigma \sigma'}(\omega)$ of $f^{\alpha}_{xc,\sigma \sigma'}$.
A simple approximation strategy is as follows. We rewrite the $2 \times 2$ matrix $f^{\alpha}_{xc,\sigma \sigma'}$ in the basis of the vectors $|r\rangle \propto  (n_\uparrow, n_\downarrow)$ and $|s \rangle \propto  (-n_\downarrow, n_\uparrow)$. It is immediately seen that only the matrix element $f^{\alpha}_{xc,rr}$ is finite in the limit $k \to 0$, while all the others are singular. This suggests that we approximate $f^{\alpha}_{xc,rr} \sim B^\alpha_{rr}$ and completely ignore the contribution from $B^\alpha_{\sigma \sigma'}(\omega)$ in all other matrix elements, which are dominated by the singular contribution of $A(\omega)$. A little thought shows that this approximation is equivalent to setting $$ B^\alpha_{\sigma \sigma'}(\omega) \simeq  \frac{n_\sigma n_{\sigma'}}{n^2} B^\alpha (\omega)~,$$ where the scalar function $B^\alpha (\omega)$ can be extracted from a calculation of the xc kernels in the density channel. Such a calculation has been carried out in [@QV] (for the paramagnetic state), and the connection between $B^\alpha (\omega)$ and the $f_{xc}^\alpha (\omega)$ of that paper is $B^\alpha(\omega) = \frac{f_{xc}^\alpha (\omega)} {\left ( 1 - 2 \frac{n_\uparrow n_\downarrow}{n^2} \right )^2}$. This completes the construction of the input necessary for the evaluation of Eq. (\[localexc\]). It is hoped that our expression for the spin-current dependent xc field (\[localexc\]) will open the way to novel applications of CDFT to the calculation of spin excitations in spin-polarized systems. This work was supported by NSF grant No. DMR-0074959. We acknowledge useful discussions with Carsten Ullrich.

R. M. Dreizler and E. K. U. Gross, [*Density Functional Theory*]{} (Springer-Verlag, Berlin, 1990).
More precisely, the Fourier transform of the xc kernel $f_{xc}(\vec k, \vec k')$ must have a finite limit for $\vec k$ and/or $\vec k'$ tending to zero.
G. Vignale and W. Kohn, Phys. Rev. Lett. [**77**]{}, 2037 (1996); G. Vignale, C. A.
Ullrich, and S. Conti, Phys. Rev. Lett. [**79**]{}, 4878 (1997).
X. Gonze, Ph. Ghosez, and R. W. Godby, Phys. Rev. Lett. [**74**]{}, 4035 (1995); [**78**]{}, 294 (1997).
R. M. Martin and G. Ortiz, Phys. Rev. B [**56**]{}, 1124 (1997); G. Ortiz, I. Souza, and R. M. Martin, Phys. Rev. Lett. [**80**]{}, 353 (1998).
D. Vanderbilt, Phys. Rev. Lett. [**79**]{}, 3966 (1997).
I. V. Tokatly and O. Pankratov, Phys. Rev. Lett. [**86**]{}, 2078 (2001).
E. K. U. Gross, J. F. Dobson, and M. Petersilka, in [*Density Functional Theory II*]{}, ed. R. F. Nalewajski, Vol. 181 of Topics in Current Chemistry (Springer, Berlin, 1996), p. 81.
E. K. U. Gross and W. Kohn, Phys. Rev. Lett. [**55**]{}, 2850 (1985); [**57**]{}, 923(E) (1986).
J. F. Dobson, Phys. Rev. Lett. [**73**]{}, 2244 (1994).
W. Kohn, Phys. Rev. [**123**]{}, 1242 (1961).
P. L. de Boeij, F. Kootstra, J. A. Berger, R. van Leeuwen, and J. G. Snijders, J. Chem. Phys. [**115**]{}, 1995 (2001).
M. van Faassen, P. L. de Boeij, R. van Leeuwen, J. A. Berger, and J. G. Snijders, Phys. Rev. Lett. [**88**]{}, 186401 (2002).
B. Goodman and A. Sjölander, Phys. Rev. B [**8**]{}, 200 (1973).
K. L. Liu, Can. J. Phys. [**69**]{}, 573 (1991).
C. F. Richardson and N. W. Ashcroft, Phys. Rev. B [**50**]{}, 8170 (1994).
I. D’Amico and G. Vignale, Phys. Rev. B [**62**]{}, 4853 (2000).
K. L. Liu and S. H. Vosko, Can. J. Phys. [**67**]{}, 1015 (1989).
Remarkably, the singularity does not affect the [*transverse*]{} spin response discussed in Z. Qian and G. Vignale, Phys. Rev. Lett. [**88**]{}, 056404 (2002).
C. A. Ullrich and G. Vignale, Phys. Rev. B [**65**]{}, 245102 (2002).
The large $\omega$ limit of $\Im m A (\omega)$ at $\zeta =0$ was also obtained by Liu [@Liu]. However, his assumption that $\Im m A (\omega) \propto \omega$ for $\omega \to 0$ is incorrect.
P. Gori-Giorgi and J. P. Perdew, Phys. Rev. B [**64**]{}, 155102 (2001).
N. Iwamoto and D. Pines, Phys. Rev. B [**29**]{}, 3924 (1984).
Z. Qian and G. Vignale, Phys. Rev. B [**65**]{}, 235121 (2002).
--- abstract: 'Most state-of-the-art action localization systems process each action proposal individually, without explicitly exploiting their relations during learning. However, the relations between proposals actually play an important role in action localization, since a meaningful action always consists of multiple proposals in a video. In this paper, we propose to exploit the proposal-proposal relations using Graph Convolutional Networks (GCNs). First, we construct an action proposal graph, where each proposal is represented as a node and the relation between two proposals as an edge. Here, we use two types of relations, one for capturing the context information for each proposal and the other for characterizing the correlations between distinct actions. Then we apply the GCNs over the graph to model the relations among different proposals and learn powerful representations for action classification and localization. Experimental results show that our approach significantly outperforms the state-of-the-art on THUMOS14 (49.1% versus 42.8%). Moreover, augmentation experiments on ActivityNet also verify the efficacy of modeling action proposal relationships. Codes are available at <https://github.com/Alvin-Zeng/PGCN>.' author: - | Runhao Zeng$^{1}$[^1]    Wenbing Huang$^{2,5*}$    Mingkui Tan$^{1,4}$[^2]    Yu Rong$^{2}$\ Peilin Zhao$^{2}$    Junzhou Huang$^{2}$     Chuang Gan$^{3}$\ $^{1}$School of Software Engineering, South China University of Technology, China\ $^{2}$Tencent AI Lab    $^{3}$MIT-IBM Watson AI Lab    $^{4}$Peng Cheng Laboratory, Shenzhen\ $^{5}$Department of Computer Science and Technology, Tsinghua University, State Key Lab. of Intelligent\ Technology and Systems, Tsinghua National Lab.
for Information Science and Technology (TNList)\ [{runhaozeng.cs, ganchuang1990}@gmail.com, hwenbing@126.com, mingkuitan@scut.edu.cn]{} title: Graph Convolutional Networks for Temporal Action Localization --- Introduction ============ Understanding human actions in videos has become a prominent research topic in computer vision, owing to its various applications in security surveillance, human behavior analysis and many other areas [@duan2018weakly; @simonyan2014two; @tran2015learning; @fan2018end; @gan2018geometry; @gan2015devnet; @gan2016recognizing; @gan2016you; @wang2016temporal]. Despite the fruitful progress in this vein, there are still some challenging tasks demanding further exploration; *temporal action localization* is one such example. To deal with real videos that are untrimmed and usually contain irrelevant background activities, temporal action localization requires the machine to not only classify the actions of interest but also localize the start and end time of every action instance. Consider the sports video illustrated in Figure \[Fig:simple\]: the detector should find the frames where the action event is happening and identify the category of the event. ![Schematic depiction of our approach. We apply graph convolutional networks to model the proposal-proposal interactions and boost the temporal action localization performance.[]{data-label="Fig:simple"}](Fig1.pdf){width="\linewidth"} Temporal action localization has attracted increasing attention in the last several years [@chao2018rethinking; @gao2017cascaded; @lin2017single; @shou2017cdc; @shou2016temporal]. Inspired by the success of object detection, most current action detection methods resort to the two-stage pipeline: they first generate a set of 1D temporal proposals and then perform classification and temporal boundary regression on each proposal individually.
However, processing each proposal separately in the prediction stage will inevitably neglect the semantic relations between proposals. We contend that exploiting the proposal-proposal relations in the video domain provides more cues to facilitate the recognition of each proposal instance. To illustrate this, we revisit the example in Figure \[Fig:simple\], where we have generated four proposals. On the one hand, the proposals $\Mat{p}_1$, $\Mat{p}_2$ and $\Mat{p}_3$ overlapping with each other describe different parts of the same action instance (i.e., the start period, main body and end period). Conventional methods perform prediction on $\Mat{p}_1$ by using its feature alone, which we think is insufficient to deliver complete knowledge for the detection. If we additionally take the features of $\Mat{p}_2$ and $\Mat{p}_3$ into account, we will obtain more contextual information around $\Mat{p}_1$, which is advantageous especially for the temporal boundary regression of $\Mat{p}_1$. On the other hand, $\Mat{p}_4$ describes the background (i.e., the sport field), and its content is also helpful in identifying the action label of $\Mat{p}_1$, since what is happening on the sport field is likely to be a sport action (“discus throwing") rather than one that happens elsewhere (“kissing"). In other words, the classification of $\Mat{p}_1$ can be partly guided by the content of $\Mat{p}_4$ even though they are temporally disjoint. To model the proposal-proposal interactions, one may employ the self-attention mechanism [@vaswani2017attention], as has been done previously in language translation [@vaswani2017attention] and object detection [@hu2018relation], to capture the pair-wise similarity between proposals. A self-attention module can affect an individual proposal by aggregating information from all other proposals with the automatically learned aggregation weights.
However, this method is computationally expensive, as querying all proposal pairs has a complexity quadratic in the proposal number (note that each video could contain thousands of proposals). On the contrary, Graph Convolutional Networks (GCNs), which generalize convolutions from grid-like data (e.g., images) to non-grid structures (e.g., social networks), have received increasing interest in the machine learning domain [@kipf2017semi; @yan2018spatial]. GCNs can affect each node by aggregating information from the adjacent nodes, and thus are very suitable for leveraging the relations between proposals. More importantly, unlike the self-attention strategy, applying GCNs enables us to aggregate information from only the local neighbourhoods for each proposal, and thus can help decrease the computational complexity remarkably. In this paper, we regard the proposals as nodes of a specific graph and take advantage of GCNs for modeling the proposal relations. Motivated by the discussions above, we construct the graph by investigating two kinds of edges between proposals, including the *contextual edges* to incorporate the contextual information for each proposal instance (e.g., detecting $\Mat{p}_1$ by accessing $\Mat{p}_2$ and $\Mat{p}_3$ in Figure \[Fig:simple\]) and the *surrounding edges* to query knowledge from nearby but distinct proposals (e.g., querying $\Mat{p}_4$ for $\Mat{p}_1$ in Figure \[Fig:simple\]). We then perform graph convolutions on the constructed graph. Although the information is aggregated from local neighbors in each layer, message passing between distant nodes is still possible if the depth of GCNs increases. Besides, we employ two different GCNs to perform classification and regression separately, which is demonstrated to be effective by our experiments. Moreover, to avoid the overwhelming computation cost, we further devise a sampling strategy to train the GCNs efficiently while still preserving the desired detection performance.
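To make the two edge types concrete, here is a minimal sketch of the graph construction. This is not the authors' code: the helper names `temporal_iou` and `build_proposal_graph`, and the thresholds `theta_ctx` and `theta_sur`, are illustrative assumptions rather than the paper's actual criteria or values.

```python
import numpy as np

def temporal_iou(a, b):
    """Temporal IoU between two 1-D segments a = (start, end), b = (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def build_proposal_graph(segments, theta_ctx=0.2, theta_sur=0.8):
    """Adjacency matrix with two edge types: contextual edges between
    overlapping proposals (tIoU > theta_ctx) and surrounding edges
    between disjoint proposals whose centers are close relative to
    their joint temporal extent. Thresholds here are illustrative."""
    n = len(segments)
    A = np.eye(n)  # self-loops
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            iou = temporal_iou(segments[i], segments[j])
            if iou > theta_ctx:
                A[i, j] = 1.0  # contextual edge
            elif iou == 0.0:
                ci = 0.5 * (segments[i][0] + segments[i][1])
                cj = 0.5 * (segments[j][0] + segments[j][1])
                extent = max(segments[i][1], segments[j][1]) - min(segments[i][0], segments[j][0])
                if extent > 0 and abs(ci - cj) / extent < theta_sur:
                    A[i, j] = 1.0  # surrounding edge
    return A
```

In this toy version, overlapping proposals (like $\Mat{p}_1$ and $\Mat{p}_2$ above) receive contextual edges, while a disjoint but nearby background proposal (like $\Mat{p}_4$) is linked by a surrounding edge and a far-away one is not.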
We evaluate our proposed method on two popular benchmarks for temporal action detection, i.e., THUMOS14 [@jiang2014thumos] and ActivityNet v1.3 [@caba2015activitynet]. To sum up, our contributions are as follows: - To the best of our knowledge, we are the first to exploit the proposal-proposal relations for temporal action localization in videos. - To model the interactions between proposals, we construct a graph of proposals by establishing the edges based on our observations and then apply GCNs to do message aggregation among proposals. - We have verified the effectiveness of our proposed method on two benchmarks. On THUMOS14 especially, our method obtains an mAP of $49.1\%$ when $tIoU=0.5$, which significantly outperforms the state-of-the-art, $42.8\%$ by [@chao2018rethinking]. Augmentation experiments on ActivityNet also verify the efficacy of modeling action proposal relationships. ![image](Fig2.pdf){width="\linewidth"} Related work {#Sec:related} ============ **Temporal action localization.** Recently, great progress has been achieved in deep learning [@carreira2017quo; @deng2018visual; @guo2019auto; @zhuang2018discrimination], which facilitates the development of temporal action localization. Approaches to this task can be grouped into three categories: (1) methods performing frame- or segment-level classification, where smoothing and merging steps are required to obtain the temporal boundaries [@shou2017cdc; @montes2016temporal; @zeng2019breaking]; (2) approaches employing a two-stage framework involving proposal generation, classification and boundary refinement [@shou2016temporal; @xu2017r; @zhao2017temporal]; (3) methods developing end-to-end architectures integrating the proposal generation and classification [@yeung2016end; @buch2017end; @lin2017single]. Our work is built upon the second category, where the action proposals are first generated and then used to perform classification and boundary regression.
Following this paradigm, Shou et al. [@shou2016temporal] propose to generate proposals from sliding windows and classify them. Xu et al. [@xu2017r] exploit the 3D ConvNet and propose a framework inspired by Faster R-CNN [@ren2015faster]. The above methods neglect the context information of proposals, and hence several attempts have been made to incorporate the context to enhance the proposal feature [@dai2017temporal; @gao2017turn; @gao2017cascaded; @zhao2017temporal; @chao2018rethinking]. They show encouraging improvements by extracting features on the extended receptive field (e.g., the boundary) of the proposal. Despite their success, they all process each proposal individually. In contrast, our method considers the proposal-proposal interactions and leverages the relations between proposals. **Graph Convolutional Networks.** Kipf et al. [@kipf2017semi] propose Graph Convolutional Networks (GCNs) to define convolutions on non-grid structures [@tan2015learning]. Thanks to their effectiveness, GCNs have been successfully applied to several research areas in computer vision, such as skeleton-based action recognition [@yan2018spatial], person re-identification [@shen2018person], and video classification [@wang2018video]. For real-world applications, the graph can be large and directly applying GCNs is inefficient. Therefore, several attempts have been made toward efficient training by virtue of sampling strategies, such as the node-wise method SAGE [@hamilton2017inductive], the layer-wise model FastGCN [@chen2018fastgcn] and its layer-dependent variant AS-GCN [@huang2018adaptive]. In this paper, considering its flexibility and ease of implementation, we adopt the SAGE method as the sampling strategy in our framework. Our Approach {#Sec:graph} ============ Notation and Preliminaries -------------------------- We denote an untrimmed video as $\Mat{V}=\{\Mat{I}_t\in\mathbb{R}^{H\times W\times 3}\}_{t=1}^T$, where $\Mat{I}_t$ denotes the frame at the time slot $t$ with height $H$ and width $W$. 
Within each video $\Mat{V}$, let $\Mat{P}=\{\Mat{p}_i\mid\Mat{p}_i=(\Mat{x}_i, (t_{i,s}, t_{i,e}))\}_{i=1}^N$ be the action proposals of interest, with $t_{i,s}$ and $t_{i,e}$ being the start and end time of a proposal, respectively. In addition, given proposal $\Mat{p}_i$, let $\Mat{x}_i\in\mathbb{R}^d$ be the feature vector extracted by a certain feature extractor (e.g., the I3D network [@carreira2017quo]) from frames between $\Mat{I}_{t_{i,s}}$ and $\Mat{I}_{t_{i,e}}$. Let $\mathcal{G}(\mathcal{V}, \mathcal{E})$ be a graph of $N$ nodes with nodes $v_i\in\mathcal{V}$ and edges $e_{ij}=(v_i, v_j)\in\mathcal{E}$. Furthermore, let $\Mat{A}\in\mathbb{R}^{N\times N}$ be the adjacency matrix associated with $\mathcal{G}$. In this paper, we seek to exploit graphs $\mathcal{G}(\mathcal{P},\mathcal{E})$ on action proposals in $\mathcal{P}$ to better model the proposal-proposal interactions in videos. Here, each action proposal is treated as a node and the edges in $\mathcal{E}$ are used to represent the relations between proposals. General Scheme of Our Approach ------------------------------ In this paper, we use a proposal graph $\mathcal{G}(\mathcal{P},\mathcal{E})$ to represent the relations between proposals and then apply a GCN on the graph to exploit the relations and learn powerful representations for proposals. The intuition behind applying a GCN is that when performing graph convolution, each node aggregates information from its neighbourhood. In this way, the feature of each proposal is enhanced by other proposals, which eventually helps boost the detection performance. Without loss of generality, we assume the action proposals have been obtained beforehand by some method (e.g., the TAG method in [@zhao2017temporal]). In this paper, given an input video $\Mat{V}$, we seek to predict the action category $\hat{y}_i$ and temporal position $(\hat{t}_{i,s}, \hat{t}_{i,e})$ for each proposal $\Mat{p}_i$ by exploiting proposal relations. 
Formally, we compute $$\begin{aligned} \label{Eq:gcn-p} \{(\hat{y}_i, (\hat{t}_{i,s}, \hat{t}_{i,e}))\}_{i=1}^{N} = F(\mathrm{GCN}(\{\Mat{x}_i\}_{i=1}^N, \mathcal{G}(\mathcal{P},\mathcal{E}))), \end{aligned}$$ where $F$ denotes any mapping function to be learned. To exploit the GCN for action localization, our paradigm takes both the proposal graph and the proposal features as input and performs graph convolution on the graph to leverage proposal relations. The enhanced proposal features (i.e., the outputs of the GCN) are then used to jointly predict the category label and temporal bounding box. The schematic of our approach is shown in Figure \[Fig:framework\]. For simplicity, we denote our model as **P-GCN** henceforth. In the following sections, we aim to answer two questions: (1) how to construct a graph to represent the relations between proposals; (2) how to use the GCN to learn representations of proposals based on the graph and facilitate the action localization. Proposal Graph Construction {#Sec:construct} --------------------------- For the graph $\mathcal{G}(\mathcal{P},\mathcal{E})$ of each video, the nodes are instantiated as the action proposals, while the edges $\mathcal{E}$ between proposals need to be designed specifically to better model the proposal relations. One way to construct edges is to link all proposals with each other, which, however, incurs overwhelming computation in going through all proposal pairs. It also introduces redundant or noisy information for action localization, as some unrelated proposals should not be connected. In this paper, we devise a smarter approach by exploiting the temporal relevance/distance between proposals instead. Specifically, we introduce two types of edges: the contextual edges and the surrounding edges. **Contextual Edges.** We establish an edge between proposals $\Vec{p}_{i}$ and $\Vec{p}_{j}$ if $r(\Vec{p}_{i}, \Vec{p}_{j})> \theta_{ctx}$, where $\theta_{ctx}$ is a certain threshold. 
Here, $r(\Vec{p}_{i}, \Vec{p}_{j})$ represents the relevance between proposals and is defined by the tIoU metric, *i.e.*, $$\begin{aligned} \label{Eq:relevance} r(\Vec{p}_{i}, \Vec{p}_{j}) = tIoU(\Vec{p}_{i}, \Vec{p}_{j})= \frac{I(\Vec{p}_{i}, \Vec{p}_{j})}{U(\Vec{p}_{i}, \Vec{p}_{j})}, \end{aligned}$$ where $I(\Vec{p}_{i}, \Vec{p}_{j})$ and $U(\Vec{p}_{i}, \Vec{p}_{j})$ compute the temporal intersection and union of the two proposals, respectively. If we focus on the proposal $\Mat{p}_i$, establishing the edges by computing $r(\Vec{p}_{i}, \Vec{p}_{j})>\theta_{ctx}$ selects as its neighbourhood those proposals that have high overlap with it. The non-overlapping portions of these highly-overlapping neighbours are able to provide rich contextual information for $\Mat{p}_i$. As already demonstrated in [@dai2017temporal; @chao2018rethinking], exploring the contextual information is of great help in refining the detection boundary and eventually increasing the detection accuracy. Here, through our contextual edges, all overlapping proposals automatically share the contextual information with each other, and this information is further processed by the graph convolution. **Surrounding Edges.** The contextual edges connect the overlapping proposals that usually correspond to the same action instance. Actually, distinct but nearby actions (including the background items) could also be correlated, and the message passing among them will facilitate the detection of each other. For example in Figure \[Fig:simple\], the background proposal $\Vec{p}_{4}$ will provide guidance on identifying the action class of proposal $\Vec{p}_{1}$ (e.g., more likely to be a sport action). 
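The contextual-edge construction above can be sketched as follows. Here `tiou` implements the relevance $r(\Vec{p}_{i}, \Vec{p}_{j})$ of Eq. ; representing a proposal as a `(start, end)` tuple and the function names are our illustrative assumptions, not the released implementation:

```python
def tiou(p, q):
    """Temporal IoU between proposals p = (start, end) and q = (start, end)."""
    inter = max(0.0, min(p[1], q[1]) - max(p[0], q[0]))
    # For overlapping intervals the union is the enclosing span; for disjoint
    # ones the intersection is 0, so tIoU is 0 regardless of the denominator.
    union = max(p[1], q[1]) - min(p[0], q[0])
    return inter / union if union > 0 else 0.0

def contextual_edges(proposals, theta_ctx=0.7):
    """Link every proposal pair whose tIoU exceeds theta_ctx."""
    edges = []
    n = len(proposals)
    for i in range(n):
        for j in range(i + 1, n):
            if tiou(proposals[i], proposals[j]) > theta_ctx:
                edges.append((i, j))
    return edges
```

For instance, with proposals `(0, 10)`, `(2, 10)` and `(20, 30)` and $\theta_{ctx}=0.7$, only the first pair (tIoU $=0.8$) is connected.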
To capture such correlations, we first use $r(\Vec{p}_{i}, \Vec{p}_{j})=0$ to query the distinct proposals, and then compute the following distance $$\begin{aligned} \label{Eq:distance} d(\Vec{p}_{i}, \Vec{p}_{j})=\frac{|c_{i}-c_{j}|}{U(\Vec{p}_{i}, \Vec{p}_{j})}, \end{aligned}$$ to add edges between nearby proposals if $d(\Vec{p}_{i}, \Vec{p}_{j}) < \theta_{sur}$, where $\theta_{sur}$ is a certain threshold. In Eq. , $c_{i}$ (or $c_{j}$) represents the center coordinate of $\Mat{p}_i$ (or $\Mat{p}_j$). As a complement to the contextual edges, the surrounding edges enable the message to pass across distinct action instances and thereby provide more temporal cues for the detection. Graph Convolution for Action Localization {#Sec:gcn} ----------------------------------------- Given the constructed graph, we apply a GCN to perform action localization. We stack $K$ graph convolution layers in our implementation. Specifically for the $k$-th layer ($1\leq k\leq K$), the graph convolution is implemented by $$\label{Eq:gcn} \Mat{X}^{(k)} = \Mat{A}\Mat{X}^{(k-1)}\Mat{W}^{(k)}.$$ Here, $\Mat{A}$ is the adjacency matrix; $\textbf{W}^{(k)}\in\mathbb{R}^{d_{k-1}\times d_k}$ is the parameter matrix to be learned; $\Mat{X}^{(k)}\in\mathbb{R}^{N \times d_k}$ are the hidden features for all proposals at layer $k$; $\Mat{X}^{(0)}\in\mathbb{R}^{N \times d}$ are the input features. We apply an activation function (e.g., ReLU) after each convolution layer before the features are forwarded to the next layer. In addition, our experiments find it more effective to further concatenate the hidden features with the input features in the last layer, namely, $$\begin{aligned} \label{Eq:layer-wise} \Mat{X}^{(K)} := \Mat{X}^{(K)} \| \Mat{X}^{(0)}, \end{aligned}$$ where $\|$ denotes the concatenation operation. 
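A minimal dense sketch of the $K$-layer propagation rule and the final concatenation above, assuming NumPy arrays, ReLU activations, and weight matrices of shape $(d_{k-1}, d_k)$; the function name and dense matrix representation are our own illustrative choices:

```python
import numpy as np

def pgcn_forward(A, X0, weights):
    """X^(k) = ReLU(A X^(k-1) W^(k)) for k = 1..K, then X^(K) := X^(K) || X^(0).

    A: (N, N) adjacency matrix; X0: (N, d) input proposal features;
    weights: list of K matrices, W^(k) of shape (d_{k-1}, d_k).
    """
    X = X0
    for W in weights:
        # Aggregate over neighbours (A X), transform (W), then apply ReLU.
        X = np.maximum(A @ X @ W, 0.0)
    # Concatenate the input features onto the last layer's output.
    return np.concatenate([X, X0], axis=1)
```

The output dimensionality is $d_K + d$, which is what the prediction heads built on top of the GCN would consume.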
Following the previous work [@zhao2017temporal], we find it beneficial to predict the action label and temporal boundary separately by virtue of two GCNs—one conducted on the original proposal features $\Mat{x}_i$ and the other on the extended proposal features $\Mat{x'}_i$. The first GCN is formulated as $$\label{Eq:1gcn} \{\hat{y}_i\}_{i=1}^{N} = \mathrm{softmax}(FC_1(GCN_1(\{\Mat{x}_i\}_{i=1}^N, \mathcal{G}(\mathcal{P},\mathcal{E})))), \\$$ where we apply a Fully-Connected (FC) layer with soft-max operation on top of $GCN_1$ to predict the action label $\hat{y}_i$. The second GCN can be formulated as $$\begin{aligned} \label{Eq:2gcn_1} \{(\hat{t}_{i,s}, \hat{t}_{i,e})\}_{i=1}^{N} = FC_2(GCN_2(\{\Mat{x'}_i\}_{i=1}^N, \mathcal{G}(\mathcal{P},\mathcal{E}))), \\ \label{Eq:2gcn_2} \{\hat{c}_i\}_{i=1}^{N} = FC_3(GCN_2(\{\Mat{x'}_i\}_{i=1}^N, \mathcal{G}(\mathcal{P},\mathcal{E}))), \end{aligned}$$ where the graph structure $\mathcal{G}(\mathcal{P},\mathcal{E})$ is the same as that in Eq.  but the input proposal feature is different. The extended feature $\Mat{x'}_i$ is attained by first extending the temporal boundary of $\Mat{p}_i$ with $\frac{1}{2}$ of its length on both the left and right sides and then extracting the feature within the extended boundary. Here, we adopt two FC layers on top of $GCN_2$, one for predicting the boundary $(\hat{t}_{i,s}, \hat{t}_{i,e})$ and the other for predicting the completeness label $\hat{c}_i$, which indicates whether the proposal is complete or not. It has been demonstrated in [@zhao2017temporal] that incomplete proposals with low tIoU against the ground truths can still receive high classification scores, so ranking proposals by the classification score alone leads to mistakes in the mAP test; additionally applying the completeness score enables us to avoid this issue. **Adjacency Matrix.** In Eq. , we need to compute the adjacency matrix $\Mat{A}$. 
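The extended boundary used to extract $\Mat{x'}_i$ can be computed as below; clipping the result to the video extent is our assumption (the text only specifies the $\frac{1}{2}$-length extension on each side):

```python
def extend_proposal(t_s, t_e, video_len):
    """Extend (t_s, t_e) by half of the proposal length on each side,
    clipped to [0, video_len] (clipping is an assumption)."""
    half = 0.5 * (t_e - t_s)
    return max(0.0, t_s - half), min(video_len, t_e + half)
```

For example, a proposal spanning $[10, 20]$ is extended to $[5, 25]$, so the regression and completeness heads see context on both sides of the original boundary.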
Here, we design the adjacency matrix by assigning specific weights to edges. For example, we can apply the cosine similarity to estimate the weight of edge $e_{ij}$ by $$\label{Eq:adjacent_mat} A_{ij}=\frac{\Vec{x}_{i}^{T}\Vec{x}_{j}}{\|\Vec{x}_{i}\|_{2}\cdot\|\Vec{x}_{j}\|_{2}}.$$ In the above computation, $A_{ij}$ relies on the feature vectors $\Vec{x}_{i}$ and $\Vec{x}_{j}$. We can also map the feature vectors into an embedding space using a learnable linear mapping function as in [@wang2018non] before the cosine computation. We leave the discussion to our experiments. Efficient Training by Sampling {#Sec:training} ------------------------------ Typical proposal generation methods produce thousands of proposals for each video. Applying the aforementioned graph convolution (Eq. ) on all proposals demands a hefty computation and memory footprint. To accelerate the training of GCNs, several approaches [@chen2018fastgcn; @huang2018adaptive; @hamilton2017inductive] have been proposed based on neighbourhood sampling. Here, we adopt the SAGE method [@hamilton2017inductive] in our method for its flexibility. The SAGE method uniformly samples fixed-size neighbourhoods of each node layer-by-layer in a top-down manner. In other words, the nodes of the $(k-1)$-th layer are formulated as the sampled neighbourhoods of the nodes in the $k$-th layer. After all nodes of all layers are sampled, SAGE performs the information aggregation in a bottom-up fashion. Here we specify the aggregation function to be a sampling form of Eq. , namely, $$\label{Eq:gcn-sampling} \Mat{x}_i^{(k)} = \left(\frac{1}{N_s} \sum_{j=1}^{N_s} A_{ij}\Mat{x}_j^{(k-1)} + \Mat{x}_i^{(k-1)}\right)\Mat{W}^{(k)},$$ where node $j$ is sampled from the neighbourhoods of node $i$, i.e., $j\in\mathcal{N}(i)$; $N_s$ is the sampling size and is much smaller than the total number $N$. The summation in Eq.  is further normalized by $N_s$, which empirically makes the training more stable. 
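The cosine-similarity adjacency and one SAGE-style sampled layer can be sketched as follows; sampling with replacement when a node has fewer than $N_s$ neighbours is our assumption, as are the function names:

```python
import numpy as np

def cosine_adjacency(X):
    """A_ij = x_i^T x_j / (||x_i||_2 ||x_j||_2) for all proposal pairs."""
    Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return Xn @ Xn.T

def sampled_layer(A, X, W, neighbours, n_s, rng):
    """One sampled aggregation step: average A_ij * x_j over N_s sampled
    neighbours, add the node's own feature, then apply the linear map W."""
    out = np.zeros((X.shape[0], W.shape[1]))
    for i, nbrs in enumerate(neighbours):
        idx = rng.choice(nbrs, size=n_s, replace=True)
        agg = sum(A[i, j] * X[j] for j in idx) / n_s + X[i]
        out[i] = agg @ W
    return out
```

At test time no sampling is performed, so the dense rule of Eq.  is used over the full neighbourhood instead.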
Besides, we also enforce the self-addition of the feature of node $i$ in Eq. . We do not perform any sampling when testing. For better readability, Algorithm \[Alg:forward\] depicts the algorithmic flow of our method.

**Input:** Proposal set $\Mat{P}=\{\Mat{p}_i\mid\Mat{p}_i=(\Mat{x}_i, (t_{i,s}, t_{i,e}))\}_{i=1}^N$; original proposal features $\{\Mat{x}_{i}^{(0)}\}_{i=1}^N$; extended proposal features $\{\Mat{x'}_{i}^{(0)}\}_{i=1}^N$; graph depth $K$; sampling size $N_s$. **Parameter:** Weight matrices $\textbf{W}^{(k)}$, $\forall k \in \{1,\dots,K\}$.

1. Instantiate the nodes by the proposals $\Mat{p}_i$, $\forall \Mat{p}_i \in \Mat{P}$.
2. Establish edges between nodes to obtain a proposal graph $\mathcal{G}(\mathcal{P}, \mathcal{E})$.
3. Calculate the adjacency matrix using Eq. .
4. Sample $N_s$ neighbourhoods of each node.
5. Aggregate information using Eq. .
6. Predict action categories $\{\hat{y}_i\}_{i=1}^{N}$ using Eq. .
7. Perform boundary regression using Eq. .
8. Predict completeness $\{\hat{c}_i\}_{i=1}^{N}$ using Eq. .

**Output:** Trained P-GCN model. \[Alg:forward\]

Experiments =========== Datasets -------- **THUMOS14 [@jiang2014thumos]** is a standard benchmark for action localization. Its training set, known as the UCF-101 dataset, consists of 13320 videos. The validation, testing and background sets contain 1010, 1574 and 2500 untrimmed videos, respectively. Performing action localization on this dataset is challenging since each video has more than 15 action instances and 71% of its frames are occupied by background items. Following the common setting in [@jiang2014thumos], we use the 200 videos in the validation set for training and conduct evaluation on the 213 annotated videos from the testing set. **ActivityNet [@caba2015activitynet]** is another popular benchmark for action localization on untrimmed videos. We evaluate our method on ActivityNet v1.3, which contains around 10K training videos and 5K validation videos corresponding to 200 different activities. 
Each video has an average of 1.65 action instances. Following the standard practice, we train our method on the training videos and test it on the validation videos. In our experiments, we contrast our method with the state-of-the-art methods on both THUMOS14 and ActivityNet v1.3, and perform ablation studies on THUMOS14. Implementation details ---------------------- **Evaluation Metrics.** We use mean Average Precision (mAP) as the evaluation metric. A proposal is considered to be correct if its temporal IoU with the ground-truth instance is larger than a certain threshold and the predicted category is the same as that of the ground-truth instance. On THUMOS14, the tIoU thresholds are chosen from $\{0.1, 0.2, 0.3, 0.4, 0.5\}$; on ActivityNet v1.3, the IoU thresholds are from $\{0.5, 0.75, 0.95\}$, and we also report the average mAP over the IoU thresholds between 0.5 and 0.95 with a step of $0.05$. **Features and Proposals.** Our model is implemented under the two-stream strategy [@simonyan2014two]: RGB frames and optical flow fields. We first uniformly divide each input video into 64-frame segments. We then use a two-stream Inflated 3D ConvNet (I3D) model pre-trained on Kinetics [@carreira2017quo] to extract the segment features. In detail, the I3D model takes as input the RGB/optical-flow segment and outputs a 1024-dimensional feature vector for each segment. On top of the I3D features, we further apply max pooling across segments to obtain one 1024-dimensional feature vector for each proposal, where the proposals are obtained by the BSN method [@lin2018bsn]. Note that we do not finetune the parameters of the I3D model in our training phase. Besides the I3D features and BSN proposals, our ablation studies in  \[Sec:ablation\] also explore other types of features (2-D features [@lin2018bsn]) and proposals (TAG proposals [@zhao2017temporal]). **Proposal Graph Construction.** We construct the proposal graph by fixing $\theta_{ctx}$ to 0.7 and $\theta_{sur}$ to 1 for both streams. 
More discussions on choosing the values of $\theta_{ctx}$ and $\theta_{sur}$ can be found in the supplementary material. We adopt a 2-layer GCN since we observed no clear improvement with more than 2 layers while the model complexity increased. For efficiency, we choose $N_{s}=4$ in Eq.  for neighbourhood sampling unless otherwise specified. **Training.** The initial learning rate is 0.001 for the RGB stream and 0.01 for the Flow stream. During training, the learning rates are divided by 10 every 15 epochs. The dropout ratio is 0.8. The classification $\hat{y}_i$ and completeness $\hat{c}_i$ are trained with the cross-entropy loss and the hinge loss, respectively. The regression term $(\hat{t}_{i,s}, \hat{t}_{i,e})$ is trained with the smooth $L_{1}$ loss. More training details can be found in the supplementary material. **Testing.** We do not perform neighbourhood sampling (*i.e.* Eq. ) for testing. The predictions of the RGB and Flow streams are fused using a ratio of 2:3. We multiply the classification score with the completeness score as the final score for calculating mAP. We then use Non-Maximum Suppression (NMS) to obtain the final predicted temporal proposals for each action class separately. We use 600 and 100 proposals per video for computing mAPs on THUMOS14 and ActivityNet v1.3, respectively. Comparison with state-of-the-art results ---------------------------------------- **THUMOS14.** Our P-GCN model is compared with the state-of-the-art methods in Table \[Tab:thumos\]. The P-GCN model reaches the highest mAP over all thresholds, implying that our method can recognize and localize actions much more accurately than any other method. Particularly, our P-GCN model outperforms the previously best method (*i.e.* TAL-Net [@chao2018rethinking]) by a 6.3% absolute improvement and the second-best result [@lin2018bsn] by 12.2%, when $tIoU=0.5$. 
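The test-time ranking described above (2:3 stream fusion, classification multiplied by completeness, then per-class temporal NMS) can be sketched as below; applying the 2:3 ratio to both score types before taking their product is our assumption:

```python
import numpy as np

def fuse_scores(cls_rgb, cls_flow, comp_rgb, comp_flow):
    """Fuse the two streams with a 2:3 ratio, then multiply the fused
    classification score by the fused completeness score."""
    cls = (2 * cls_rgb + 3 * cls_flow) / 5
    comp = (2 * comp_rgb + 3 * comp_flow) / 5
    return cls * comp

def temporal_nms(proposals, scores, thresh=0.5):
    """Greedy NMS on 1-D intervals: keep the best-scoring proposal, drop
    any remaining proposal whose tIoU with it exceeds `thresh`, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        rest = []
        for j in order[1:]:
            inter = max(0.0, min(proposals[i][1], proposals[j][1])
                        - max(proposals[i][0], proposals[j][0]))
            union = (max(proposals[i][1], proposals[j][1])
                     - min(proposals[i][0], proposals[j][0]))
            if inter / union <= thresh:
                rest.append(j)
        order = rest
    return keep
```

This would be run once per action class, retaining the top 600 (THUMOS14) or 100 (ActivityNet v1.3) surviving proposals per video.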
tIoU 0.1 0.2 0.3 0.4 0.5 -------------------------------- ---------- ---------- ---------- ---------- ---------- -- -- Oneata [@oneata2014lear] 36.6 33.6 27.0 20.8 14.4 Wang [@wang2014action] 18.2 17.0 14.0 11.7 8.3 Caba [@caba2016fast] - - - - 13.5 Richard [@richard2016temporal] 39.7 35.7 30.0 23.2 15.2 Shou [@shou2016temporal] 47.7 43.5 36.3 28.7 19.0 Yeung [@yeung2016end] 48.9 44.0 36.0 26.4 17.1 Yuan [@yuan2016temporal] 51.4 42.6 33.6 26.1 18.8 Escorcia [@escorcia2016daps] - - - - 13.9 Buch [@buch2017sst] - - 37.8 - 23.0 Shou [@shou2017cdc] - - 40.1 29.4 23.3 Yuan [@Yuan2017] 51.0 45.2 36.5 27.8 17.8 Buch [@buch2017end] - - 45.7 - 29.2 Gao [@gao2017cascaded] 60.1 56.7 50.1 41.3 31.0 Hou [@hou2017real] 51.3 - 43.7 - 22.0 Dai [@dai2017temporal] - - - 33.3 25.6 Gao [@gao2017turn] 54.0 50.9 44.1 34.9 25.6 Xu [@xu2017r] 54.5 51.5 44.8 35.6 28.9 Zhao [@zhao2017temporal] 66.0 59.4 51.9 41.0 29.8 Lin [@lin2018bsn] - - 53.5 45.0 36.9 Chao [@chao2018rethinking] 59.8 57.1 53.2 48.5 42.8 P-GCN **69.5** **67.8** **63.6** **57.8** **49.1** : Action localization results on THUMOS14, measured by mAP (%) at different tIoU thresholds $\alpha$. \[Tab:thumos\] tIoU 0.5 0.75 0.95 Average ----------------------------- ----------- ----------- ------ ----------- Singh [@singh2016untrimmed] 34.47 - - - Wang [@wang2016uts] 43.65 - - - Shou [@shou2017cdc] 45.30 26.00 0.20 23.80 Dai [@dai2017temporal] 36.44 21.15 3.90 - Xu [@xu2017r] 26.80 - - - Zhao [@zhao2017temporal] 39.12 23.48 5.49 23.98 Chao [@chao2018rethinking] 38.23 18.30 1.30 20.22 P-GCN **42.90** **28.14** 2.47 **26.99** Lin [@lin2018bsn] \* 46.45 29.96 8.02 30.03 P-GCN\* **48.26** **33.16** 3.27 **31.11** : Action localization results on ActivityNet v1.3 (val), measured by mAP (%) at different tIoU thresholds and the average mAP of IoU thresholds from 0.5 to 0.95. (\*) indicates the method that uses the external video labels from UntrimmedNet [@wang2017untrimmed]. 
\[Tab:anet\] **ActivityNet v1.3.** Table \[Tab:anet\] reports the action localization results of various methods. Regarding the average mAP, P-GCN outperforms SSN [@zhao2017temporal], CDC [@shou2017cdc], and TAL-Net [@chao2018rethinking] by 3.01%, 3.19%, and 6.77%, respectively. We observe that the method by Lin et al. [@lin2018bsn] (called LIN below) performs promisingly on this dataset. Note that LIN is originally designed for generating class-agnostic proposals, and thus relies on external video-level action labels (from UntrimmedNet [@wang2017untrimmed]) for action localization. In contrast, our method is self-contained and is able to perform action localization without any external label. Nevertheless, P-GCN can be modified to take external labels into account. To achieve this, we assign the top-2 video-level classes predicted by UntrimmedNet to all the proposals in that video. We provide more details about how to involve external labels in P-GCN in the supplementary material. As summarized in Table \[Tab:anet\], our enhanced version P-GCN\* consistently outperforms LIN, hence demonstrating the effectiveness of our method under the same setting. ![Action localization results on THUMOS14 with different backbones, measured by mAP@tIoU=0.5.](Fig3.pdf){width="\linewidth"} \[Fig:instantiations\] Ablation Studies ================ In this section, we perform in-depth ablation studies to evaluate the impact of each component of our model. More details about the structures of the baseline methods (such as MLP and MP) can be found in the supplementary material. \[Sec:ablation\] How do the proposal-proposal relations help? {#Sec:5.1} -------------------------------------------- As illustrated in  \[Sec:gcn\], we apply two GCNs for action classification and boundary regression separately. Here, we implement the baseline with a 2-layer multilayer perceptron (MLP). 
The MLP baseline shares the same structure as the GCN except that we remove the adjacency matrix $\Mat{A}$ in Eq. . To be specific, for the $k$-th layer, the propagation in Eq.  becomes $\Mat{X}^{(k)}=\Mat{X}^{(k-1)}\Mat{W}^{(k)}$, where $\Mat{W}^{(k)}$ are the trainable parameters. Without using $\Mat{A}$, the MLP processes each proposal feature independently. By comparing the performance of the MLP with the GCN, we can justify the importance of message passing among proposals. To do so, we replace each GCN with an MLP, obtaining the following variants of our model: (1) **MLP$_1$ + GCN$_2$** where GCN$_1$ is replaced; (2) **GCN$_1$ + MLP$_2$** where GCN$_2$ is replaced; and (3) **MLP$_1$ + MLP$_2$** where both GCNs are replaced. Table \[Tab:twographs\] shows that all these variants decrease the performance of our model, thus verifying the effectiveness of GCNs for both action classification and boundary regression. Overall, our model P-GCN significantly outperforms the MLP protocol (**MLP$_1$ + MLP$_2$**), validating the importance of considering proposal-proposal relations in temporal action localization. How does the graph convolution help? ------------------------------------ Besides graph convolutions, performing mean pooling among proposal features is another way to enable information dissemination between proposals. We thus conduct another baseline by first applying an MLP on the proposal features and then performing mean pooling on the output of the MLP over adjacent proposals. The adjacent connections are given by the same graph as used for the GCN. We term this baseline MP below. Similar to the setting in  \[Sec:5.1\], we have three variants of our model: (1) **MP$_1$ + MP$_2$**; (2) **MP$_1$ + GCN$_2$**; and (3) **GCN$_1$ + MP$_2$**. We report the results in Table \[Tab:mean-pooling\]. Our P-GCN outperforms all MP variants, demonstrating the superiority of graph convolution over mean pooling in capturing between-proposal connections. 
The protocol **MP$_1$ + MP$_2$** in Table \[Tab:mean-pooling\] performs better than **MLP$_1$ + MLP$_2$** in Table \[Tab:twographs\], which again reveals the benefit of modeling the proposal-proposal relations, even when we pursue it with naive mean pooling. mAP@tIoU=0.5 RGB Gain Flow Gain --------------------------- ----------- ---------- ----------- ---------- MLP$_1$ + MLP$_2$ 34.75 - 43.68 - MLP$_1$ + GCN$_2$ 35.94 1.19 44.59 0.91 GCN$_1$ + MLP$_2$ 35.82 1.07 45.26 1.58 P-GCN (GCN$_1$ + GCN$_2$) **37.27** **2.52** **46.53** **2.85** : Comparison between our P-GCN model and MLP on THUMOS14, measured by mAP (%). \[Tab:twographs\] mAP@tIoU=0.5 RGB Gain Flow Gain --------------------------- ----------- ---------- ----------- ---------- MP$_1$ + MP$_2$ 35.32 - 43.97 - MP$_1$ + GCN$_2$ 36.50 1.18 45.78 1.81 GCN$_1$ + MP$_2$ 36.22 0.90 44.42 0.45 P-GCN (GCN$_1$ + GCN$_2$) **37.27** **1.95** **46.53** **2.56** : Comparison between our P-GCN model and mean-pooling (MP) on THUMOS14, measured by mAP (%). \[Tab:mean-pooling\] mAP@tIoU=0.5 RGB Flow -------------------------------- ------- ------- MLP 34.75 43.68 P-GCN(cos-sim) 35.55 44.83 P-GCN(cos-sim, self-add) 37.27 46.53 P-GCN(embed-cos-sim, self-add) 36.81 46.89 : Comparison of different types of edge functions on THUMOS14, measured by mAP (%). \[Tab:edge\] Influences of different backbones --------------------------------- Our framework is general and compatible with different backbones (i.e., proposals and features). Besides the backbones applied above, we further perform experiments on TAG proposals [@zhao2017temporal] and 2D features [@lin2018bsn]. We try different combinations: (1) BSN+I3D; (2) BSN+2D; (3) TAG+I3D; (4) TAG+2D, and report the results of MLP and P-GCN in Figure \[Fig:instantiations\]. In comparison with MLP, our P-GCN leads to significant and consistent improvements for all types of features and proposals. 
These results show that our method is generally effective and is not limited to a specific feature or proposal type. The weights of edge and self-addition ------------------------------------- We have defined the weights of edges in Eq. , where the cosine similarity (cos-sim) is applied. This similarity can be further extended by first embedding the features before the cosine computation. We call the embedded version embed-cos-sim, and compare it with cos-sim in Table \[Tab:edge\]. No obvious improvement is attained by replacing cos-sim with embed-cos-sim (the mAP difference between them is less than $0.4\%$). Eq.  has considered the self-addition of the node feature. We also investigate the importance of this term in Table \[Tab:edge\]. It suggests that the self-addition leads to at least 1.7% absolute improvements on both RGB and Flow streams. Is it necessary to consider two types of edges? ----------------------------------------------- To evaluate the necessity of formulating two types of edges, we perform experiments on two variants of our P-GCN, each of which considers only one type of edge in the graph construction stage. As expected, the result in Table \[Tab:surrounding\] drops remarkably when either kind of edge is removed. Another crucial point is that our P-GCN still boosts MLP when only the surrounding edges are retained. The rationale behind this could be that actions in the same video are correlated and exploiting the surrounding relation enables more accurate action classification. mAP@tIoU=0.5 RGB Gain Flow Gain ----------------------- ------- ------- ------- ------- w/ both edges (P-GCN) 37.27 - 46.53 - w/o surrounding edges 35.84 -1.43 45.89 -0.64 w/o contextual edges 36.81 -0.46 45.57 -0.96 w/o both edges (MLP) 34.75 -2.52 43.68 -2.85 : Comparison of two types of edge on THUMOS14, measured by mAP (%). 
\[Tab:surrounding\] $N_s$ 1 2 3 4 5 10 --------- ------- ------- ------- ----------- ------- ------- RGB 36.0 36.92 35.68 **37.27** 36.11 36.37 Flow 46.15 45.06 45.13 **46.53** 46.28 46.14 Time(s) 0.10 0.23 0.33 0.41 0.48 1.72 : Comparison of different sampling sizes and the training time for each iteration on THUMOS14, measured by mAP@tIoU=0.5. \[Tab:sampling\] The efficiency of our sampling strategy --------------------------------------- We train P-GCN efficiently based on the neighbourhood sampling in Eq. . Here, we are interested in how the sampling size $N_{s}$ affects the final performance. Table \[Tab:sampling\] reports the testing mAPs corresponding to different $N_s$ varying from 1 to 5 (and also 10). The training time per iteration is also added in Table \[Tab:sampling\]. We observe that with $N_{s}=4$ the model achieves higher mAP than the full model (i.e., $N_{s}=10$) while reducing the training time of each iteration by 76%. This is interesting, as sampling fewer nodes even yields better results. We conjecture that the neighbourhood sampling brings in more stochasticity and helps our model escape from local minima during training, thus delivering better results. ![Qualitative results on THUMOS14 dataset.](Fig4.pdf){width="\linewidth"} \[Fig:qualitative\] Qualitative Results ------------------- Given the significant improvements, we also attempt to find out in what cases our P-GCN model improves over MLP. We visualize the qualitative results on THUMOS14 in Figure  \[Fig:qualitative\]. In the top example, both MLP and our P-GCN model are able to predict the action category correctly, while P-GCN predicts a more precise temporal boundary. In the bottom example, due to similar action characteristics and context, MLP predicts the action of “Shotput” as “Throw Discus”. Despite this challenge, P-GCN still correctly predicts the action category, demonstrating the effectiveness of our method. 
More qualitative results can be found in the supplementary material. Conclusions =========== In this paper, we have exploited the proposal-proposal interactions to tackle the task of temporal action localization. By constructing a graph of proposals and applying GCNs for message passing, our P-GCN model outperforms the state-of-the-art methods by a large margin on two benchmarks, i.e., THUMOS14 and ActivityNet v1.3. It would be interesting to extend our P-GCN to object detection in images, and we leave this for future work. This work was partially supported by National Natural Science Foundation of China (NSFC) 61602185, 61836003 (key project), Program for Guangdong Introducing Innovative and Entrepreneurial Teams 2017ZT07X183, Guangdong Provincial Scientific and Technological Funds under Grants 2018B010107001, and Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201902). Shyamal Buch, Victor Escorcia, Bernard Ghanem, Li Fei-Fei, and Juan Carlos Niebles. End-to-end, single-stream temporal action detection in untrimmed videos. In [*Proceedings of the British Machine Vision Conference*]{}, 2017. Shyamal Buch, Victor Escorcia, Chuanqi Shen, Bernard Ghanem, and Juan Carlos Niebles. Sst: Single-stream temporal action proposals. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 6373–6382. IEEE, 2017. Fabian Caba Heilbron, Juan Carlos Niebles, and Bernard Ghanem. Fast temporal activity proposals for efficient detection of human actions in untrimmed videos. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 1914–1923, 2016. Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 961–970, 2015. Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? 
a new model and the kinetics dataset. In [*proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 6299–6308, 2017. Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A. Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In [*The IEEE Conference on Computer Vision and Pattern Recognition*]{}, June 2018. Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: fast learning with graph convolutional networks via importance sampling. , 2018. Xiyang Dai, Bharat Singh, Guyue Zhang, Larry S. Davis, and Yan Qiu Chen. Temporal context network for activity localization in videos. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, Oct 2017. Chaorui Deng, Qi Wu, Qingyao Wu, Fuyuan Hu, Fan Lyu, and Mingkui Tan. Visual grounding via accumulated attention. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 7746–7755, 2018. Xuguang Duan, Wenbing Huang, Chuang Gan, Jingdong Wang, Wenwu Zhu, and Junzhou Huang. Weakly supervised dense event captioning in videos. In [*Advances in Neural Information Processing Systems*]{}, pages 3059–3069, 2018. Victor Escorcia, Fabian Caba Heilbron, Juan Carlos Niebles, and Bernard Ghanem. Daps: Deep action proposals for action understanding. In [*Proceedings of the European Conference on Computer Vision*]{}, pages 768–784, 2016. Lijie Fan, Wenbing Huang, Chuang Gan, Stefano Ermon, Boqing Gong, and Junzhou Huang. End-to-end learning of motion representation for video understanding. In [*The IEEE Conference on Computer Vision and Pattern Recognition*]{}, June 2018. Chuang Gan, Boqing Gong, Kun Liu, Hao Su, and Leonidas J Guibas. Geometry guided convolutional neural networks for self-supervised video representation learning. In [*CVPR*]{}, pages 5589–5597, 2018. Chuang Gan, Naiyan Wang, Yi Yang, Dit-Yan Yeung, and Alex G Hauptmann. 
Devnet: A deep event network for multimedia event detection and evidence recounting. In [*CVPR*]{}, pages 2568–2577, 2015. Chuang Gan, Yi Yang, Linchao Zhu, Deli Zhao, and Yueting Zhuang. Recognizing an action using its name: A knowledge-based approach. , 120(1):61–77, 2016. Chuang Gan, Ting Yao, Kuiyuan Yang, Yi Yang, and Tao Mei. You lead, we exceed: Labor-free video concept learning by jointly exploiting web videos and images. In [*CVPR*]{}, pages 923–932, 2016. Jiyang Gao, Zhenheng Yang, Kan Chen, Chen Sun, and Ram Nevatia. Turn tap: Temporal unit regression network for temporal action proposals. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, pages 3628–3636, 2017. Jiyang Gao, Zhenheng Yang, and Ram Nevatia. Cascaded boundary regression for temporal action detection. In [*BMVC*]{}, 2017. Yong Guo, Qi Chen, Jian Chen, Qingyao Wu, Qinfeng Shi, and Mingkui Tan. Auto-embedding generative adversarial networks for high resolution image synthesis. , 2019. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In [*Advances in Neural Information Processing Systems*]{}, pages 1024–1034, 2017. Rui Hou, Rahul Sukthankar, and Mubarak Shah. Real-time temporal action localization in untrimmed videos by sub-action discovery. In [*BMVC*]{}, 2017. Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In [*The IEEE Conference on Computer Vision and Pattern Recognition*]{}, June 2018. Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In [*Advances in Neural Information Processing Systems*]{}, pages 4558–4567, 2018. YG Jiang, J Liu, A Roshan Zamir, G Toderici, I Laptev, M Shah, and R Sukthankar. Thumos challenge: Action recognition with a large number of classes, 2014. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. 
In [*International Conference on Learning Representations*]{}, 2017. Tianwei Lin, Xu Zhao, and Zheng Shou. Single shot temporal action detection. In [*Proceedings of the 2017 [ACM]{} on Multimedia Conference, [MM]{} 2017, Mountain View, CA, USA, October 23-27, 2017*]{}, pages 988–996, 2017. Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In [*The European Conference on Computer Vision*]{}, September 2018. Alberto Montes, Amaia Salvador, and Xavier Giro-i Nieto. Temporal activity detection in untrimmed videos with recurrent neural networks. , 2016. Dan Oneata, Jakob Verbeek, and Cordelia Schmid. The lear submission at thumos 2014. 2014. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In [*Advances in neural information processing systems*]{}, pages 91–99, 2015. Alexander Richard and Juergen Gall. Temporal action detection using a statistical language model. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 3131–3140, 2016. Yantao Shen, Hongsheng Li, Shuai Yi, Dapeng Chen, and Xiaogang Wang. Person re-identification with deep similarity-guided graph neural network. In [*The European Conference on Computer Vision*]{}, September 2018. Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, July 2017. Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 1049–1058, 2016. Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. 
In [*Advances in neural information processing systems*]{}, pages 568–576, 2014. Gurkirt Singh and Fabio Cuzzolin. Untrimmed video classification for activity detection: submission to activitynet challenge. , 2016. Mingkui Tan, Qinfeng Shi, Anton van den Hengel, Chunhua Shen, Junbin Gao, Fuyuan Hu, and Zhen Zhang. Learning graph structure for multi-label image classification via clique generation. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 4100–4109, 2015. Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, pages 4489–4497, 2015. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, [Ł]{}ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In [*Advances in Neural Information Processing Systems*]{}, pages 5998–6008, 2017. Limin Wang, Yu Qiao, and Xiaoou Tang. Action recognition and detection by combining motion and appearance features. , 1(2):2, 2014. Limin Wang, Yuanjun Xiong, Dahua Lin, and Luc Van Gool. Untrimmednets for weakly supervised action recognition and detection. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, July 2017. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In [*Proceedings of the European Conference on Computer Vision*]{}, pages 20–36. Springer, 2016. Ruxin Wang and Dacheng Tao. Uts at activitynet 2016. , 2016. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 7794–7803, 2018. Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. 
In [*The European Conference on Computer Vision*]{}, September 2018. Huijuan Xu, Abir Das, and Kate Saenko. R-c3d: Region convolutional 3d network for temporal activity detection. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, Oct 2017. Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. , 2018. Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 2678–2687, 2016. Jun Yuan, Bingbing Ni, Xiaokang Yang, and Ashraf A Kassim. Temporal action localization with pyramid of score distribution features. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 3093–3102, 2016. Zehuan Yuan, Jonathan C. Stroud, Tong Lu, and Jia Deng. Temporal action localization by structured maximal sums. In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, July 2017. Runhao Zeng, Chuang Gan, Peihao Chen, Wenbing Huang, Qingyao Wu, and Mingkui Tan. Breaking winner-takes-all: Iterative-winners-out networks for weakly supervised temporal action localization. , 2019. Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, Oct 2017. Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, and Jinhui Zhu. Discrimination-aware channel pruning for deep neural networks. In [*Advances in Neural Information Processing Systems*]{}, pages 875–886, 2018. Proposal Features ================= We have two types of proposal features and the process of feature extraction is shown in Figure \[Fig:feature\]. 
**Proposal features.** For the original proposal, we first obtain a set of segment-level features within the proposal and then apply max-pooling across segments to obtain one 1024-dimensional feature vector. **Extended proposal features.** The boundary of the original proposal is extended by $\frac{1}{2}$ of its length on both the left and right sides, resulting in the extended proposal. Thus, the extended proposal has three portions: *start*, *center* and *end*. For each portion, we follow the same feature extraction process as for the original proposal. Finally, the extended proposal feature is obtained by concatenating the features of the three portions. ![The illustration of (extended) proposal feature extraction.[]{data-label="Fig:feature"}](supp-Fig1.pdf){width="\linewidth"} Network Architectures ===================== **P-GCN.** The network architecture of our P-GCN model is shown in Figure \[Fig:network\]. Let $N$ and $N_{class}$ be the number of proposals in one video and the total number of action categories, respectively. On top of the GCNs, we have three fully-connected (FC) layers for different purposes. The one with $N_{class}\times 2$ outputs is for boundary regression, and the other two with $N_{class}$ outputs are designed for action classification and completeness classification, respectively. ![The network architecture of the P-GCN model.[]{data-label="Fig:network"}](supp-Fig2.pdf){width=".85\linewidth"} ![The network architecture of the MLP baseline.[]{data-label="Fig:mlp_baseline"}](supp-Fig3.pdf){width=".9\linewidth"} ![The network architecture of the Mean-Pooling baseline.[]{data-label="Fig:mp_baseline"}](supp-Fig4.pdf){width=".9\linewidth"} **MLP baseline.** The network architecture of the MLP baseline is shown in Figure \[Fig:mlp\_baseline\]. We replace each of the GCNs with a 2-layer multilayer perceptron (MLP). We set the number of parameters in the MLP to be the same as the GCN's for a fair comparison.
Note that the MLP processes each proposal independently without exploiting the relations between proposals. **Mean-Pooling baseline.** As shown in Figure \[Fig:mp\_baseline\], the network architecture of the Mean-Pooling baseline is the same as the MLP baseline's, except that we conduct mean-pooling on the output of the MLP over the adjacent proposals.

Training Details
================

We have three types of training samples chosen by two criteria, i.e., the best tIoU and the best overlap. For each proposal, we calculate its tIoU with all the ground-truth instances in that video and choose the largest tIoU as the best tIoU (similarly for the best overlap). For simplicity, we denote the best tIoU and best overlap as tIoU and OL. Then, the three types of training samples can be described as: (1) Foreground sample: $tIoU\ge \theta_1$; (2) Incomplete sample: $OL\ge \theta_2,tIoU\le \theta_3$; (3) Background sample: $tIoU\le \theta_4$. These thresholds differ slightly between the two datasets, as shown in Table \[Tab:thr\]. We consider all foreground proposals to be complete proposals.

  Dataset            $\theta_1$   $\theta_2$   $\theta_3$   $\theta_4$
  ------------------ ------------ ------------ ------------ ------------
  THUMOS14           0.7          0.7          0.3          0
  ActivityNet v1.3   0.7          0.7          0.6          0.1

  : The thresholds on different datasets.

\[Tab:thr\]

Each mini-batch contains examples sampled from a single video. The ratio of the three types of samples is fixed to (1):(2):(3)=1:6:1. We set the mini-batch size to 32 on THUMOS14 and 64 on ActivityNet v1.3. For efficiency, we fix the number of neighbours for each node to 10 by selecting the contextual edges with the largest relevance scores and the surrounding edges with the smallest distances, where the ratio of contextual to surrounding edges is set to 4:1. In addition, we empirically found that setting $A_{i,j}$ to 0 (when $A_{i,j}<0$) leads to better results.
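The three training-sample criteria described in this section can be sketched as a simple assignment rule. This is a minimal illustration; the function name and the "ignore" outcome for proposals matching none of the criteria are assumptions, not spelled out in the text.

```python
def sample_type(tiou, overlap, thresholds):
    """Assign a proposal to one of the three training-sample types.

    `tiou` and `overlap` are the best tIoU and best overlap (OL) of the
    proposal; `thresholds` is (theta1, theta2, theta3, theta4), e.g.
    (0.7, 0.7, 0.3, 0.0) for THUMOS14 per the table above.
    """
    t1, t2, t3, t4 = thresholds
    if tiou >= t1:
        return "foreground"          # also counted as a complete proposal
    if overlap >= t2 and tiou <= t3:
        return "incomplete"
    if tiou <= t4:
        return "background"
    return "ignore"                  # assumption: unmatched proposals unused
```

Sampling then draws these types per mini-batch in the fixed 1:6:1 ratio stated above.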
Loss function
=============

**Multi-task Loss.** Our P-GCN model can not only predict the action category but also refine the proposal's temporal boundary by location regression. With the action classifier, completeness classifier and location regressors, we define a multi-task loss by: $$\begin{aligned} \mathcal{L}&=\sum_{i}\mathcal{L}_{cls}(y_i,\hat{y}_i)+\lambda_{1}\sum_{i}[y_i\ge1, e_i=1]\mathcal{L}_{reg}(o_i,\hat{o}_i) \\ &+\lambda_{2}\sum_{i}[y_i\ge1]\mathcal{L}_{com}(e_i,\hat{c}_i), \end{aligned}$$ where $\hat{y}_i$ and $y_i \in \{0,\dots,N_{class}\}$ are the predicted probability and the ground-truth action label of the $i$-th proposal in a mini-batch, respectively. Here, 0 represents the background class. $e_i$ is the completeness label. $\hat{o}_i$ and $o_i$ are the predicted and ground-truth offsets, which will be detailed below. In all experiments, we set $\lambda_{1}=\lambda_{2}=0.5$. **Completeness Loss.** Here, the completeness term $\mathcal{L}_{com}$ is used only when $y_i\ge1$, i.e., the proposal is not considered part of the background. **Regression Loss.** We devise a set of location regressors $\{R_m\}_{m=1}^{N_{class}}$, one for each activity category. For a proposal, we regress the boundary using the closest ground-truth instance as the target. Our P-GCN model does not predict the start and end times of each proposal directly. Instead, it predicts the offset $\hat{o}_i=(\hat{o}_{i,c}, \hat{o}_{i,l})$ relative to the proposal, where $\hat{o}_{i,c}$ and $\hat{o}_{i,l}$ are the offsets of the center coordinate and the length, respectively. The ground-truth offset is denoted as $o_i=(o_{i,c}, o_{i,l})$ and parameterized by: $$\begin{aligned} o_{i,c}&=(c_i-c_{gt})/l_i, \\ o_{i,l}&=\log(l_i/l_{gt}), \end{aligned}$$ where $c_i$ and $l_i$ denote the original center coordinate and length of the proposal, respectively. $c_{gt}$ and $l_{gt}$ account for the center coordinate and length of the closest ground truth, respectively.
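The offset parameterization above can be sketched together with its inverse. How the predicted offsets are applied at inference is not spelled out in the text; the decoding below simply inverts the encoding, which is an assumption.

```python
import math

def encode_offsets(c_i, l_i, c_gt, l_gt):
    """Regression targets for a proposal (center c_i, length l_i) against
    the closest ground-truth instance (center c_gt, length l_gt)."""
    o_c = (c_i - c_gt) / l_i       # center offset, normalized by length
    o_l = math.log(l_i / l_gt)     # log-ratio of lengths
    return o_c, o_l

def decode_offsets(c_i, l_i, o_c, o_l):
    """Invert the parameterization to obtain a refined center and length."""
    return c_i - o_c * l_i, l_i / math.exp(o_l)
```

Normalizing the center offset by the proposal length and regressing the log of the length ratio makes the targets scale-invariant, the same design used by box regression in object detectors.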
$\mathcal{L}_{reg}$ is the smooth L1 loss and is used when $y_i\ge1$ and $e_i=1$, i.e., the proposal is a foreground sample.

Details of Augmentation Experiments on ActivityNet
==================================================

Our P-GCN model can be further augmented by taking external video-level labels into account. To achieve this, we replace the predicted action classes in Eq. (6) with the external action labels. Specifically, given an input video, we use UntrimmedNet to predict the top-2 video-level classes and assign these classes to all the proposals in this video. In this way, each proposal has two action classes. To further compute mAP, the score of each proposal is required. In our implementation, we follow the settings in BSN by calculating $$s_{prop} = s_{act} * s_{com} * s_{bsn} * s_{unet},$$ where $s_{act}$ and $s_{com}$ are the action score and completeness score associated with the action class, $s_{bsn}$ represents the confidence score produced by BSN, and $s_{unet}$ denotes the action score predicted by UntrimmedNet.

Explanation and ablation study of $\theta_{ctx}$
================================================

The parameter $\theta_{ctx}$ is a threshold value for constructing contextual edges, $r(\Vec{p}_i, \Vec{p}_j)> \theta_{ctx}$. Since $r(\Vec{p}_i, \Vec{p}_j) \in [0,1]$, $\theta_{ctx}$ can be chosen from $[0,1)$. An ablation study is shown in Table \[tab:theta\]. Our method performs well for $\theta_{ctx}=0.7, 0.8, 0.9$.

\[tab:theta\]

Ablation study of boundary regression
=====================================

We conducted an ablation study on boundary regression in Table \[tab:random\], whose results validate the necessity of using boundary regression.

  mAP@tIoU=0.5                  RGB        Flow
  ----------------------------- ---------- ----------
  without boundary regression   36.4       45.4
  with boundary regression      **37.3**   **46.5**

  : Ablation results of boundary regression on THUMOS14.
\[tab:random\]

  -------------- -------- -------- ------ ------
                                    RGB    Flow
  MLP baseline    0.376s   16.57M   34.8   43.7
  P-GCN           0.404s   17.70M   37.3   46.5
  -------------- -------- -------- ------ ------

  : Runtime/computation complexity in FLOPs/action localization mAP on THUMOS14. We train each model for 200 iterations on a Titan X GPU and report the average processing time per video per iteration (note that proposal generation and feature extraction are excluded for each model). For P-GCN, we choose the number of sampled neighbourhoods as $N_s=4$.

\[tab:runtime\]

Additional runtime compared to \[52\]
=====================================

The MLP baseline is indeed a particular implementation of \[52\], and it shares the same number of parameters as our P-GCN. We compare the runtime between P-GCN and MLP in Table \[tab:runtime\]. The table shows that the GCN incurs only a relatively small additional runtime compared to the MLP while boosting the performance significantly.

[^1]: indicates equal contributions. This work was done when Runhao Zeng served as a research intern in Tencent AI Lab under the supervision of Wenbing Huang.

[^2]: Corresponding author
--- author: - 'J.-U. Ness, J.H.M.M. Schmitt, M. Audard, M. Güdel, R. Mewe' date: 'Received January 23, 2003; accepted June 06, 2003' subtitle: A systematic investigation of opacity effects title: 'Are stellar coronae optically thin in X-rays?' --- Introduction {#intro} ============ The emission line spectra obtained with the gratings on board the new X-ray observatories XMM-Newton and Chandra allow us to measure individual X-ray emission lines originating from ions in high ionization stages. These emission lines probe the hot tenuous plasma in stellar coronae. Obviously, the solar corona is much easier to study than stellar coronae, and observing techniques and methods originally developed for the analysis of the solar corona can now be applied to stellar coronae many years later with much improved technology. The theory required for X-ray spectroscopy developed in the 1960s and 70s now experiences a revival with the new generation of X-ray instruments applied to study stellar coronae. The basic assumptions underlying almost all theoretical and observational analyses of solar and stellar coronal emission lines are, first, that the plasma is optically thin, and second, that the plasma is in collisional equilibrium. The latter implies that excitations are exclusively due to collisions and not to photo-excitation, the former implies that all photons produced in the hot plasma escape without further interaction. The plasma then cools through radiation (and possibly conduction). Radiative transport does not need to be considered, which makes the interpretation of coronal spectra and modeling of the coronal plasma much easier.\ If this assumption were not true, opacity effects would first become visible in strong resonance lines. Resonance line photons could be absorbed and re-emitted in other directions. 
Depending on the plasma geometry, resonance line photons can be scattered out of the line of sight, but photons can also be scattered from other directions into the line of sight. Photons scattered back to the stellar surface will be absorbed rather than escape. Therefore, the line intensities of lines with strong scattering are reduced when compared to lines with no scattering. This effect is called resonant scattering. In coronal equilibrium forbidden lines can always be considered optically thin because of their low radiative transition probabilities. Therefore the effect of resonant scattering can be recognized by resonance lines being damped in comparison to forbidden lines. Thus, the basic principle for detecting resonance scattering is to measure line flux ratios of definitely non-damped forbidden lines with low oscillator strengths $f$ and resonance lines with high oscillator strengths. If this ratio is found to be enhanced compared to line ratios from theoretical predictions or from laboratory measurements, the resonance line should be considered optically thick. For a detailed account of the underlying theory we refer to [@bhatia01; @schmelz97; @mar92]. We derive the reference line ratios from the line databases MEKAL[^1] [@mewe95], Chianti with ionization balances from [@ar] [@dere01; @young02], and APEC[^2] [e.g., @smith01].\ In the solar context the problem of resonant scattering of X-ray emission lines has been discussed with rather controversial conclusions. [@actcat76], [@acton78], and [@strong78] investigated the effects of resonant scattering for various He-like ions, especially the O[vii]{} resonance line at 21.6Å. They found differences between theoretical and observed values of the temperature sensitive G-ratio (f+i)/r [@gj69] and interpreted these differences as being due to resonant scattering effects. 
[@schmelz97] and [@saba99] measured five different line ratios and found significant optical depths only for the Fe[xvii]{} line at 15.03Å ($^1$S$_0\rightarrow^1$P$_1$ with a high oscillator strength $f=2.66$). They compared the 15.03Å line flux with line flux measurements for Fe[xvii]{} lines with lower oscillator strengths, namely two intercombination lines, $^1$S$_0\rightarrow^3$D$_1$ at 15.27Å with $f=0.593$ and $^1$S$_0\rightarrow^3$P$_1$ at 16.78Å with $f=0.1$. The different oscillator strengths indicate to what extent the transition can be subject to resonant scattering, i.e., the probability for resonant scattering of the 15.27Å line is less than a quarter of that of the 15.03Å line, while resonant scattering of the 16.78Å line is even less probable, i.e., 0.04 times that for the 15.03Å line. For the prediction of such line ratios in optically thin cases, theory and experiment unfortunately do not agree with each other. The 15.27/15.03Å line ratio has been measured in the Electron Beam Ion Trap [EBIT; @brown98; @brown01; @laming00]. These experiments typically yield Fe[xvii]{} 15.27/15.03Å photon flux ratios in the range 0.3 to 0.36, which differ significantly from those expected from theoretical calculations. Also, [@brown01; @phil97] point out that contamination of the 15.27Å line by an Fe[xvi]{} satellite line can further enhance the observed 15.27/15.03Å photon flux ratio, especially in cooler plasmas (below $\sim 3$MK).\ The optical thickness of stellar coronae has been investigated for EUV lines [e.g., @schr94; @schm96] measured with the Extreme Ultraviolet Explorer (EUVE). While [@schr94] claim to have found evidence for resonant scattering, [@schm96] argue using additional ROSAT observations that resonant scattering does not appear to be required for the interpretation of the EUV and X-ray spectra of inactive cool stars. Based on Chandra LETGS measurements, [@ness01] ruled out optical depth effects in their analysis of Procyon and Capella.
The assumption of a significant optical depth leads to unreasonably large emission measures, contradicting their direct measurements of emission measures. [@mewe01] measured an Fe[xvii]{} 15.27/15.03Å photon flux ratio for Capella of $0.35\,\pm\,0.02$ and derive a formal value of an optical depth $\tau$ (assuming slab geometries), which can be used to constrain loop lengths. Opacity effects have also been addressed for Capella by [@phil01] using the same ratios and were found to be negligible. [@ness_alg] measured the same ratio for the much more active star Algol and found it identical to the Capella measurement. From this consistency they conclude that resonant scattering effects might in general be negligible for all coronae, rather than being identical for all kinds of different coronae. This hypothesis is also supported by [@aud03] from an analysis of a sample of five active RS CVn stars, where similar ratios are likewise measured for all stars.\ The purpose of this paper is a systematic investigation of potential optical depth effects in a large sample of stars covering a wide range of activity levels. We will specifically analyze two Fe[xvii]{} line ratios and He-like f/r ratios of O[vii]{} and Ne[ix]{} for all cool stars for which high-resolution spectra from the new X-ray instruments are available. We analyze 22 spectra obtained with the Reflection Grating Spectrometer (RGS) on board XMM-Newton, 12 spectra measured with the Low Energy Transmission Grating (LETGS) on board Chandra, and 10 spectra from the High Energy Transmission Grating (HETGS) on board Chandra (each split into two spectra, from the Medium Energy Grating (MEG), with a larger aperture, and the High Energy Grating (HEG), with higher spectral resolution). Some stars have been measured by two or three instruments, allowing us to compare calibrations and/or to detect variability of opacity effects.
We will discuss possible trends as well as agreement and disagreement of measured line ratios with theoretical predictions. The major question we address is: Are resonant scattering effects dependent on the degree of activity, or are they negligible?

Reduction and analysis
======================

Reduction of the raw data
-------------------------

For a most comprehensive analysis we studied line ratios relevant for detecting opacity effects with different instruments. From the RGS Guaranteed Time program on board XMM-Newton, 22 spectra from stars in all stages of coronal activity are available. The reduction procedure for these data is identical for all spectra, using SAS version 5.2. Five stars in our sample (RS CVn systems) have been described by [@aud03], and a detailed description of the reduction is given there. For some stars (47 Cas, AU Mic, $\kappa$ Cet, and YZ CMi) we tested the effect of larger extraction regions comprising 95% of the source photons (instead of 90%), but found no significant improvement. Three observations (AB Dor, Capella, and EQ Peg) were carried out before the chip failure on the RGS1, so that the range between 10.5 and 13.8Å is available also with the RGS1 for these stars. The analysis of Ne[ix]{} is still not possible with the RGS1 for these stars, because of bad pixels on the chip where the photons from the 13.7Å line (the Ne[ix]{} forbidden line) are extracted. Line counts are measured with the CORA program (Sect. \[lfxlues\]), and the ASCII files required for CORA were produced with XSPEC from the FITS files returned by the SAS software. From the response matrices, effective areas were calculated and stored in ASCII files, which are used as look-up tables for converting measured line counts into line fluxes.\ Most of the LETGS data included have been introduced by [@ness_10], and for details on the data reduction we refer to that paper (effective areas from Deron Pease, Aug. 2002).
We also analyze HETGS spectra of all cool stars available to us and use the pre-processed pha files from the Chandra archive. In Table \[sample\] we list specifications for 44 observations of 26 stars, with exposure times and X-ray luminosities obtained from the different instruments. To calculate X-ray luminosities, we summed all first-order photons, converted to energy fluxes using the effective areas, exposure times, and distances. Differences in X-ray luminosities of no more than a factor of two occur even though the spectra are extracted in the same wavelength intervals (except for MEG and HEG, which are extracted over their complete wavelength ranges), because photons falling in the chip gaps of RGS1 and RGS2 are missing and higher-order photons in the LETGS are not corrected for.\

Measurement of line fluxes {#lfxlues}
--------------------------

Line counts are measured with a modified version of the CORA program by [@newi]. Due to small count numbers, all spectra in our sample require Poisson statistics to be applied. Since the conventional background subtraction ruins the Poisson statistics, we construct a model spectrum consisting of the sum of three components. The line spectrum is modeled with analytical line profile functions representing instrumental point spread functions (Lorentzian for the RGS spectra, Gaussian for the MEG and HEG spectra, and a "$\beta$ model" for the LETGS spectra, which is a Lorentzian with an exponent $\beta=2.5$). The background is split into two components, the instrumental background (extracted from regions on the detectors adjacent to the dispersion directions) and a source continuum (modeled as a constant number of counts per bin over the wavelength region under individual consideration). The sum of these three components is compared to the non-subtracted spectrum in order to calculate likelihood values to be minimized.
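A comparison of a model spectrum to raw, non-subtracted counts is typically done with the Poisson fit statistic (the C-statistic); a minimal sketch is given below. The exact likelihood used by CORA is described in the cited reference, so this is the standard form, not necessarily CORA's implementation.

```python
import math

def cash_statistic(observed_counts, model_counts):
    """C-statistic: twice the negative Poisson log-likelihood, up to a
    model-independent constant.

    `observed_counts` are the raw counts per bin; `model_counts` is the
    per-bin sum of line profiles, instrumental background, and source
    continuum. Minimizing this over the line parameters replaces a
    chi-square fit, which is invalid at low counts. All model values
    must be positive.
    """
    return sum(2.0 * (m - n * math.log(m))
               for n, m in zip(observed_counts, model_counts))
```

For a single bin, $m - n\ln m$ is minimized at $m = n$, so the statistic rewards models that match the observed counts bin by bin without any background subtraction.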
The fit is restricted to the line parameters, i.e., position, line width, and line counts, while the two background components must be given a priori (cf. Sect. \[bg\]). Therefore, the errors (1$\sigma$ errors) given for the line counts represent only statistical errors (including correlated errors from possible line blends); systematic uncertainties from the placement of a continuum value are not included.

Placement of the continuum {#bg}
--------------------------

The accuracy of the iterated line model clearly depends on the choice of the two background components. The instrumental background poses no problem, because it can be measured from adjacent regions on the detector plates. However, the determination of reliable source continua (comprising the true continuum and the pseudo-continuum of unresolved weak lines) is much more difficult. We consider the source continuum to be constant over small wavelength regions (a small range including the emission lines to be measured) and assign a single source background parameter $sbg$ in units of counts/Å to represent this flat source continuum. In the CORA program such a value for the source continuum can be specified directly by hand, or the median value of all bins in the wavelength region covering the lines under consideration can be selected, which is only valid as long as less than 50% of the bins belong to emission lines. All bins containing count numbers higher than 3$\sigma$ above this median value ($\sigma=\sqrt{\rm median}$) are regarded as obviously belonging to emission lines and are excluded from calculating the final $sbg$ median value.\ The specific challenge posed by RGS spectra is that the line wings are broad and overlap. The inclusion of correlated statistical errors is thus very important, but the determination of an adequate value for the source background is more difficult.
The median value will systematically overestimate the source continuum, because more bins belong to emission lines than represent the source continuum, and line counts will then be underestimated. For the purpose of this paper, the Fe[xvii]{} lines around 15Å are measured, and this wavelength region contains many nearby lines, such that only small regions representing the continuum are available for the median calculation.\ We therefore modified the program to calculate a value for the source background by refining the median calculation. The 3$\sigma$ criterion is already an attempt to remove some bins that belong to emission lines in order to increase the percentage of remaining bins belonging to the background. For our purpose we modify this criterion in two ways. First, the removal of bins with high count numbers and the calculation of a new median value are repeated iteratively until no more bins contain more counts than 3$\sigma$ above the respective median values. Secondly, a new parameter $n_\sigma$ is introduced. In this way, median values are iteratively calculated after removal of all bins with count values higher than $n_\sigma\times \sigma$ above the median, i.e., ${\rm median}_{\rm new}={\rm median}({\rm bins} < {\rm median}_{\rm last} + n_\sigma\times \sqrt{{\rm median}_{\rm last}})$. Small values of $n_\sigma$ will remove high-count bins more aggressively, resulting in lower source background values. Usage of this parameter represents a parameterized choice of source continuum values by eye. In Fig. \[bg15\] we show the 15Å region of $\lambda$ And with attempts to obtain a most realistic source background value using the new parameter. It can be seen that this wavelength region is full of emission lines and that significantly more than 50% of all bins belong to emission lines rather than to the background emission. By gradually reducing $n_\sigma$, the median background can be reduced significantly, and with $n_\sigma=1$ a most suitable background is found.
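The iterative median refinement can be sketched as follows. The threshold is read as ${\rm median} + n_\sigma\sqrt{\rm median}$ (a Poisson $\sigma$), consistent with the 3$\sigma$ criterion described earlier; the function name is an illustration, not CORA's.

```python
import statistics

def source_continuum(bin_counts, n_sigma):
    """Iteratively clipped median of the counts per bin.

    Bins with counts above median + n_sigma * sqrt(median) are dropped
    and the median recomputed, until no bin exceeds the threshold.
    Smaller n_sigma clips more aggressively and yields a lower continuum
    value, as described in the text.
    """
    remaining = list(bin_counts)
    while True:
        med = statistics.median(remaining)
        threshold = med + n_sigma * med ** 0.5
        kept = [c for c in remaining if c <= threshold]
        if len(kept) == len(remaining):
            return med
        remaining = kept
```

Since the threshold is never below the current median, at least half of the bins survive each pass, so the iteration always terminates.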
The resulting count numbers for the 15.03Å line range from 388.8 to 476.7 counts. This demonstrates that systematic errors of order 25% must be added to the given statistical errors. In the following we use $n_\sigma=1$ for all spectra when fitting the 15Å lines and $n_\sigma=1.5$ for the 16.78Å line. The neon and oxygen lines are all measured with the old method. Measured line counts -------------------- Since bad pixels in the RGS1 corrupt the measurement of the 15.27Å line, we analyze only the RGS2 data and the LETGS, MEG, and HEG data for the iron measurements. The He-like lines were measured with RGS1, LETGS, and MEG (oxygen) and with RGS2, LETGS, MEG, and HEG (neon). The fit results for the three Fe[xvii]{} lines at 15.03Å, 15.27Å, and 16.78Å are listed in Table \[fe\_cts\]. These counts are converted to energy fluxes, using effective areas obtained from the response matrices, in order to derive line flux ratios for comparison with line flux ratios from the databases MEKAL, Chianti, and APEC, which all list optically thin emissivities for given temperature grids. The results for the measured ratios are also listed in Table \[fe\_cts\]. The line counts measured for the He-like f and r lines of O[vii]{} and Ne[ix]{} are listed in Tables \[fr\_ox\] and \[fr\_ne\], respectively. We also measure the O[viii]{} Ly$_{\alpha}$ line, and from the O[viii]{} Ly$_\alpha$/O[vii]{}r line ratios we assign a characteristic coronal temperature to each star (using the APEC line database). Further, we calculate X-ray luminosities emitted in all three He-like lines (r, i, and f summed) as activity indicators (cf. Fig. \[lxox\]). In addition to the line counts for oxygen we list the derived temperatures and O[vii]{} and Ne[ix]{} luminosities in Tables \[fr\_ox\] and \[fr\_ne\]. The plasma temperature is also a good activity indicator [@gdl97].
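As an aside on the counts-to-flux conversion: for two lines measured in the same exposure, the exposure time cancels in the ratio, so only the line energies and effective areas enter. A hedged sketch (the helper name and all numbers below are placeholders, not the actual response-matrix values):

```python
# Hypothetical helper: converts measured line counts to an energy-flux
# ratio.  flux ∝ counts × E_line / A_eff(E_line); the exposure time
# cancels in the ratio of two lines from the same observation.
def line_flux_ratio(counts_a, counts_b, aeff_a, aeff_b, e_a, e_b):
    return (counts_a * e_a / aeff_a) / (counts_b * e_b / aeff_b)
```

For nearby lines such as 15.27Å and 15.03Å, the effective areas (and photon energies) are almost equal, so the flux ratio is close to the raw count ratio – which is why only the relative calibration matters for such ratios.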
Results ======= The measured line fluxes are used to plot the line ratios sensitive to resonant scattering versus activity indicators, i.e., temperatures for the Fe[xvii]{} ratios and X-ray luminosities contained in the He-like lines for the f/r ratios of O[vii]{} and Ne[ix]{}.\ Fe[xvii]{} line ratios ---------------------- In Fig. \[feratios\] we plot the Fe[xvii]{} line ratios of the 15.27/15.03Å lines and of the 16.78/15.03Å lines from Table \[fe\_cts\] versus the O[viii]{}/O[vii]{} characteristic temperatures used as activity indicators (listed in Table \[fr\_ox\]). The horizontal lines represent theoretical low-optical-depth ratios as a function of temperature, predicted by interpolation from MEKAL, APEC, and Chianti, respectively. For the 16.78/15.03Å line ratios we also included new theoretical predictions by [@doron], who account for dielectronic and radiative recombination from Fe[xviii]{}, inner-shell ionization from Fe[xvi]{}, and resonant excitation through doubly excited levels of Fe[xvi]{} (3-ion model) in their calculations. Their 3-ion model predictions lie significantly higher than the predictions from the other databases, but their 1-ion model and their predictions for the 15.27/15.03Å line ratio are consistent with the other predictions.\ Comparing our measurements of the 15.27/15.03Å line ratios with the theoretical predictions, we find the measured ratios to be systematically higher than predicted, with no apparent correlation with temperature except possibly for the coolest coronae in our sample, where the ratios are highest. For the 16.78/15.03Å ratios we find no correlation with temperature at all, but a larger scatter with systematic deviations from the databases, although good agreement with the predictions by [@doron] is seen.\ In Fig. \[fe\_meg\] we plot only the LETGS and MEG measurements, where the scatter due to systematic and statistical uncertainties is much smaller.
The reason is that the RGS ratios suffer from systematic uncertainties in the placement of the source continuum (due to broad line wings; cf. Sect. \[bg\]), while the HEG measurements have low signal to noise and thus large statistical uncertainties. In the left panel of Fig. \[fe\_meg\] it can be seen that the 15.27/15.03Å ratio is remarkably constant for all sources except for the MEG measurement of EV Lac, which deviates considerably from all the other MEG and LETGS measurements; this data point is marked by a filled triangle. Both Fe[xvii]{} ratios thus suggest that resonant scattering plays a significant role for EV Lac; however, this high ratio is confirmed neither by the simultaneous HEG measurement nor by the RGS2 data. We show the two spectra obtained with MEG and RGS2 for EV Lac in Fig. \[evlac\], where the different ratios can be recognized. In order to compare our measurements with solar measurements we include the solar measurements from [@saba99] in the form of shaded areas in Figs. \[feratios\] and \[fe\_meg\]. They deduced significant optical depths for the 15.03Å resonance line by comparison with databases available at that time. From the left panel of Fig. \[feratios\] it can be seen that most of our measured ratios are located in the bottom part of the shaded area, but only measurements for the cooler coronae are really consistent with the solar measurements. For the 16.78/15.03Å ratio we find the solar measurements significantly higher than all our results.\ The calibration used for obtaining the solar line ratios cannot be reconstructed, such that systematic uncertainties cannot be excluded as the reason for the discrepancies. However, since ratios of very nearby lines are calculated, only the relative calibration matters, which is always more accurate than the absolute calibration for such nearby lines.
We point out that measurements for the Sun can also lead to different results when specific regions in the solar corona are selected, while for the stars in our sample only overall line fluxes can be obtained. Contamination is always an issue that needs to be checked. We therefore inspected the line flux ratios of the two low-$f$ lines at 15.27Å and at 16.78Å in Fig. \[fecheck\], which should be independent of resonant scattering effects. This ratio is consistent with both the solar measurements and the databases. Possible blending of the 15.27Å line can explain the enhanced ratios measured for the cooler coronae in the left panel of Fig. \[feratios\]. Such enhancements can also be identified in Fig. \[fecheck\], but not in the 16.78/15.03Å ratios. He-like line ratio f/r ---------------------- Another line ratio sensitive to resonance line scattering is the ratio f/r for He-like ions, where f is the forbidden line $^3$S$_1\rightarrow^1$S$_0$ and r is the resonance line $^1$P$_1\rightarrow^1$S$_0$. This ratio is also sensitive to density and temperature. Interference with density effects is not severe, as [@ness_10] found low-density limits for almost all stellar coronae. In Fig. \[Gratios\] we plot the measured f/r ratios for O[vii]{} and Ne[ix]{} versus the luminosity contained in all three He-like lines of the respective ions, thus restricting the analysis to only the plasma regions actually emitting in the respective lines. We over-plot expected f/r ratios calculated from APEC for three temperatures log(T/K)=6.0, 6.3, and 6.6 assuming low densities. Good agreement between measurements and predictions can be seen. The oxygen ratios seem to generally follow the temperature trend suggested by the three theoretically predicted ratios, decreasing with increasing degree of activity.
For the Ne[ix]{} f/r ratios the scatter becomes larger for the more active stars, which must be attributed to more severe blending by hotter Fe[xix]{} lines; the blending of Ne[ix]{} by Fe[xix]{} has been discussed by [@neix]. However, for many stars the Fe lines blending the resonance and forbidden lines are relatively weak due to high Ne/Fe abundance ratios. Discussion {#disc} ========== One of the major aims pursued by the analysis of coronal spectra is to understand the geometrical configuration of the coronal plasma. Opacity effects would make the interpretation tremendously more complicated, because assumptions about the geometrical configuration, which we want to study in the first place, would have to be made in order to account for these effects. If resonant scattering played an important role, one would naively expect that with an increasing amount of plasma these effects would become more and more visible; thus the more active stars should exhibit stronger effects on the line ratios sensitive to resonant scattering. Our analysis therefore focuses on searching for correlations between the degree of activity and ratios of possibly optically thick resonance lines to optically thinner lines.\ When drawing conclusions from the measured ratios one has to keep in mind that these ratios are not always determined by resonant scattering effects alone, but might be obstructed by other effects such as line blending or density effects.
Possible temperature effects are considered quantitatively by use of characteristic coronal temperatures derived from ratios of O[viii]{} and O[vii]{} resonance lines.\ Stellar data ------------ Our measured ratios of Fe[xvii]{} 15.27/15.03Å show a temperature trend indicating that enhanced line ratios are found particularly for the coolest coronae in our sample, but for the more active stars all measured line ratios scatter around a constant value of about 0.38$\pm$0.07 with a slight, but insignificant, increasing trend towards higher temperatures. All measured ratios are higher than predicted by the three databases MEKAL, APEC, and Chianti, but are consistent with laboratory measurements for low optical depths obtained with EBIT. Many of the 16.78/15.03Å line ratios are discrepant with theoretical predictions, and as a sample the ratios seem generally discrepant with theory, but not with the recent calculations by [@doron] that include additional processes beyond pure collisional excitation. No temperature trend can be seen in these data, and the scatter is much larger than for the other ratio. This scatter cannot be attributed to statistical uncertainties, because the 16.78Å line is much stronger than the 15.27Å line. In Fig. \[fe\_meg\] only the ratios with the highest precision are plotted, and still the 16.78Å line seems more problematic than the ratios with the 15.27Å line. The 16.78/15.03Å ratio is significantly more sensitive to resonant scattering than the 15.27/15.03Å line ratio. The larger scatter could thus represent a variety of resonant scattering processes. The interpretation of opacity effects affecting only the 15.03Å line (but not the 15.27Å and 16.78Å lines) is certainly a possible explanation for these deviations. From the 15.27/15.03Å ratios this would mean that opacity effects play a larger role for the inactive stars and the Sun.
However, such a temperature trend cannot be identified in the 16.78/15.03Å line ratios, lending support to the suspicion by [@brown01] of blending of the Fe[xvii]{} 15.27Å line by an Fe[xvi]{} satellite line. However, the cited databases do not give clear indications about the nature of such a blending line, so that no clear identification can be given here. An Fe[xvi]{} satellite would disappear in the hotter coronae and would leave un-blended 15.27/15.03Å line ratios for these coronae. Such a trend should also be visible in Fig. \[fecheck\], but can only be recognized when concentrating on the LETGS and MEG measurements. From Fig. \[fecheck\] we must conclude that the blending scenario cannot explain all discrepancies, unless a similar blending also applies to the 16.78Å line.\ Inspection of Fig. \[fe\_meg\] (left panel) clearly shows that all 15.27/15.03Å ratios measured with high confidence are systematically enhanced above the predicted ratios, but no temperature trend can be seen, either in the data or in the predictions. If taken at face value, these deviations suggest that the opacities are significantly non-zero for all stars, but also that the optical depths are practically identical for all stars, despite our heterogeneous sample. Alternatively, if none of the investigated stellar coronae is optically thick, which would be quite surprising, the deviations from the databases would have to be explained by uncertainties in the databases. For the 16.78/15.03Å line ratio the databases agree with each other, but better agreement with our measurements is seen when using the more recent calculations by [@doron]. For the 15.27/15.03Å ratio laboratory measurements disagree with the theoretical predictions. This demonstrates that the inclusion of all kinds of side effects can change theoretical predictions significantly. The ratios of the two low-$f$ Fe[xvii]{} lines at 15.27Å and 16.78Å are plotted in Fig.
\[fecheck\] and agreement with theoretical predictions can be seen. This suggests that problems in the databases might rather lie in the determination of line fluxes for the 15.03Å resonance line.\ We marked the Fe[xvii]{} line ratios measured with MEG for the flare star EV Lac by filling the symbol assigned to the MEG (triangle), because in Fig. \[fe\_meg\] this measurement is the only ratio significantly above the otherwise flat trend for the 15.27/15.03Å ratio. In Fig. \[evlac\] we show the spectra of EV Lac obtained with MEG and RGS2 in order to demonstrate that a significant difference can already be recognized by inspecting the spectra. However, no event such as, e.g., marked flare activity during one of the observations can be associated with this difference. In addition, the HEG spectrum shown in the bottom panel of Fig. \[evlac\] is rather consistent with the RGS2 measurement, although observed simultaneously with the MEG.\ Our second attempt to test for opacity effects and to probe possible other emitting regions is the ratio f/r of the He-like ions O[vii]{} and Ne[ix]{}. Although this ratio is also sensitive to density and temperature, we find roughly the same f/r ratios for all stars in our sample, definitely in agreement with plausible temperatures. No indication of opacity effects can be seen from these ratios. The dependence on temperature might, however, be stronger than the sensitivity to resonant scattering effects. The f/r ratios are therefore only useful in cases where resonant scattering effects outweigh the temperature sensitivity. For our measured ratios this means that we can exclude strong resonant scattering effects, but weaker effects could be hidden in the temperature trend.\ Comparison with solar measurements ---------------------------------- As described in Sect. \[intro\], the discussion about opacity effects in the solar corona has been quite controversial.
Since with the Sun only one star is investigated, our sample of 26 stellar coronae gives more insight into trends and systematic effects. We focus on the Fe[xvii]{} line ratios, where solar measurements yielded tempting evidence that the 15.03Å line was significantly damped due to resonant scattering [e.g., @schmelz97; @saba99]. Discrepancies between theoretical predictions and laboratory measurements made it difficult to identify the measured ratios as pure resonant scattering effects. In addition, the MEKAL database has been upgraded since then and more refined databases have become available.\ We use the most recent databases and find that the discrepancies between theoretical predictions and the ratios for the Sun are still present. From the left panel of Fig. \[feratios\] it can be seen that the 15.27/15.03Å ratios measured for the Sun are consistent with the coolest coronae in our sample but not with the hotter coronae, and not with any of the more recent databases. A blending scenario for the 15.27Å line by Fe[xvi]{} could explain the discrepancy, and it would be consistent with the temperature trend found in our sample, but it cannot be confirmed from the measured 15.27/16.78Å line ratios. With the solar data alone a blending scenario could not be identified, because the temperatures encountered for the hotter coronae are never reached in the solar corona, and a blending Fe[xvi]{} satellite line would thus never disappear. In the right panel of Fig. \[feratios\] the solar measurements for the 16.78/15.03Å line ratio are all systematically higher than all the ratios from our sample as well as the predictions from the three databases MEKAL, Chianti, and APEC. The predicted ratios from [@doron] are consistent below log(T)$\sim$ 6.5, but most solar measurements are well above these predictions as well.
Note that the discrepancies between the solar ratios and our measurements are greater for the 16.78/15.03Å ratio, which is significantly more sensitive to resonant scattering effects than the 15.27/15.03Å ratio. Systematic uncertainties in the calibration can of course always lead to such deviations, since totally different instruments were used to obtain the solar ratios. The calibration of our instruments seems sufficiently accurate to produce consistent results. If calibration errors can be excluded, a physical interpretation must be found to explain why the Sun should be the only star where opacity effects in the corona play a role.\ Summary and Conclusions {#concl} ======================= In the solar context, opacity effects in X-ray lines have been discussed controversially. In practice the strongest lines are used for the analysis rather than weaker lines, but these lines are often resonance lines and are the first candidates for opacity effects. We test for effects of resonant scattering by measuring ratios of such resonance lines to forbidden lines with significantly lower probabilities for such effects, sampling many different coronae.\ The Fe[xvii]{} 15.27/15.03Å and 16.78/15.03Å line ratios we measure are systematically higher than theoretical predictions, but across all kinds of different coronae these deviations are strikingly similar. For the coolest coronae in our sample we measure systematically higher 15.27/15.03Å ratios, consistent with solar measurements of the same ratio. This trend suggests blending of the 15.27Å line. The trend can also be seen in the 15.27/16.78Å ratios, but only when ignoring the RGS2 measurements. In our large sample the 15.27/15.03Å measurement for EV Lac deviates from the general trend, but an exceptional case cannot be claimed because the simultaneous HEG observation is not consistent with the MEG ratio. We interpret this measurement as a statistical outlier.
For the He-like f/r ratios of oxygen and neon, all ratios can be explained by reasonable temperatures, such that strong resonant scattering effects are ruled out. It cannot be excluded that weak resonant scattering effects are hidden in the large range of ratios allowed for a reasonable temperature range.\ Obviously the behavior with respect to resonant scattering is very similar for all stars in our sample. Formally one could derive optical depths $\tau$ from the measured deviations from the databases; however, similar but non-zero optical depths for stars at all levels of stellar activity are unlikely. We therefore conclude that opacity effects should be considered weak and undetected, and that uncertainties in the databases are a more plausible explanation for the discrepancies. Since the 15.27/16.78Å ratios are well consistent with the predictions from the databases, we conclude that the uncertainties in the databases must lie in the 15.03Å line.\ The large discrepancies of our measurements with the solar ratios are somewhat puzzling. We doubt that opacity effects play a role only for the Sun. Also, the statistical argument applied to the EV Lac observation cannot be invoked here, since many observations of the solar corona exist. The high 15.27/15.03Å ratios could be explained by blending of the 15.27Å line, because the solar coronal plasma is in the right temperature range. Possibly geometric effects play a role: observations of isolated emitting regions on the solar surface may miss resonant photons that are scattered out of the line of sight into unobserved regions, whereas global measurements of stars collect all photons emitted towards the observer.\ The methods we chose to investigate resonant scattering effects in coronal plasmas are commonly accepted to probe efficiently for these effects.
With the amount of data gathered with the new X-ray telescopes, we are convinced that we operate on a sufficiently representative basis to decide on the optical thickness of coronal plasmas in general. As to the question in our title, we find deviations between measurements and theoretical predictions that would allow the conclusion of measurable resonant scattering effects; however, we are not convinced that this conclusion is the final answer. We did find systematic deviations of line ratios from optically thin theoretical predictions, but we also found that theoretical predictions can suffer from considerable uncertainties, particularly when it comes to accounting for certain side effects. We also found striking similarities between the ratios measured for all kinds of different coronae. Given the complicated geometrical configurations expected for coronal plasma, optical depths are unlikely to be so similar for inactive, intermediately active, and the most active coronae. The amount of emitting plasma, differing by up to four orders of magnitude in X-ray luminosity, raises the expectation that optical depths should be much larger for active stars than for inactive stars. The only scenario that we find plausible against the background of such similarities is that resonant scattering effects are, in all cases alike, not detectable.\ The detection of resonant scattering for the Sun seems to be a different story. We attribute these differences to resonant scattering effects that possibly always take place: for the Sun these effects are better detectable when focusing on selected regions, while for stellar observations these effects are balanced out by observing globally.
It must be pointed out that no measurement for the Sun has been reported describing any kind of "negative resonant scattering" that could balance out hypothetical global observations of the Sun.\ What does it mean when we conclude that resonant scattering takes place without being detectable for stellar coronae? In practice, the analysis of stellar coronal emission always refers to globally averaged statements, not only with respect to resonant scattering. When analysing resonance lines, the non-detectability of existing resonant scattering effects means that statements derived from, e.g., ratios of resonance lines on a global basis are still valid. It has to be kept in mind that no statements can be made about individual emitting regions, but only average conclusions about all kinds of different emitting regions can be drawn, and on this level a balance of resonant scattering effects is equivalent to negligible resonant scattering effects. This work is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA).\ J.-U.N. acknowledges support from DLR under 50OR0105. MA and MG acknowledge support from the Swiss National Science Foundation (fellowship 81EZ-67388 and grant 2000-058827). The SRON National Institute for Space Research is supported financially by NWO. We thank our referee Dr. F.P. Keenan.
Acton, L.W. 1978, ApJ, 255, 1069
Acton, L.W. & Catura, R.C. 1976, Phil. Trans. Roy. Soc. London, 281, 383
Arnaud, M. & Rothenflug, R. 1985, A&AS, 60, 425
Audard, M., Güdel, M., Sres, A., et al. 2003, A&A, 398, 1137
Bhatia, A.K. & Saba, J.L. 2001, ApJ, 563, 434
Brown, G.V., Beiersdorfer, P., et al. 1998, ApJ, 502, 1015
Brown, G.V., Beiersdorfer, P., et al. 2001, ApJ, 557, L75
Dere, K.P., Landi, E., Young, P.R., & Del Zanna, G. 2001, ApJS, 134, 331 (Chianti)
Doron, R. & Behar, E. 2002, ApJ, 574, 518
Gabriel, A.H. & Jordan, C.
1969, MNRAS, 145, 241
Güdel, M., Guinan, E.F., & Skinner, S.L. 1997, ApJ, 483, 947
Laming, J.M., Kink, I., et al. 2000, ApJ, 545, L161
Mariska, J.T. 1992, in "The Solar Transition Region", Cambr. Astroph. Ser., 22, Cambridge Univ. Press
Mewe, R., Kaastra, J.S., & Liedahl, D.A. 1995, Legacy, 6, 16 (MEKAL)
Mewe, R., Raassen, A.J.J., Drake, J.J., et al. 2001, A&A, 368, 888
Ness, J.-U., Mewe, R., Schmitt, J.H.M.M., et al. 2001, A&A, 367, 282
Ness, J.-U. & Wichmann, R. 2002, Astron. Nachr., 323, 129 (CORA program)
Ness, J.-U., Schmitt, J.H.M.M., Burwitz, V., et al. 2002a, A&A, 394, 911
Ness, J.-U., Schmitt, J.H.M.M., Burwitz, V., et al. 2002b, A&A, 387, 1032
Ness, J.-U., Brickhouse, N.S., Drake, J.J., & Huenemoerder, D.P. 2003, submitted to ApJ
Phillips, K.J.H., Greer, C.J., Bhatia, A.K., et al. 1997, A&A, 324, 381
Phillips, K.J.H., Mathioudakis, M., Huenemoerder, D.P., et al. 2001, MNRAS, 325, 1500
Saba, J.L.R., Schmelz, J.T., Bhatia, A.K., & Strong, K.T. 1999, ApJ, 510, 1064
Schmelz, J.T., Saba, J.L.R., Chauvin, J.C., & Strong, K.T. 1997, ApJ, 477, 509
Schmitt, J.H.M.M., Drake, J.J., & Stern, R.A. 1996, A&A, 465, L51
Schrijver, C.J., van den Oord, G.H.J., & Mewe, R. 1994, A&A, 289, L23
Smith, R.K., Brickhouse, N.S., Liedahl, D.A., & Raymond, J.C. 2001, ApJ, 556, L91
Strong, K.T. 1978, Ph.D. thesis, Univ. London
Young, P.R., Del Zanna, G., Landi, E., et al. 2003, ApJS, 144, 135
[^1]: Improved version; available at http://www.sron.nl/\ divisions/hea/spex/version1.10/line/line\_new.ps.gz
[^2]: Version 1.2; available at http://cxc.harvard.edu/atomdb
--- abstract: 'Recently, much effort has been devoted to obtaining the next-to-leading order and Landau-Pomeranchuk-Migdal corrections to the thermal dilepton emission rate in perturbative QCD. Here we apply these results to the plasma created in heavy ion collisions and examine whether these corrections improve the comparison between theoretical calculations and experimental results for the invariant mass dependence of the dilepton emission rate. In particular, we simulate the quark-gluon plasma produced at RHIC and LHC using a 2+1-dimensional viscous hydro model. We compare our results to the STAR experiment and comment on the need for a non-perturbative determination of the dilepton rate at low invariant mass.' author: - Chiara Gastaldi --- **The contribution of NLO and LPM corrections to thermal dilepton emission\ in heavy ion collisions**\ Yannis Burnier, Chiara Gastaldi\ Introduction ============ The theoretical study of the quark-gluon plasma (QGP) [@QGP1; @QGP] is supported by experiments based on ultra-relativistic heavy ion collisions (HIC): gold ions are used at RHIC (Relativistic Heavy Ion Collider) at Brookhaven National Laboratory and lead ions are used at the LHC (Large Hadron Collider) at CERN. Photons and dilepton pairs are excellent probes of the QGP [@W]: since they interact only electromagnetically, they have a small cross section with the strongly interacting matter inside the QGP. They therefore leave the QGP and reach the detector, carrying information from deep inside the plasma [@phenix; @star]. Moreover, we prefer dileptons to photons for two reasons: photons must be extracted from a larger background of decays [@ghiglieri], while leptons have a non-zero invariant mass $M$, which helps in disentangling the various sources [@sources]. Still, the dilepton background is also not small: dileptons are produced in every phase of the HIC and in several types of processes [@sources].
Here we are interested in thermal dileptons [@sources] produced by the partonic interactions during the hydrodynamical expansion; these dileptons can tell us about the QGP properties. Thermal dileptons are produced mainly in quark-antiquark annihilation and Compton scattering, and their contribution to the spectrum is important in the intermediate invariant mass range, $M\in [0.2,2.5]$ GeV. The first type of background we encounter consists of hard processes at early times: jet-dilepton conversion from the initial hard scattering processes and from photoproduction processes. These contribute to the spectrum in the high mass range ($M>3$ GeV). Secondly, particle decays imprint broad peaks in the spectrum; for instance, the contribution from the decay of open charm, $c\bar{c}\rightarrow e^+ e^- X$, is also very important in the intermediate mass range [@Vujanovic:2012nq; @Vujanovic:2013jpa]. In the low mass range, $0.6<M<1.1$ GeV, the decays of vector mesons, i.e. $\rho$, $\omega$, $\phi$, give a sizeable contribution to the invariant mass spectrum [@star; @Vujanovic:2012nq; @Vujanovic:2013jpa]. Finally, below $M<0.2$ GeV, pion decays from the hadronic phase dominate. A big effort has been made to study the thermal dilepton production from the QGP in perturbative QCD: in particular, references [@W; @strick] provide a complete discussion of the leading order (LO) with inclusion of anisotropic corrections, and [@ghiglieri; @laine] supply the passage from LO to NLO and Landau-Pomeranchuk-Migdal corrections. In this work we investigate whether higher order corrections in perturbation theory can improve the comparison between theoretical predictions for the thermal dilepton emission and experimental results.
In section \[theo\], we introduce the theoretical background: in particular, how to compute the LO dilepton emission rate per unit of 4-volume and per unit of 4-momentum, the effect of NLO and LPM corrections on it, and how we compute the invariant mass spectrum. In section \[num\], we explain how we describe the hydrodynamical plasma evolution using SONIC and give the details of the numerical computation of the invariant mass spectrum. In section \[res\], we show our results and compare them to experimental results from the STAR experiment at RHIC [@star]. We then perform analogous computations for the LHC and conclude in section \[concl\]. Dileptons in heavy ion collisions {#theo} ================================= We recall here the perturbative QCD calculations of the dilepton rate and explain how to use them in the geometry of heavy ion collisions. We use natural units unless stated otherwise, and the metric signature is $(+,-,-,-)$. In perturbation theory, two distinct expansions are made: one in the electromagnetic coupling, where the LO is sufficient, and a second one in the strong coupling. In this work we discuss the validity of this second expansion in the case of the plasma created in heavy ion collisions. Leading order dilepton production rate {#prod_rate} -------------------------------------- The relation between the dilepton emission rate and the thermal expectation value of the electromagnetic current correlation function $$\label{current} W^{\mu\nu}=\int d^4x \, e^{-iq\cdot x}\langle J^{\mu}(x)J^{\nu}(0)\rangle,$$ is well described in refs. [@W; @Weldon:1990iw; @book]. Here we briefly summarize the main results for the specific case of a $q \bar{q} \rightarrow e^+e^- $ process, shown in figure \[feyLO\].
![LO Feynman diagram for the thermal dilepton production from the QGP.[]{data-label="feyLO"}](photon_LO) The number of dileptons produced per unit volume and emitted at a given total momentum $P=(p^0,p^i)$ can be expressed through the dilepton rate $R$: $$\frac{d N^{\ell \bar \ell}(x,P)}{d^4x d^4P}=\frac{dR(x,P)}{d^4 P},$$ which in turn can be calculated from the quark current correlator $W^{\mu\nu}(P)$ as: $$\begin{aligned} \frac{dR^{\ell \bar \ell}}{d^{4}P}&=&\sum_{i=1}^{n_f} Q_i^2\frac{\alpha_e^2}{24\pi^3P^2}\left(1+\frac{2m^2}{P^2}\right)\left(1-\frac{4m^2}{P^2}\right)^{\frac{1}{2}}\\&&\quad\quad\times\, \theta \left(P^2-4m^2 \right)W_{\mu}^{\mu}(P).\notag\end{aligned}$$ where $m$ is the mass of the emitted leptons and $Q_i,\, i=1,...,n_f$ the charges of the $n_f$ massless quarks present in the plasma. The strong coupling only enters in the quark current correlator $W^{\mu\nu}(P)$, which is calculated below at leading order but which receives large higher order corrections. If we restrict to leading order and to the case where the lepton mass is negligible compared to the invariant mass $M=\sqrt{P^2}\gg m$, which is a good approximation for electrons, we get [@laine]: $$\label{laineanalitic} \frac{dR^{\ell \bar \ell}_{LO}}{d^{4}P}=\sum_{i=1}^{n_f} Q_i^2 \frac{\alpha_e^{2}}{2\pi^{4}} \frac{T}{p} \frac{1}{e^{E/T}-1} \log{\frac{\cosh{\frac{E+p}{4T}}}{\cosh{\frac{E-p}{4T}}}},$$ where $E=p_0$ and $p=|\mathbf{p}|$ is the modulus of the 3-momentum. NLO corrections to the spectra {#NLO} ------------------------------ As emphasised before, large corrections to the leading order dilepton rate arise. The next-to-leading order (NLO) is suppressed only by $\alpha_s$, but diverges in the small invariant mass limit $M\to 0$ (some representative diagrams for the NLO are shown in fig. \[feyNLO\]). In this soft limit the correct result can only be obtained by performing the Landau-Pomeranchuk-Migdal (LPM) resummation.
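Eq. (\[laineanalitic\]) is straightforward to evaluate numerically. A minimal sketch in natural units (all quantities in GeV), assuming three massless flavors $u,d,s$ (the function name is ours):

```python
import math

ALPHA_E = 1.0 / 137.036                        # fine-structure constant
SUM_Q2_UDS = (2/3)**2 + (1/3)**2 + (1/3)**2    # sum of squared quark charges

def dR_d4P_LO(E, p, T, sum_q2=SUM_Q2_UDS):
    """LO thermal dilepton rate dR/d^4P for massless leptons.

    E and p are the energy and the modulus of the 3-momentum of the
    lepton pair, T is the temperature; natural units (GeV) throughout.
    """
    prefactor = sum_q2 * ALPHA_E**2 / (2.0 * math.pi**4)
    bose = 1.0 / math.expm1(E / T)             # 1 / (exp(E/T) - 1)
    log_term = math.log(math.cosh((E + p) / (4.0 * T))
                        / math.cosh((E - p) / (4.0 * T)))
    return prefactor * (T / p) * bose * log_term
```

The Bose factor suppresses the rate exponentially at large $E/T$, which is why the thermal contribution is confined to the intermediate invariant mass range.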
The LPM resummation takes into account the destructive interference effect between the promptly emitted photons and is summarised by the Feynman diagram in figure \[feyLPM\]. ![Examples of NLO Feynman diagrams for the thermal dilepton production from the QGP.[]{data-label="feyNLO"}](cut1 "fig:") ![Examples of NLO Feynman diagrams for the thermal dilepton production from the QGP.[]{data-label="feyNLO"}](cut2 "fig:") ![Feynman diagram that summarizes the LPM effect: in the very dense QGP a quark is rescattered many times before it annihilates with its antiparticle.[]{data-label="feyLPM"}](lpm) In this section we investigate how the NLO and LPM corrections contribute to the dilepton spectra. To show the results of NLO and higher order corrections, we used the data provided by Ghisoiu and Laine [@laine] (NLO and LPM corrections to LO) and by Ghiglieri and Moore (LPM at NLO) [@ghiglieri]. They consist of a database for the electron-positron and muon-antimuon emission rates as a function of the invariant mass $M$, the temperature $T$ and the modulus of the 3-momentum $P$, for the NLO and for the LPM corrections (available at [@laineweb]). ![Dielectron emission rate computed at leading order, NLO [@laine], and NLO plus LPM corrections [@ghiglieri] as a function of the dimensionless quantity $M/T$ for the values $P=0.53$ GeV, $T=0.18$ GeV; the quarks $u,d,s$ have been considered. These corrections dominate at small total momentum $P$.[]{data-label="NOvsNLO"}](comparison_right) Figure \[NOvsNLO\] shows a comparison between the dielectron emission rate at LO computed with formula (\[laineanalitic\]) and the same rate with NLO and NLO plus LPM corrections. We can notice that the NLO and LPM contributions to the emission rate are important for small values of the ratio of the invariant mass to the temperature, $M/T$. Even if the LPM corrections suppress the rate at very small $M$, they increase the rate for intermediate values of the invariant mass.
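In practice, rates from such a database are evaluated between tabulated nodes by interpolation (cf. section \[num\]). A minimal bilinear-interpolation sketch follows; the grid and table arguments are hypothetical stand-ins, not the actual database format:

```python
import bisect

def bilinear(x, y, grid_x, grid_y, table):
    """Bilinear interpolation at (x, y): table[i][j] holds the tabulated
    value at (grid_x[i], grid_y[j]), both grids sorted ascending."""
    i = min(max(bisect.bisect_right(grid_x, x) - 1, 0), len(grid_x) - 2)
    j = min(max(bisect.bisect_right(grid_y, y) - 1, 0), len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

The interpolation is exact for any function linear in each variable on a grid cell.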
Geometry of HIC and hydrodynamics of the plasma ----------------------------------------------- Throughout this work, we keep the longitudinal dynamics (along the collision axis) separate from the transverse dynamics, which is described by the (2+1)-dimensional hydrodynamic model of section \[metho\] [@rom1; @rom2]. We use the Bjorken model [@BJ; @plateau] to describe the longitudinal expansion, and thus we assume the existence of a “central plateau” structure in the particle production rate as a function of the space-time rapidity $$\zeta=\frac{1}{2}\ln\frac{t+z}{t-z}.\label{rapidity}$$ The rapidity $\zeta$ and the proper time $ \tau=\sqrt{t^2-z^2}$ are therefore a convenient reparametrization of $z$ and $t$ to describe the longitudinal flow: $$\label{milne} x^{\mu}=(\tau\cosh\zeta,\mathbf{x}_{\perp},\tau\sinh\zeta),$$ so that one can rewrite the space-time differential measure as $d^{4}X=\tau d\tau\, d\zeta\, d^{2}x_{\perp}$. The longitudinal Bjorken flow velocity [@BJ] is defined simply as the distance covered over time, $v_z=z/t$: $$\label{bjvz} u^{\mu}(\tau,\zeta)=\gamma_{B}(1,0,0,v_z)=\gamma_{B}\left(1,0,0,\frac{z}{t}\right),$$ where $ \gamma_{B}=\frac{1}{\sqrt{1-(z/t)^2}}. $ The temperature in the $\zeta=0$ slice, on the other hand, is obtained from a (2+1)-dimensional hydrodynamic simulation (see section \[metho\]). Dilepton spectra in HIC {#spectra} ----------------------- One can rewrite formula (\[laineanalitic\]) using a new parametrisation of the 4-momentum of the virtual photon [@strick], which is better suited to the geometry of HIC: $$\label{param} p^{\mu}=\left(m_{\perp}\cosh y,p_{\perp}\cos\phi_{p},p_{\perp}\sin\phi_{p},m_{\perp}\sinh y\right).$$ In the formula above, $\phi_{p}$ denotes the azimuthal angle, $m_{\perp}\equiv\sqrt{M^{2}+p_{\perp}^{2}}$ is the transverse mass and $y$ is the momentum-space rapidity $$y=\frac12\ln\frac{p^0+p^z}{p^0-p^z}.$$ Using (\[param\]), we write the differential 4-momentum as $d^{4}P=M\, dM\, dy\, p_{\perp}dp_{\perp}d\phi_{p}$.
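The coordinate and momentum parametrizations above can be checked numerically; the following is a minimal sketch (function names are ours):

```python
import math

def milne(t, z):
    """(t, z) -> (tau, zeta): proper time and space-time rapidity."""
    return math.sqrt(t**2 - z**2), 0.5 * math.log((t + z) / (t - z))

def cartesian(tau, zeta):
    """Inverse map: (tau, zeta) -> (t, z)."""
    return tau * math.cosh(zeta), tau * math.sinh(zeta)

def photon_momentum(M, pT, y, phi):
    """4-momentum (p0, px, py, pz) of a virtual photon of invariant mass M,
    transverse momentum pT, momentum-space rapidity y and azimuth phi."""
    mT = math.sqrt(M**2 + pT**2)
    return (mT * math.cosh(y), pT * math.cos(phi),
            pT * math.sin(phi), mT * math.sinh(y))
```

By construction $p_\mu p^\mu = m_\perp^2 - p_\perp^2 = M^2$, independently of $y$ and $\phi_p$.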
We are now ready to compute the invariant mass and rapidity differential spectra: $$\label{15} \frac{dN^{l^{+}l^{-}}}{M\, dM\, dy}=\int_{p_{\perp}^{min}}^{p_{\perp}^{max}}p_{\perp}dp_{\perp}\int_{0}^{2\pi}d\phi_{p}\int d^{4}X \frac{dR^{\ell \bar \ell}}{d^{4}P}\, .$$ As we are interested only in the thermal contribution to the dilepton emission spectrum, the integration over $d^{4}X$ is performed only over the quark-gluon plasma volume, i.e. over the regions with temperature greater than the critical temperature, $T({\bold x}_\perp,\tau)>T_{c}=0.17\, $ GeV. It is important to note that, in formula (\[15\]), the values of $p_{\perp}$ are defined in the LAB reference frame, but the emission rate $dR/d^4P$ (formula (\[laineanalitic\])) has been computed in the local rest frame (LRF) of the plasma [@strick]. While integrating over the plasma volume, we therefore have to boost the observed LAB frame momenta to the LRF of the plasma before plugging them into the dilepton rate formula (\[laineanalitic\]) computed previously. Change of frame {#hystory} --------------- Let us consider the integral over space-time in (\[15\]) and write it more explicitly using (\[milne\]): $\int d^{4}X\,\frac{dR^{l^{+}l^{-}}}{d^{4}P}=\int \tau d\tau d{\bf x}_{\perp} d\zeta\,\frac{dR^{l^{+}l^{-}}}{d^{4}P}$. Notice that the emission rate (\[laineanalitic\]) depends on ${\bf x}_{\perp}$ and $\tau$ only through the temperature $T({\bf x}_{\perp},\tau)$, which is given by the hydrodynamical simulation, while the dependence on $\zeta$ is given by the Bjorken model, as anticipated. At the end of the previous subsection, we noticed that, whenever we fix a LAB frame value of $p_{\mu}$ and a volume element in space in the integral (\[15\]), it is necessary to boost the momentum to the LRF of the volume element whose emission rate we want to compute. In order to do this, we need the boost $\Lambda^{\mu}_{\nu}(u^{\mu})$ that parametrizes the change of coordinates from the LAB frame to the LRF [@strick].
It is defined by the total velocity $u^{\mu}_{tot}(x^{\mu})$ that combines the Bjorken longitudinal velocity with the transverse hydrodynamical expansion. Once we have found $u^{\mu}_{tot}(x^{\mu})$, we are able to compute (\[15\]) considering $\frac{dR^{l^{+}l^{-}}}{d^{4}P}\left(\Lambda^{\mu}_{\nu}(u_{tot})p^{\nu}_{LAB}\right)$. In what follows, we show the derivation of $u^{\mu}_{tot}(x^{\mu})$. From the hydrodynamical simulations we obtain the 4-velocity of the transverse flow (for $\zeta=0$), $$u^{\mu}_{hydro}=\gamma_{hydro} \{ 1, v^{hydro}_x,v^{hydro}_y,0\} ,$$ where $\gamma_{hydro}=1/\sqrt{1-(v^{hydro}_x)^{2}-(v^{hydro}_y)^{2}}$ and $v^{hydro}_x$ and $v^{hydro}_y$ are measured with respect to the collision axis. However, the whole system is moving along the longitudinal axis with the Bjorken velocity $v_z$ of (\[bjvz\]). Thus we need to use the relativistic composition law to find the total velocity with respect to the LAB frame. ![Scheme of the combination of longitudinal and transverse velocity, and the final boost $\Lambda^{\mu}_{\nu}$. []{data-label="lambda"}](boost1) The total velocity of a generic volume element inside the QGP is the relativistic sum of $v_z$ and $\textbf{v}^{hydro}$: $v_x= v_z\oplus v^{hydro}_x $ and $v_y=v_z\oplus v^{hydro}_y$. [^1] Applying the relativistic addition rule for perpendicular velocities, $\textbf{ v}_{tot}=\textbf{v}_1+\sqrt{1-v_1^2}\textbf{v}_2$, one obtains $ u_{tot}(x)=\gamma_{tot}\left(1,\gamma_B^{-1} v_x^{hydro},\gamma_B^{-1} v_y^{hydro}, \frac{z}{t}\right) $, where $ \gamma_{tot}= \gamma_{B}\cdot \gamma_{hydro}$. Numerical simulations {#num} ===================== SONIC {#metho} ----- The collisions between two heavy ions are not well understood at early times, before the system thermalizes. As soon as the QGP is formed (at proper time $\tau_{in} \sim 0.5$ fm), its space-time evolution is described by hydrodynamic models [@artstory].
We simulate the hydrodynamic evolution of the QGP using the software SONIC (Super hybrid mOdel simulatioN for relativistic heavy-Ion Collisions), developed by Romatschke, Luzum and others (the code is available at [@site]) [@rom1; @rom2; @rom3; @rom4; @rom5; @rom6; @rom7; @rom8]. In this section we summarize the model on which SONIC is based. It is a (2+1)-dimensional model that takes into account only the slice at rapidity $\zeta=0$, in which the center of mass lies. It combines the pre-equilibrium flow, modelled as in Ref. [@rom1], with the hydrodynamic phase, and the latter with the final hadronisation (which does not concern this work). SONIC simulates the highly boosted and Lorentz-contracted nuclei starting from their energy density, $T_{tt}=\delta(t+z)T_{A}({\bold x}_\perp)$, where the function $T_{A}$ has the shape [@rom2]: $$\label{overlap} T_{A}=\epsilon_{0}\int_{-\infty}^{\infty}dz\left[1+e^{(\sqrt{{\bold x}_\perp^{2}+z^{2}}-R)/a}\right]^{-1},$$ where $R$ and $a$ are the charge radius and skin depth parameters (their values can be found in table 1 of reference [@rom2]) and $\epsilon_{0}$ is a normalization constant that controls the final charged multiplicity; it is set to reproduce the available experimental data. The pre-equilibrium radial flow velocity is estimated numerically in [@rom1]: $$\label{vel} v^{\perp}_i(\tau,{\bold x}_\perp)=-\frac{\tau}{3.0}\partial_{i}\ln T_{A}^{2}({\bold x}_\perp),$$ where $\tau=\sqrt{t^{2}-z^{2}}$. The initial energy profile is set to be $$\label{energy} \epsilon(\tau,{\bold x}_\perp)=T_{A}^{2}({\bold x}_\perp).$$ SONIC includes a relativistic viscous hydrodynamics solver (here we use version 1.7) that implements the evolution of the system using the energy density from equation (\[energy\]) and the flow profile from equation (\[vel\]).
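Equations (\[overlap\]) and (\[vel\]) lend themselves to elementary quadrature. The sketch below uses illustrative Woods-Saxon parameters for Au (our round numbers, not the values of table 1 in [@rom2]) and sets $\epsilon_0=1$:

```python
import math

R, a = 6.38, 0.535   # illustrative Au charge radius and skin depth (fm)
eps0 = 1.0           # normalization, fixed by the charged multiplicity in the text

def T_A(x, y, zmax=30.0, nz=2000):
    """Overlap function (overlap): longitudinal integral of a Woods-Saxon
    profile, evaluated with a simple midpoint Riemann sum."""
    dz = 2.0 * zmax / nz
    total = 0.0
    for k in range(nz):
        z = -zmax + (k + 0.5) * dz
        r = math.sqrt(x**2 + y**2 + z**2)
        total += dz / (1.0 + math.exp((r - R) / a))
    return eps0 * total

def preflow_velocity(x, y, tau, h=1e-3):
    """Pre-equilibrium flow (vel): v_i = -(tau/3) d_i ln T_A^2, with the
    gradient taken by central finite differences."""
    dx = (math.log(T_A(x + h, y)**2) - math.log(T_A(x - h, y)**2)) / (2 * h)
    dy = (math.log(T_A(x, y + h)**2) - math.log(T_A(x, y - h)**2)) / (2 * h)
    return -tau / 3.0 * dx, -tau / 3.0 * dy
```

By symmetry the pre-flow vanishes at the origin and points radially outward elsewhere, since $T_A$ decreases monotonically with the transverse radius.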
The main parameters that need to be set in the hydrodynamical simulation are: the freeze-out temperature $T_{c}=$0.17 GeV; the initial central temperature $T=$0.37 GeV for RHIC and $T=$0.47 GeV for LHC; the shear viscosity $\eta/s=1/4\pi$. Integration of the dilepton rate -------------------------------- In this subsection, we give the details of the numerical computation of the dilepton spectra (\[15\]). First we must introduce the settings of the SONIC simulations. The starting proper time is $\tau_{start}=0.5$ fm and the temporal lattice spacing is 0.001 fm. The space grid (which spans the x-y plane) is made of 139 lattice sites in each dimension, separated by $dx=dy=1$ GeV$^{-1}$, and covers the square area $[-13.6,13.6]^2$ fm$^2$. Every 500 time steps (0.5 fm), SONIC takes a “snapshot”, i.e. it writes all the observables into data files, from which we use the temperature and the transverse velocity. Going from the inner to the outer integrations in (\[15\]), we first computed the integration over $\tau d\tau d\zeta$ with the method of parallelepipeds (a Riemann sum). We integrated $\zeta$ over the half range $[0,0.9]$, divided into 20 steps, and then doubled the result (the integrand is symmetric under $\zeta\to-\zeta$). The integrals over the transverse coordinates $x$ and $y$ are computed separately with the trapezoidal rule on the same lattice as the SONIC simulation. The integral $\int d\phi_P$ is computed over $[0,\pi/2]$ with 4 steps and the result is then multiplied by 4 (the system is symmetric under rotations by $\pi/2$). The integration limits for $p_{\perp}$ for the RHIC simulations have been chosen as in STAR, $p_{\perp}\in[0.2,15]$ GeV (we note that contributions from $p_{\perp}>15$ GeV are negligible), while we choose $p_{\perp}\in[0,15]$ GeV for LHC; the integral is computed in 34 steps.
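The nested integration just described can be summarized schematically as follows. This is a sketch, not the actual code: `rate` is a hypothetical stand-in for the boosted emission rate already integrated over the transverse plane, and the $\zeta$ and $\phi_P$ symmetry factors appear explicitly:

```python
import math

def integrate_spectrum(rate, tau_grid, zeta_max=0.9, n_zeta=20, n_phi=4):
    """Schematic nested integration of eq. (15): midpoint sums over
    tau, zeta and phi, with zeta restricted to [0, zeta_max] (doubled)
    and phi to [0, pi/2] (multiplied by 4)."""
    dzeta = zeta_max / n_zeta
    dphi = (math.pi / 2) / n_phi
    total = 0.0
    for i in range(len(tau_grid) - 1):
        tau = 0.5 * (tau_grid[i] + tau_grid[i + 1])
        dtau = tau_grid[i + 1] - tau_grid[i]
        for j in range(n_zeta):
            zeta = (j + 0.5) * dzeta
            for k in range(n_phi):
                phi = (k + 0.5) * dphi
                total += tau * rate(tau, zeta, phi) * dtau * dzeta * dphi
    return 2 * 4 * total  # zeta and phi symmetry factors
```

For a constant integrand the result reduces to $8\,(\int\tau\,d\tau)\,\zeta_{max}\,(\pi/2)$, which fixes the symmetry factors.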
The values of $\frac{dR^{l^{+}l^{-}}}{d^{4}P}$ were tabulated in advance as a function of $T$, $M$ and $p_{\perp}$, and the values of $M$ that we plot always correspond to nodes of the 3D mesh on which $\frac{dR^{l^{+}l^{-}}}{d^{4}P}$ is tabulated. The same was not possible for $p_{\perp}$, because the boost shifts its value. Thus, for a given $p_{\perp}$ and $T$, we find the corresponding $\frac{dR^{l^{+}l^{-}}}{d^{4}P}$ by bilinear interpolation [@rom2]. Results {#res} ======= Predictions for RHIC -------------------- ![Invariant mass spectra of the dilepton emission at RHIC computed at NLO with LPM corrections for different values of the impact parameter b, for central rapidity $y=0$ and transverse momentum $p_{\perp}\in[0.2,15]$ GeV.[]{data-label="rhicbs"}](fig13) In this section, we present the results for the dielectron invariant mass spectrum of the quark gluon plasma (\[15\]) for Au-Au collisions at $\sqrt{s}=200$ GeV at RHIC and for Pb-Pb collisions at $\sqrt{s}=2.76$ TeV at LHC. Moreover, we compare the RHIC results with the experimental data from the STAR experiment. Figure \[rhicbs\] shows our results for the thermal dilepton spectrum (\[15\]) as a function of the invariant mass $M$, for different values of the impact parameter $b$ and for the LO, NLO and NLO + LPM approximations. STAR measures electron-positron pairs from Au-Au collisions at $\sqrt{s}=200$ GeV, as a function of the invariant mass $M$ of the virtual photon and its transverse momentum $p_{\perp}$ [@star]. STAR can capture emitted leptons at all azimuthal angles and for momentum-space rapidity values ${\left|y\right|}<1$. To make a good comparison to experiment, we also integrate the dilepton spectrum over momentum-space rapidity ${\left|y\right|}<1$. Moreover, the data are classified by centrality ranges, so it was necessary to average our results over the impact parameter $b$ in order to reproduce the centrality ranges.
To this aim, we averaged over different impact parameters b as [^2] $$\frac{dN}{dM}(\%centrality)=\frac{\int_{b_{min}}^{b_{max}}db\,b\frac{dN}{dM}(b)}{\int_{b_{min}}^{b_{max}}db\,b} \label{inte_b}$$ where $b_{min}$ and $b_{max}$ can be found as a function of centrality in [@cetrality_rhic; @brhic] [^3]. Figure \[rhic080\] shows the dilepton spectrum for RHIC averaged over all the centralities 0-80%, which, for Au-Au collisions, corresponds to $b=0-13$ fm, for different orders in perturbation theory. ![Invariant mass spectrum of the thermally emitted dileptons computed at LO, NLO and NLO+LPM corrections for the full detected centrality range (0-80%) and comparison with the corresponding data from STAR. Error bars for the STAR data can be found in [@star].[]{data-label="rhic080"}](rhic_0-80) Of course, the experimental data from [@star] include, in addition to the thermal dileptons, highly energetic electron-positron pairs from pre-equilibrium processes and mainly those generated in the following decays: $\omega \rightarrow e^+ e^- \pi ^{0}$, $\pi^0 \rightarrow e^+ e^- \gamma$, $\eta \rightarrow e^+ e^- \gamma$, $\eta^0, \omega \rightarrow e^+ e^-$, $\rho \rightarrow e^+ e^-$, $\phi \rightarrow e^+ e^-$, $J/\psi \rightarrow e^+ e^- X$. For large invariant mass, the thermal contribution quickly becomes small in comparison to the other contributions discussed in the introduction [@contributions; @sources; @Vujanovic:2012nq; @Vujanovic:2013jpa]. In the high mass range $2<M<3$ GeV, the contribution from particle decays is much more important than the thermal one. For $M>3$ GeV the main contribution to the spectrum is given by dielectron pairs produced in pre-equilibrium Drell-Yan processes, and our predictions are, of course, much smaller than the experimental data. The region in which the thermal contribution is dominant is indeed very small, roughly $0.2<M<1.5$ GeV, up to the $\rho, \omega$ and $ \phi$ peaks. In this region the agreement can be tested.
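The impact-parameter average (\[inte\_b\]) can be sketched with composite Simpson's rule, as used in our computation; `dNdM` is a hypothetical stand-in for the spectrum at fixed $b$:

```python
def centrality_average(dNdM, b_min, b_max, n=50):
    """Geometric average of eq. (inte_b): integrate b*dNdM(b) over
    [b_min, b_max] with composite Simpson's rule and divide by the
    weight integral of b over the same range."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of intervals
    h = (b_max - b_min) / n
    def simpson(f):
        s = f(b_min) + f(b_max)
        s += 4 * sum(f(b_min + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
        s += 2 * sum(f(b_min + 2 * k * h) for k in range(1, n // 2))
        return s * h / 3
    return simpson(lambda b: b * dNdM(b)) / simpson(lambda b: b)
```

Since the weight $b$ is linear, the average is exact whenever the spectrum is at most quadratic in $b$; a constant spectrum is returned unchanged.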
![From top to bottom: invariant mass spectra of the dilepton emission at RHIC computed at LO, NLO and NLO with LPM corrections respectively, for different ranges of centrality. In each plot the predictions based on (\[15\]) are compared with the corresponding data from STAR. The predicted spectra consider the rapidity range ${\left|y\right|}<1$ and the transverse momentum range $p_{\perp}\in[0.2,15]$ GeV.[]{data-label="rhic3"}](rhic_lo_parts "fig:")\ ![From top to bottom: invariant mass spectra of the dilepton emission at RHIC computed at LO, NLO and NLO with LPM corrections respectively, for different ranges of centrality. In each plot the predictions based on (\[15\]) are compared with the corresponding data from STAR. The predicted spectra consider the rapidity range ${\left|y\right|}<1$ and the transverse momentum range $p_{\perp}\in[0.2,15]$ GeV.[]{data-label="rhic3"}](rhic_nlo_parts "fig:")\ ![From top to bottom: invariant mass spectra of the dilepton emission at RHIC computed at LO, NLO and NLO with LPM corrections respectively, for different ranges of centrality. In each plot the predictions based on (\[15\]) are compared with the corresponding data from STAR. The predicted spectra consider the rapidity range ${\left|y\right|}<1$ and the transverse momentum range $p_{\perp}\in[0.2,15]$ GeV.[]{data-label="rhic3"}](rhic_full_parts "fig:")\ Figure \[rhic3\] shows the comparison between our predictions and the experimental data for different ranges of centrality, computed as in (\[inte\_b\]). Surprisingly, the LPM corrections overestimate the number of emitted thermal dileptons at small invariant mass, and the NLO approximation is the closest to the experimental data. In the very low mass range $M<0.5$ GeV, perturbation theory breaks down, and a different approach should be used, for example lattice simulations. For $M>0.7$ GeV, our results are somewhat smaller than the experimental data, as expected, since we do not consider all the contributions that are included in the experimental data.
Moreover, we notice that the larger the impact parameter $b$, the farther the predictions lie from the experimental data. This is expected: for large $b$, the volume of the plasma produced is smaller, so the thermal contribution to the invariant mass dilepton spectrum is less important. Predictions for LHC ------------------- Figure \[lhc080\] is the analogue of figure \[rhic080\] but for Pb-Pb collisions at $\sqrt{s}=2.76$ TeV at LHC, with the difference that we kept the rapidity at $y=0$ and integrated over the full transverse momentum range. The number of emitted dielectron pairs is of course larger. ![Invariant mass spectra of the dilepton emission at LHC computed at NLO with LPM corrections for different values of the impact parameter b. The curves are computed at mid-rapidity $y=0$, for the transverse momentum range $p_{\perp}\in[0,15]$ GeV.[]{data-label="lhcbs"}](fig5 "fig:")\ Figures \[lhcbs\] and \[lhc3\] are the analogues of figures \[rhicbs\] (NLO + LPM corrections) and \[rhic3\] respectively. ![Invariant mass spectra of the dilepton emission at LHC computed at LO, NLO and NLO with LPM corrections for collisions with centrality in the range 0-80%. The curves are computed at mid-rapidity $y=0$, for the transverse momentum range $p_{\perp}\in[0,15]$ GeV.[]{data-label="lhc080"}](fig4) Conclusion {#concl} ========== The main result of this work is the comparison between the thermal dilepton rates calculated at LO, NLO and NLO+LPM. The higher order corrections become important for small values of the invariant mass of the virtual photon $M$. Comparing our results with experimental data from the STAR experiment at RHIC, we see that for small invariant mass the NLO+LPM rate seems to overshoot the data. This shows that the LPM effect, even though it damps the rate at very small $M$, actually enhances the rate around $M\sim0.5$ GeV too much to be compatible with the STAR experiment.
The NLO rate seems to fit the experiment best, but it also overshoots the data for $M<0.5$ GeV. For such small values of $M$, a non-perturbative determination seems necessary. We also made predictions for the LHC, where the plasma phase might become more important in comparison to other sources. Results from the LHC would be very useful to settle the tension between the STAR data and higher order calculations of the dilepton rate. ![From top to bottom: invariant mass spectra of the dilepton emission at LHC computed at LO, NLO and NLO+LPM corrections for different ranges of centrality. The curves are computed at mid-rapidity $y=0$, for the transverse momentum range $p_{\perp}\in[0,15]$ GeV.[]{data-label="lhc3"}](lhc_lo_parts "fig:")\ ![From top to bottom: invariant mass spectra of the dilepton emission at LHC computed at LO, NLO and NLO+LPM corrections for different ranges of centrality. The curves are computed at mid-rapidity $y=0$, for the transverse momentum range $p_{\perp}\in[0,15]$ GeV.[]{data-label="lhc3"}](lhc_nlo_parts "fig:")\ ![From top to bottom: invariant mass spectra of the dilepton emission at LHC computed at LO, NLO and NLO+LPM corrections for different ranges of centrality. The curves are computed at mid-rapidity $y=0$, for the transverse momentum range $p_{\perp}\in[0,15]$ GeV.[]{data-label="lhc3"}](lhc_full_parts "fig:") [9]{} J. Adams [*et al.*]{} \[STAR Collaboration\], Nucl. Phys. A [**757**]{} (2005) 102 \[nucl-ex/0501009\]. K. Adcox [*et al.*]{} \[PHENIX Collaboration\], Nucl. Phys. A [**757**]{} (2005) 184 \[nucl-ex/0410003\]. L. D. McLerran and T. Toimela, Phys. Rev. D [**31**]{} (1985) 545. Y. Akiba \[PHENIX Collaboration\], Nucl. Phys. A [**830**]{} (2009) 567C \[arXiv:0907.4794 \[nucl-ex\]\]. F. Geurts \[STAR Collaboration\], J. Phys. Conf. Ser.  [**458**]{} (2013) 012016 \[arXiv:1305.5447 \[nucl-ex\]\]. J. Ghiglieri and G. D. Moore, JHEP [**1412**]{} (2014) 029 \[arXiv:1410.4203 \[hep-ph\]\]. E. L. Bratkovskaya, Nucl. Phys.
A [**931**]{} (2014) 194 \[arXiv:1408.3674 \[hep-ph\]\]. G. M. Yu and Y. D. Li, Phys. Rev. C [**91**]{} (2015) 4, 044908. G. Vujanovic, C. Young, B. Schenke, S. Jeon, R. Rapp and C. Gale, Nucl. Phys. A [**904-905**]{} (2013) 557c \[arXiv:1211.0022 \[hep-ph\]\]. G. Vujanovic, C. Young, B. Schenke, R. Rapp, S. Jeon and C. Gale, Phys. Rev. C [**89**]{} (2014) 3, 034904 \[arXiv:1312.0676 \[nucl-th\]\]. R. Ryblewski and M. Strickland, Phys. Rev. D [**92**]{} (2015) 2, 025026 \[arXiv:1501.03418 \[nucl-th\]\]. I. Ghisoiu and M. Laine, JHEP [**1410**]{} (2014) 83 \[arXiv:1407.7955 \[hep-ph\]\]. H. A. Weldon, Phys. Rev. D [**42**]{} (1990) 2384. J. I. Kapusta and C. Gale, “Finite-temperature field theory: Principles and applications”. Laine’s database is available at the web-site: http://www.laine.itp.unibe.ch/dilepton-lpm/. W. van der Schee, P. Romatschke and S. Pratt, Phys. Rev. Lett.  [**111**]{} (2013) 22, 222302 \[arXiv:1307.2539\]. M. Habich, J. L. Nagle and P. Romatschke, Eur. Phys. J. C [**75**]{} (2015) 1, 15 \[arXiv:1409.0040 \[nucl-th\]\]. J. D. Bjorken, Phys. Rev. D [**27**]{} (1983) 140. G. Roland (for the PHOBOS Collaboration), Nucl. Phys. A [**774**]{} (2006). C. Nonaka and M. Asakawa, PTEP [**2012**]{} (2012) 01A208 \[arXiv:1204.4795 \[nucl-th\]\]. Source codes are available at the web-site https://sites.google.com/site/revihy/. P. Romatschke and U. Romatschke, Phys. Rev. Lett. [**99**]{} (2007) 172301 \[arXiv:0706.1522\]. M. Luzum and P. Romatschke, Phys. Rev. C [**78**]{} (2008) 034915 \[arXiv:0804.4015\]. The setup and tests are documented in R. Baier, P. Romatschke, D.T. Son, A. Starinets and M. Stephanov, JHEP [**0804**]{} (2008) 100 \[arXiv:0712.2451\]. M. Laine and Y. Schroder, Phys. Rev.  D [**73**]{} (2006) 085009 \[arXiv:hep-ph/0603048\]. J. Sollfrank, P. Koch and U. W. Heinz, Z. Phys.  C [**52**]{} (1991) 593. J. Sollfrank, P. Koch and U. W. Heinz, Phys. Lett.  B [**252**]{} (1990) 256. K. Reygers, unpublished, available at\ http://www.phenix.bnl.gov/$\sim$enterria/tmp/\ glauber/glauber\_auau\_200gev.pdf D. Kharzeev and M. Nardi, Phys.  Lett. B [**507**]{} (2001). B. Abelev [*et al.*]{} \[ALICE Collaboration\], Phys. Rev. C [**88**]{} (2013) 4, 044909 \[arXiv:1301.4361 \[nucl-ex\]\]. [^1]: The sum of relativistic velocities is not commutative. According to the Bjorken model, we make the hypothesis that, in the central rapidity plateau, the transverse evolution of the plasma is the same at any rapidity. Thus a generic slice of plasma at a generic rapidity $\zeta_0$ evolves radially exactly like the $\zeta=0$ slice described by the hydro model. We want to obtain the total velocity of a generic volume element of the plasma with respect to the LAB frame where $p^{\mu}$ is measured. The LAB frame, in our case, corresponds to the CM frame; thus, every time we fix $p^{\mu}$ in the LAB frame, we first have to make a boost along the longitudinal axis to the center of the “$\zeta_0$-plasma slice” and then add the transverse velocity relative to that point, obtained from the hydrodynamic simulations. [^2]: The integral over $y$ and the one in the numerator of (\[inte\_b\]) have been computed with Simpson’s rule for parabolic integration. Considering the dependence of the emission rate on the impact parameter (see references [@cetrality_rhic; @brhic]), this should give the exact solution of the integral in the range $b\in[0,13]$ fm. [^3]: Analogous tables for the LHC are given in ref. [@blhc].
--- author: - David Eppstein - 'Michael T. Goodrich' - Michael Mitzenmacher - Paweł Pszona bibliography: - 'refs.bib' title: 'Wear Minimization for Cuckoo Hashing: How Not to Throw a Lot of Eggs into One Basket' --- > “I did throw a lot of eggs into one basket, as you do in your teenage years...”\ >  —Dylan Moran [@moran] Introduction ============ Algorithm ========= Analysis ======== Experiments =========== Acknowledgements {#acknowledgements .unnumbered} ---------------- This research was supported in part by NSF grants 1011840, 1228639, and ONR grant N00014-08-1-1015. Michael Mitzenmacher was supported in part by NSF grants CCF-1320231, CNS-1228598, IIS-0964473, and CCF-0915922, and part of this work was done while visiting Microsoft Research New England.
--- abstract: 'From optical photometry the cataclysmic variable RXJ0944.5+0357 is shown to have a double-peaked pulse profile with a period $\sim$ 2160 s. The two peaks vary rapidly in relative amplitude. Often most of the optical power is concentrated in the first harmonic of the 2160 s modulation; RXJ0944.5+0357 therefore probably belongs to the relatively rare class of two-pole accreting intermediate polars exemplified by YY Dra and V405 Aur.' author: - 'Patrick A. and Brian' date: 30 October 2002 title: 'RXJ0944.5+0357: A Probable Intermediate Polar[^1]' --- Introduction ============ Many of the X-Ray sources in the ROSAT All-Sky Survey have been identified optically in the Hamburg objective prism survey (Hagen et al. 1995), among which are several cataclysmic variables (CVs) (Jiang et al. 2000). The source RXJ0944.5+0357 (= 1RXSJ094432.1+035738; hereafter RXJ0944), in the constellation Sextans, was observed spectroscopically by Jiang et al. and found to have HI and HeI emission lines typical of a CV. Further spectroscopic study by Mennickent et al. (2002) showed the presence of absorption bands in the red, characteristic of a secondary with a spectral type near M2. Observations by the VSNET group have identified two dwarf nova-like outbursts, in January and June 2001, during which RXJ0944 rose to V $\sim$ 13 from its quiescent magnitude of V $\sim$ 16.2. Mennickent et al. confirmed the spectroscopically determined orbital period ($P_{orb}$) of 0.1492 d (3.581 h) reported to them by Thorstensen & Fenton. Mennickent et al. also provided the first high speed photometry of RXJ0944 in which large amplitude variations ($\sim$ 0.5 mag) were found on time scales of 10 min to 2 h. They did not report any coherent signals in their photometry. 
Photometric Observations ======================== We have used the University of Cape Town CCD Photometer (O’Donoghue 1995), attached to the 74-in and 40-in telescopes at the Sutherland site of the South African Astronomical Observatory, to observe RXJ0944 at time resolutions down to 6 s. Table 1 gives the log of our photometric observations and Figure \[fig1\] shows the resulting light curves.

  --------- ------------------- -------------------- -------- ----------- ------- ---------
  Run No.   Date of obs.        HJD of first obs.    Length   $t_{in}$    Tel.    $<$V$>$
            (start of night)    (+2452000.0)         (h)      (s)                 (mag)
  --------- ------------------- -------------------- -------- ----------- ------- ---------
  S6324     22 March 2002       356.25290            4.98     15          40-in   16.4
  S6331     23 March 2002       357.40201            1.47     15          40-in   16.5
  S6340     25 March 2002       359.28466            0.89     30          40-in   16.5
  S6341     02 April 2002       367.23471            4.96     6           74-in   16.6:
  S6350     06 April 2002       371.23004            1.28     8           74-in   16.5:
  S6362     09 April 2002       374.22997            3.05     20          40-in   16.5
  S6366     10 April 2002       375.28856            2.46     20          40-in   16.3
  S6370     11 April 2002       376.22836            3.26     20          40-in   16.5
  S6386     14 April 2002       379.23542            4.09     45          40-in   16.5
  S6391     15 April 2002       380.25135            3.36     10          40-in   16.5
  --------- ------------------- -------------------- -------- ----------- ------- ---------

  : Observing log. [Notes: ‘:’ denotes an uncertain value, $t_{in}$ is the integration time.]{} \[tab1\]

A Fourier Transform (FT) of the entire data set shows no power at the spectroscopic period or its first harmonic, so we deduce that RXJ0944 is of quite low inclination. From the radial velocity amplitude of 75 km s$^{-1}$ Mennickent et al. reasoned that the inclination probably lies in the range $30^{\circ} - 60^{\circ}$; our result indicates that it is probably at the lower end of this range. A low inclination is also compatible with the weakness of the emission lines in the spectrum. It was obvious early in our work that RXJ0944 has a repetitive brightness modulation with a period $\sim$ 2000 s.
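The mean light curves discussed below are obtained by folding the photometry on a trial period. A minimal sketch of the folding step (pure Python; function name ours):

```python
import math

def fold_light_curve(times, mags, period, n_bins=20):
    """Fold a photometric time series on a trial period (same time units)
    and return the mean magnitude in each of n_bins phase bins,
    i.e. the 'mean light curve'."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for t, m in zip(times, mags):
        phase = (t % period) / period          # phase in [0, 1)
        b = min(int(phase * n_bins), n_bins - 1)
        sums[b] += m
        counts[b] += 1
    return [s / c if c else float('nan') for s, c in zip(sums, counts)]
```

Folding a synthetic sinusoidal signal on its true period recovers the modulation; folding on a wrong trial period washes it out, which is the basis for period searches of this kind.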
With further observations it could be seen that the feature is a double humped profile, with the two humps varying independently and rapidly in amplitude. In Figure \[fig2\] we show the light curve of run S6324 on a larger scale, with the cyclic modulation marked, and its highly variable pair of peaks. The FT for this run discloses a fundamental period at $\sim$ 2220 s plus its first harmonic. There are only six cycles of this modulation in the light curve, so the uncertainty of the period is large (at least $\sim$ 40 s). The mean light curve, folded on the fundamental period of 2162 s as derived below, is given in Figure \[fig3\] and shows the double humped nature of the profile, and that the humps sit on plateaux with only short-lived dips between them. (We removed the strong flare seen at HJD 2452356.418 in Figure \[fig2\] as not representative; it probably resulted from a sudden short-lived surge of mass transfer.) In the mean light curve, the two peaks occur at about phases 0.26 and 0.68, respectively. The peaks on the plateau appear as flares of variable width, so that adding more observations tends to even out their contributions, with the result that the mean light curve for the entire data set (using the period of 2162 s), shown in Figure \[fig4\], has largely lost the evidence for the doubling of the profile. The FT for the full set of observations is given in Figure \[fig5\], and shows clearly the humps of power near the $\sim$ 2000 s fundamental and its first and second harmonics. There is a great deal of complicated fine structure in the FT, beyond what is produced by the window pattern; this is caused by the rapid amplitude modulation of the fundamental and its harmonics. It is not possible to select unambiguous frequencies from the forest of aliases.
However, the highest peak in the neighbourhood of the fundamental modulation is at 2162 s and the highest peak at the first harmonic is 1079 s, which supports the choice of a fundamental period near 2160 s. There are other humps of power in the total FT, but by subdividing our data (in particular, treating the March and April data sets separately) we find that the FT is non-stationary – only the 2160 s modulation and its harmonics are persistent features. Given the high activity in the light curves (Figure \[fig1\]) it is not surprising that the FT is also very variable. We find no evidence for rapid oscillations in brightness (Dwarf Nova Oscillations – typically with periods in the range 5–50 s: see Warner 1995), but in run S6341 we find a Quasi-Periodic Oscillation (QPO; see Warner 1995) with a mean period of 351 s and amplitude 0.013 mag. This is clearly seen in the light curve and maintains coherence for about 6 cycles between each major change of phase. Discussion ========== The presence of two distinct coherent periodicities in a CV is the recognised signature of an intermediate polar (IP) in which the non-orbital modulation is the spin period ($P_{sp}$) of the white dwarf primary, or its orbital side band (see, e.g., Warner 1995). X-ray emission is another common feature of IPs, resulting from accretion from the inner edge of the accretion disc onto the magnetic pole(s) of the white dwarf. We therefore conclude that RXJ0944 is most probably an IP with highly variable two-pole accretion. With $P_{orb}$ = 3.581 h and $P_{sp}$ = 36.0 min, RXJ0944 is quantitatively similar to canonical IPs such as FO Aqr and TV Col. However, the double-humped light curve and other properties make it most similar to YY Dra, as can be seen from the following brief review of the latter’s properties. YY Dra is a dwarf nova at V $\sim$ 16.0 quiescent magnitude with a mean outburst interval of 870 d and amplitude 5.5 mag, a $P_{orb}$ of 3.96 h and a $P_{sp}$ of 529.2 s. 
Both the spin period and the orbital sideband (at 550 s) have been detected in the optical region (Patterson et al. 1992). YY Dra is the X-ray source 3A1148+719 and the spin modulation is seen in the X-ray emission (Patterson & Szkody 1993). An M-type spectrum of the secondary is visible in the red, which is not normal for CVs with $P_{orb}$ $\sim$ 4 h, suggesting a lower-luminosity disc, probably the result of its central truncation by the magnetosphere of the primary. HST observations of YY Dra (Haswell et al. 1997) show that the UV emission line profiles are modulated at half the spin period, and that there is simultaneous presence of broad red and blue wings in CIV emission, which is interpreted as evidence for two-pole accretion. This is in accord with the double-humped pulse profiles in the optical and X-ray regions, and the occasional variations in height of the two peaks (though YY Dra is also notable for the near equality of its accretion pole luminosities most of the time). During an outburst of YY Dra, X-ray emission greatly increased and the 529 s oscillation was usually visible, but disappeared near maximum, which is interpreted as possibly due to the equality of accretion onto two extended poles (Szkody et al. 2002). There are other IPs with evidence of two-pole accretion: V405 Aur (Haberl et al. 1994) is similar in having its principal optical modulation at the first harmonic rather than the 545 s fundamental (Allan et al. 1996); 1WGA J1958.2+3232 ($P_{sp}$ = 1467 s) has a double-peaked profile which shows reversal of circular polarization between the peaks, confirming that it is a two-pole accretor (Norton et al. 2002); and Still, Duck & Marsh (1998) have found spectroscopic evidence that RX J0558+5353 is a two-pole accretor.
The basic similarity of the optical photometric properties of RXJ0944 and YY Dra is evident, so we conclude that the same model, namely two-pole accretion, is the most probable description of RXJ0944, though with more variable and independent accretion rates onto the two poles. The phases of the two peaks determined from the mean light curve (Figure \[fig3\]) are not half a cycle apart, indicating that the magnetic poles are not diametrically opposite on the surface of the primary. For an IP, however, the strengths of the HeII and CIII/NIII emission lines at 4686 [Å]{} and 4650 [Å]{} (Mennickent et al. 2002) are relatively weak – but the fact that they are seen at all is uncharacteristic of an ordinary dwarf nova. Conclusion ========== RXJ0944 has the spectroscopic and photometric characteristics of an IP and would be worth studying in a pointed X-ray observation, in order to detect any modulation and thereby determine whether the 36.0 min periodicity in the optical is the white dwarf spin period or an orbital sideband. A time-resolved spectroscopic study with the HST should also be undertaken. RXJ0944 also has interest as a dwarf nova – there are other intermediate polars that show full dwarf nova outbursts (XY Ari is an example) and others that have abbreviated outbursts (e.g. V1223 Sgr). High speed photometry during an outburst of RXJ0944 could help to reveal the interaction between disc and magnetosphere as the rate of mass transfer increases and decreases. We thank Drs S. Potter and P. Rodriguez-Gil for allowing us to use their light curve of RXJ0944. PAW is supported by funds from the University of Cape Town and the National Research Foundation; BW is supported by funds from the University. Allan, A., Horne, K., Hellier, C., Mukai, K., Barwig, H., Bennie, P. J. and Hilditch, R.W.: 1996, [*Mon. Not. R. astr. Soc.*]{} 279, 1345. Haberl, F., Thorstensen, J.R., Motch, C., Schwarzenberg-Czerny, A., Pakull, M., Shambrook, A. and Pietsch, W.: 1994, [*Astron. 
Astrophys.*]{} 291, 171. Hagen, H.-J., Groote, D., Engels, D. and Reimers, D.: 1995, [*Astron. Astrophys. Suppl.*]{} 111, 195. Haswell, C. A., Patterson, J., Thorstensen, J. R., Hellier, C. and Skillman, D. R.: 1997, [*Astrophys. J.*]{} 476, 847. Jiang, X. J., Engels, D., Wei, J. Y., Tesch, F. and Hu, J. Y.: 2000, [*Astron. Astrophys.*]{} 362, 263. Mennickent, R. E., Tovmassian, G., Zharikov, S. V., Tappert, C., Greiner, J., Gaensicke, B. T. and Fried, R. E.: 2002, [*Astron. Astrophys.*]{} 383, 933. Norton, A. J., Quaintrell, H., Katajainen, S., Lehto, H. J., Mukai, K. and Negueruela, I.: 2002, [*Astron. Astrophys.*]{} 384, 195. O’Donoghue, D.: 1995, [*Baltic Astr.*]{} 4, 517. Patterson, J. and Szkody, P.: 1993, [*Publ. Astron. Soc. Pacific*]{} 105, 1116. Patterson, J., Schwartz, D. A., Pye, J. P., Blair, W. P., Williams, G. A. and Caillault, J.-P.: 1992, [*Astrophys. J.*]{} 392, 233. Still, M. D., Duck, S. R. and Marsh, T. R.: 1998, [*Mon. Not. R. astr. Soc.*]{} 299, 759. Szkody, P., Nishikida, K., Erb, D., Mukai, K., Hellier, C., Uemura, M., Kato, T., Pavlenko, E., Katysheva, N., Shugarov, S. and Cook, L.: 2002, [*Astron. J.*]{} 123, 413. Warner, B.: 1995, [*Cataclysmic Variable Stars*]{}, Cambridge Univ. Press, Cambridge. [^1]: This paper uses observations made from the South African Astronomical Observatory (SAAO).
--- abstract: 'Short gamma-ray bursts (GRBs), typically lasting less than 2 s, are a special class of GRBs of great interest. We report the detection by the AGILE satellite of the short GRB 090510, which shows two clearly distinct emission phases: a prompt phase lasting $\sim 200$ msec and a second phase lasting tens of seconds. The [prompt phase]{} is relatively intense in the 0.3-10 MeV range with a spectrum characterized by a large peak/cutoff energy near 3 MeV; [in this phase, no significant high-energy gamma-ray emission is detected]{}. [At the end of the prompt phase, intense gamma-ray emission above 30 MeV is detected, showing]{} a power-law time decay of the flux of the type $t^{-1.3}$ and a broad-band spectrum remarkably different from that of the prompt phase. It extends from sub-MeV to hundreds of MeV energies with a photon index $\alpha \simeq 1.5$. [GRB 090510 provides the first case of a short GRB with delayed gamma-ray emission.]{} We present the timing and spectral data of GRB 090510 and briefly discuss its remarkable properties within the current models of gamma-ray emission of short GRBs.' author: - 'A. Giuliani, F. Fuschino, G. Vianello, M. Marisaldi, S. Mereghetti, M. Tavani, S. Cutini, G. Barbiellini, F. Longo, E. Moretti, M. Feroci, E. Del Monte, A. Argan, A. Bulgarelli, P. Caraveo, P. W. Cattaneo, A. W. Chen, T. Contessi, F. D’Ammando, E. Costa, G. De Paris, G. Di Cocco, I. Donnarumma, Y. Evangelista, A. Ferrari, M. Fiorini, M. Galli, F. Gianotti, C. Labanti, I. Lapshov, F. Lazzarotto, P. Lipari, A. Morselli, L. Pacciani, A. Pellizzoni, F. Perotti, G. Piano, P. Picozza, M. Pilia, G. Pucella, M. Prest, M. Rapisarda, A. Rappoldi, A. Rubini, S. Sabatini, E. Scalise, E. Striani, P. Soffitta, M. Trifoglio, A. Trois, E. Vallazza, S. Vercellone, V. Vittorini, A. Zambra, D. Zanello, C. Pittori, F. Verrecchia, P. Santolamazza, P. Giommi, S. Colafrancesco, L.A. Antonelli, L.
Salotti' title: 'AGILE detection of delayed gamma-ray emission from the short gamma-ray burst GRB 090510' --- Introduction ============ Gamma-ray bursts (GRBs) are the most energetic explosions in our Universe but only a few bursts were detected at gamma-ray energies above 100 MeV. The EGRET instrument on board the [Compton Gamma Ray Observatory]{} during its 6-year lifetime detected 5 GRBs above 100 MeV [@dingus2001]. Today, the currently operating AGILE and *Fermi* satellites have doubled the sample of GRBs detected at these energies ([e.g., Giuliani et al. 2008, McEnery et al. 2008, Abdo et al. 2009]{}). However, the great majority of GRBs with detected photons above 100 MeV are long bursts with typical durations above 2 seconds: they are possibly associated with explosions of massive stars. Much less is known about the high-energy properties of [*short*]{} GRBs that show durations below 2 seconds. These short events are usually hard compared to the average properties of GRBs and are believed to be associated with the coalescence of neutron-star binaries [ (but see Zhang et al. 2009 for a more thorough discussion of the GRB classification and possible origin of the different classes)]{}. It is then very important to establish the gamma-ray properties of short GRBs. Before the advent of AGILE and *Fermi* no short GRB was detected above a few MeV. The first short-GRB detection in the gamma-ray energy band was by *Fermi*: GRB 081024B (lasting about 0.8 s in the MeV range) was detected up to 3 GeV within the first 5 sec after trigger [@omodei2008; @connaughton]. We report here the *AGILE* detection of GRB 090510, the second short GRB detected above 100 MeV. The Italian AGILE satellite for gamma-ray astronomy has been operating since 2007 April [@tavani-1]. The Gamma-Ray Imaging Detector (GRID, Barbiellini et al., 2002) on board AGILE covers one fifth of the sky in the 30 MeV – 30 GeV energy range.
This large field of view, together with a gamma-ray detection deadtime of order $\sim$100 $\mu$s, makes it particularly suited for the observation of GRBs. The GRID high-energy data are complemented by those of other detectors on board the satellite, which operate in different energy ranges. Super-AGILE provides GRB localizations, lightcurves and spectra in the 18–60 keV range [@feroci09; @Feroci2007; @Del_Monte_et_al_2008]. The Mini-Calorimeter (MCAL), besides being used as part of the GRID, can be used to autonomously detect and study GRBs in the 0.35–100 MeV range with excellent timing [@Labanti2009; @Marisaldi2008]. Finally, GRB lightcurves in the hard X-ray band can be obtained also from the GRID anticoincidence scintillator panels [@perotti]. GRB 090510 ========== GRB 090510 was discovered and precisely localized by the *Swift* satellite [@gehrels04] with coordinates (J2000) R.A. = 22h 14m 12.47s, Dec.=–26d 35' 00.4" [@GCN_9331]. This burst was quite bright, with peak flux $\sim$10 ph cm$^{-2}$ s$^{-1}$ [in the energy band 15-150 keV]{} [@GCN_9337], and was also independently detected by Konus-Wind [@GCN_9344], Suzaku-WAM [@GCN_9335] and *Fermi*-GBM [@GCN_9336]. The main emission lasts about 0.2 s with a multi-peak structure. Follow-up observations of the optical transient of GRB 090510 led to the determination of the redshift $z= 0.903 \pm 0.003$ [@rau]. This GRB occurred at the border of the standard AGILE-GRID field of view, at an off-axis angle of 61 degrees. At this large off-axis angle, the AGILE-GRID effective area is $\sim$100 cm$^{2}$ for photon energies above 25 MeV. A quick look analysis of the GRID data showed an excess of photons above 30 MeV consistent with the direction of GRB 090510 [@GCN_9343]. GRB 090510 was also clearly detected in the 0.3-10 MeV energy range with the AGILE-MCAL, while it was not detected by Super-AGILE, owing to its large off-axis position. Emission above a few tens of MeV was also detected by the *Fermi*-LAT instrument [@GCN_9334].
AGILE Timing and Spectral Data ============================== In the following, we refer all the times to $T_0$ corresponding to 00:23:00.5 UT of 2009, May 10. This corresponds to the time of the sharp initial increase of the GRB lightcurve in the MCAL detector. Based on the properties of the $0.3-10$ MeV and $\geq 25$ MeV emissions of GRB 090510, which show a clear dichotomy between the low- and high-energy gamma-ray emissions, we define two time intervals, Interval I from $T_0$ to $T_0+0.20$ s and Interval II from $T_0+0.20$ s to $T_0+1.20$ s (see figure \[fig-2\]). The prompt phase (Interval I) ----------------------------- The lightcurves of GRB 090510 obtained with the AGILE-MCAL in the 0.3-10 MeV range are shown in Fig. \[fig-1a\]. As seen by MCAL the burst has a duration (T90) of $184\pm6$ ms. During the T90 time interval MCAL recorded from the source more than 1000 counts above 330 keV, with an expected background of 60 counts over the same time interval. The peak flux of 18000 counts/s in a 1 ms time bin was reached at time $T_0 + 0.024$ s. To date, this is the brightest short burst detected by MCAL in the GRID field of view. In the T90 time interval the observed emission can be divided into three main pulses, each of them showing millisecond time variability. [At $T_0 - 0.55$ s a soft precursor lasting 15 ms is detected up to 700 keV]{}, while at $T_0 + 0.29$ s another 15 ms peak is evident, with significant detection up to a few MeV. ![Lightcurve of GRB 090510 as detected by the AGILE-MCAL detector in different energy ranges. The time bin is 4 msec.[]{data-label="fig-1a"}](f1.eps){width="9.cm"} Most of the soft-gamma emission (E$\leq$10 MeV) is concentrated in *Interval I* (between $T_0$ and $T_0+0.20$ s), where no high-energy (E$\geq$25 MeV) photons were detected. We derive the flux spectrum for Interval I using the MCAL data. We find that the averaged MCAL spectrum is well described by a power-law model with exponential cutoff (reduced $\chi^2$ of 0.8 for 23 d.o.f.).
The photon index is $\alpha_1 = 0.65 (-0.32, +0.28)$ and the exponential cutoff energy is $E_{c}= 2.8 (-0.6, +0.9)$ MeV. The integrated fluence (500 keV $ \leq E \leq 10$ MeV) during this interval is $F = 1.82 (-0.41, +0.09) \times 10^{-5}$ erg cm$^{-2}$; all the errors for MCAL results reported throughout this paper are at the 90% confidence level. The GRID upper limit (at $3$-$\sigma$ c.l.) is consistent with the extrapolation of this spectrum. [The top panel of Fig. 4 shows the Interval I spectrum.]{} [We also notice a substantial soft-to-hard spectral evolution during Interval I. If we define a hardness ratio as $HR =$ (counts above 1 MeV)/(counts below 1 MeV), we obtain $HR \sim 0.6 \pm 0.1$ during the first peak (between $T_0$ and $T_0+0.12$) and $HR \sim 0.9 \pm 0.1$ during the second phase of Interval I (between $T_0+0.12$ and $T_0+0.20$).]{} A remarkable absence of gamma-ray events during Interval I is evident. In fact, the first GRID events are detected only at the end of the prompt emission. Note that a backward extrapolation to $t=T_0+0.01$ s of the GRID power-law lightcurve of Fig. 3, discussed in the next section, would predict 28 photons in Interval I, while none was observed. ![(Top panel:) Energies versus arrival time of the GRB photons detected by the AGILE-GRID in the 25 MeV – 1 GeV energy range. [ Note the remarkable absence of gamma-ray events before $T_0 + 0.2$ s]{}. (Lower panel:) 0.3 – 10 MeV light curve measured with the AGILE-MCAL detector. The dashed line separates *Interval I* from *Interval II* (see text).[]{data-label="fig-2"}](f2.eps){width="9cm" height="11cm"} The delayed emission phase: Interval II and tail ------------------------------------------------ The phase immediately following the sharp decay of the MeV flux at the end of Interval I shows a significant tail of MeV emission and the presence of a strong gamma-ray component above 30 MeV.
We consider here Interval II and a following tail of emission (lasting up to $T_0+10$ s). During Interval II and the tail, MCAL continues to detect significant emission in the 500 keV-10 MeV energy range with a spectrum significantly different from that of Interval I. Indeed, the derived power-law distribution now has photon index $\alpha_2 = -1.58 (-0.11, +0.13)$. Significant emission is detected in the MCAL highest energy channels with no sign of cutoff. The Interval II fluence in the 0.5-10 MeV energy range is $F_2 = 3.1 (-0.7, +0.6) \times 10^{-6} \, \rm erg \, cm^{-2}$. To search for emission above 30 MeV, we selected GRID gamma-ray events within 15 degrees of the burst position. For this analysis we used all the GRID events with reliable direction and energy reconstructions, resulting in 15 events in the time interval from $T_0$ to $T_0+10$ s. The expected number of background events in this time interval is 1.4, implying that the GRB is detected above 30 MeV with a $\geq 5$-$\sigma$ statistical significance. The energy and arrival times of the GRID events are compared with the MCAL lightcurve in Fig. \[fig-2\]. The GRID high-energy emission lasts for a few tens of seconds after the end of Interval I. The time evolution of the gamma-ray emission from GRB 090510 can be remarkably well described by a power-law decay, as shown in Fig. \[fig-5\] (top panel). We model it with a function given by $$ F(t) \propto t^{-\delta} \;\; \mathrm{for} \;\; t \geq T_0 + T_1$$ and $ F(t)=0 \;\; \mathrm{for} \;\; t \leq T_0 + T_1$, and find that $T_1 = 0.2$ s and $\delta = 1.30 \pm 0.15$ give the highest probability to reproduce the observed times of arrival. The corresponding power-law is plotted in Fig. 3 (top panel). The background flux measured in the 1000 seconds before trigger is also shown in the figure by the dashed horizontal line. ![(Top panel:) AGILE-GRID gamma-ray lightcurve of GRB 090510 for photon events within a sky region of radius 15 degrees.
The inclined dashed line corresponds to a power-law time decay $t^{-\delta}$, with $\delta = 1.3$. The horizontal dashed line corresponds to the background flux measured in the 1000 seconds before the trigger. The solid line is the sum of the two components.\ (Bottom panel:) Energies versus arrival time of the GRB photons detected by the AGILE-GRID in the 25 MeV – 1 GeV energy range. Note that after $T_0+10$ s the detected counts are compatible with the background, as shown by the light curve in the top panel.](f3.eps){width="9cm"} \[fig-5\] The energy distribution of the GRID photons during the time interval $T_0$ to $T_0+10$ s is consistent with a power-law spectrum of photon index $\alpha_3 = 1.4 \pm0.4$ ($1$-$\sigma$ c.l.). For this spectrum the 25 MeV – 500 MeV fluence in the same time interval is $(1.51\pm0.39)\times10^{-1}$ ph cm$^{-2}$, corresponding to $F_3 = (2.90\pm0.75)\times10^{-5} \rm \, erg \,cm^{-2}$. ![(Top panel:) gamma-ray spectrum of GRB 090510 for *Interval I*. (Bottom panel:) gamma-ray spectrum for *Interval II*[]{data-label="spec"}](f4.eps "fig:"){width="9cm" height="6cm"} ![(Top panel:) gamma-ray spectrum of GRB 090510 for *Interval I*. (Bottom panel:) gamma-ray spectrum for *Interval II*[]{data-label="spec"}](f5.eps "fig:"){width="9cm" height="6cm"} To compute the GRID flux in Interval II, we assumed the same spectrum measured in the long "tail" between $T_0$ and $T_0+10$ s (photon index $-1.45\pm0.07$), and extrapolated the light curve with the best-fit power-law decay. This gives $F'_2 = 2.12 \times 10^{-5}$ erg cm$^{-2}$ (25 MeV $ \leq E \leq 500$ MeV). The MCAL + GRID spectrum of Interval II is shown in the bottom panel of figure \[spec\]. Discussion ========== Even though a theoretical investigation is beyond the scope of this paper, we can briefly emphasize here a few relevant points. \(1) The broad-band emission of GRB 090510 shows very distinct radiation phases during Interval I and the following delayed emission phase.
Prompt gamma-rays above 30 MeV are absent during Interval I, but they constitute a crucial component during Interval II and the following tail. [ Remarkably, this is the first case of a delayed rise of the gamma-ray emission above 25 MeV detected in a short GRB. A similar behaviour was shown by the long GRB 080916C [@abdo09a]. This fact suggests that the same process responsible for high-energy gamma-ray production takes place, in both long and short GRBs, independently of the central engine.]{} \(2) The prompt phase (Interval I) spectrum is peaked at $E_p = 3.78$ MeV. Comparing with other short GRBs (see Ghirlanda et al. 2009) we find that the $E_p$ for GRB 090510 is the highest peak energy ever recorded for a short GRB (about 2.4-$\sigma$ greater than the mean value for short GRBs). Also the rest-frame peak energy ($E_p^{rest} = 7.19$ $(-1.54, +2.31)$ MeV) for GRB 090510 is greater than the $E_p^{rest}$ for the other short GRBs with known redshift. The large value of $E_p^{rest}$, combined with a quite usual value of the isotropic (comoving) energetics ($E_{iso,1} = 3.91$ $(-0.88, +1.91)$ $\times 10^{52}$ erg in the whole energy range) and peak luminosity ($L_{iso,1} = 7.74$ $(-1.74, +3.79)$ $\times 10^{53}$ erg/s) implies that GRB 090510 follows neither the $E_p^{rest}-E_{iso}$ Amati relation (Amati et al. 2002) nor the $E_p^{rest}-L_{iso}$ Yonetoku relation (Yonetoku et al. 2004). To our knowledge this is the first short GRB that does not follow the Yonetoku relation.\ No significant emission is detected above 10 MeV, implying a rather strong constraint on any possible power-law emission above $E_c$ ($\beta < -3.2$ at the 90% confidence level). [(3) The prompt phase shows a significant soft-to-hard spectral evolution. As can be inferred from Fig.
1 and from the MCAL hardness ratio calculations, the last peaks of Interval I are harder than the first peak.]{} \(4) Gamma-ray emission above 25 MeV extends in time for tens of seconds, i.e., well beyond the prompt phase duration, and shows a temporal behavior consistent with a power-law of index $\delta = 1.30 \pm 0.15$. \(5) The total isotropic energy of Interval II and tail is larger than that of Interval I: by summing the MCAL and GRID contributions to the emission, we obtain for the delayed phase $E_{iso,2} = 4.8$ $\times 10^{52}$ erg. [(6) The temporal index $\delta = 1.3$ is substantially different from that ($\delta'= 0.75$) subsequently measured by *Swift*-XRT between 80 and 1400 sec after trigger [@GCN_9341]. This last phase can be attributed to an afterglow with spectral and temporal characteristics in agreement with expectations of fireball models [@zhang]. ]{} High-energy emission from GRB 090510 can have different physical origins at different phases. It is possible to evaluate a lower limit for the Lorentz factor of the emitting regions in Intervals I and II, on the basis of their spectral features and time-scale variability (Lithwick et al. 2001). During Interval I the energy of the highest bin of the spectrum with significant detection is $E_{max}=20$ MeV. A physical scenario with the minimum Lorentz factor compatible with the data corresponds to a shell, optically thick for photons of energy greater than $m_e c^2$ (in the shell rest frame), moving with $\Gamma_I \geq (1+z)E_{max}/m_e \, c^2 \simeq 80$. Otherwise, assuming that the emitting region is optically thin also for photons with energy greater than $m_e c^2$, a larger Lorentz factor is needed ($\Gamma_I \geq 150$), due to the fast variability during this phase, according to equation 5 of Lithwick et al. (2001). Emission during Interval II and the following tail appears to be of a very different nature, and is clearly non-thermal.
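The derived quantities quoted in this section follow from simple algebra on the fitted parameters; a minimal numerical cross-check (our illustration, using only values stated in the text, not part of the AGILE analysis chain) is:

```python
alpha_1 = 0.65      # Interval I photon index (MCAL fit)
E_c     = 2.8       # MeV, exponential cutoff energy
z       = 0.903     # redshift (Rau et al. 2009)
m_e_c2  = 0.511     # MeV, electron rest energy

# For a photon spectrum N(E) ~ E**(-alpha) * exp(-E / E_c), the nu-F-nu
# peak falls at E_p = (2 - alpha) * E_c.
E_p = (2.0 - alpha_1) * E_c          # 3.78 MeV, as quoted above
E_p_rest = (1.0 + z) * E_p           # ~7.19 MeV in the burst rest frame

# Compactness bound for Interval I (optically thick case):
# Gamma_I >= (1 + z) * E_max / (m_e c^2), with E_max = 20 MeV.
E_max = 20.0
Gamma_I_min = (1.0 + z) * E_max / m_e_c2   # ~75, i.e. Gamma_I of order 80
```

The bound evaluates to $\approx 75$, comparable to the rounded value $\Gamma_I \gtrsim 80$ quoted above; the difference reflects rounding of the input parameters.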
Several mechanisms can be at work, depending on the external environment and radiative conditions. Both synchrotron and inverse Compton (IC) emitting regions characterized by impulsively energized particles can be important contributors. The ultimate origin of fast and efficient acceleration is believed to be hydrodynamical shocks produced by expanding matter ejecta. Internal (IS) and external (ES) shocks can in principle contribute to both the synchrotron and IC emissions, and several models have been recently proposed to address the issue of the prompt vs. the so-called “delayed” high-energy emission from GRBs. During Interval II the largest photon energy detected is $E_{max}=350$ MeV. The minimum Lorentz factor compatible with $E_{max}$ is $\Gamma_{II} \geq 200$. We postpone an investigation of these issues to forthcoming publications. The *AGILE* Mission is funded by the Italian Space Agency (ASI) with scientific and programmatic participation by the Italian Institute of Astrophysics (INAF) and the Italian Institute of Nuclear Physics (INFN). Abdo A. B. et al., 2009, Science 323, 1688 Amati L. et al., 2002, A&A, 390 Barbiellini G. et al., 2002, NIM A 490, 146 Connaughton V. et al., 2008, GCN 8408 Del Monte E. et al., 2007, Frascati Physics Series, 45, 201 Del Monte E. et al., 2008, A&A 478, L5 Dingus B., 2001, Amer. Inst. of Phys. Conf. Ser., Vol. 558, 383 Feroci M. et al., 2007, NIM A 581, 728 Feroci M. et al., 2009, Submitted to A&A Fuschino F. et al., 2008, NIM A 588, 17 Gehrels N. et al., 2004, ApJ 611, 1005 Giuliani A. et al., 2006, NIM A 568, 692 Giuliani A. et al., 2008, A&A 491, L25 Gonzalez M. M. et al., 2003, Nature 424, 749 Grupe D., et al. 2009, GCN 9341 Guiriec S. et al., 2009, GCN 9336 Hoversten E. A. et al., 2009, GCN 9331 Hurley K. et al., 1994, Nature 372, 652 Kaneko K. et al., 2008, ApJ 667, 1168 Kouveliotou C. et al., 1994, ApJ 422, L59 Labanti C. et al., 2009, NIM A 598, 470 Lithwick Y. et al., 2001, ApJ 555, 540 Longo F.
et al., 2009, GCN 9343 Marisaldi M. et al., 2008, A&A 490, 1151 Ohmori N. et al., 2009, GCN 9335 Ohno M. et al., 2009, GCN 9334 Omodei N. et al., 2008, GCN 8407 Perotti F. et al., 2006, NIM A 556, 228 Rau A. et al., 2009, GCN 9353 Tavani M. et al., 2009, A&A, 502, 995 Ukwatta T. N. et al., 2009, GCN 9337 Golenetskii S. et al., 2009, GCN 9344 Yonetoku D. et al., 2004, ApJ 609 Zhang B. et al., 2006, ApJ 642, 354 Zhang B. et al., 2009, arXiv:0902.2419
--- abstract: 'In this paper, we prove that relation-extensions of quasi-tilted algebras are 2-Calabi-Yau tilted. With the objective of describing the module category of a cluster-tilted algebra of euclidean type, we define the notion of reflection so that any two local slices can be reached one from the other by a sequence of reflections and coreflections. We then give an algorithmic procedure for constructing the tubes of a cluster-tilted algebra of euclidean type. Our main result characterizes quasi-tilted algebras whose relation-extensions are cluster-tilted of euclidean type.' address: - 'Département de Mathématiques, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1' - 'Department of Mathematics, University of Connecticut, Storrs, CT 06269-3009, USA' - 'Department of Mathematics, University of California, Berkeley, CA 94720-3840, USA' author: - Ibrahim Assem - Ralf Schiffler - Khrystyna Serhiyenko title: 'Cluster-tilted and quasi-tilted algebras' --- [^1] Introduction ============ Cluster-tilted algebras were introduced by Buan, Marsh and Reiten [@BMR] and, independently in [@CCS] for type $\mathbb{A}$ as a byproduct of the now extensive theory of cluster algebras of Fomin and Zelevinsky [@FZ]. Since then, cluster-tilted algebras have been the subject of several investigations, see, for instance, [@ABCP; @ABS; @BFPPT; @BT; @BOW; @BMR2; @KR; @OS; @SS; @SS2]. In particular, in [@ABS] is given a construction procedure for cluster-tilted algebras: let $C$ be a triangular algebra of global dimension two over an algebraically closed field $k$, and consider the $C$-$C$-bimodule ${{\textup{Ext}}^2_C(DC,C)}$, where $D={\textup{Hom}}_k(-,k)$ is the standard duality, with its natural left and right $C$-actions. The trivial extension of $C$ by this bimodule is called the [*relation-extension*]{} ${\widetilde{C}}$ of $C$. It is shown there that, if $C$ is tilted, then its relation-extension is cluster-tilted, and every cluster-tilted algebra occurs in this way. 
Our purpose in this paper is to study the relation-extensions of a wider class of triangular algebras of global dimension two, namely the class of quasi-tilted algebras, introduced by Happel, Reiten and Smalø in [@HRS]. In general, the relation-extension of a quasi-tilted algebra is not cluster-tilted; however, it is 2-Calabi-Yau tilted, see Theorem \[thm 2.1\] below. We then look more closely at those cluster-tilted algebras which are tame and representation-infinite. According to [@BMR], these coincide exactly with the cluster-tilted algebras of euclidean type. We then ask the following question: Given a cluster-tilted algebra $B$ of euclidean type, find all quasi-tilted algebras $C$ such that $B={\widetilde{C}}$. A similar question has been asked (and answered) in [@ABS2], where, however, $C$ was assumed to be tilted. For this purpose, we generalize the notion of reflections of [@ABS4]. We prove that this operation allows us to produce all tilted algebras $C$ such that $B={\widetilde{C}}$, see Theorem \[thm mainreflection\]. In [@ABS4] this result was shown only for cluster-tilted algebras of tree type. We also prove that, unlike those of [@ABS4], reflections in the sense of the present paper are always defined, that the reflection of a tilted algebra is also tilted of the same type, and that they have the same relation-extension, see Theorem \[thm reflection\] and Proposition \[prop reflection\] below. Because all tilted algebras having a given cluster-tilted algebra as relation-extension are given by iterated reflections, this gives an algorithmic answer to our question above. After that, we look at the tubes of a cluster-tilted algebra of euclidean type and give a procedure for constructing those tubes which contain a projective, see Proposition \[prop5\]. We then return to quasi-tilted algebras in our last section, namely we define a particular two-sided ideal of a cluster-tilted algebra, which we call the partition ideal.
Our first result (Theorem \[thm ctaqt\]) shows that the quasi-tilted algebras which are not tilted but have a given cluster-tilted algebra $B$ of euclidean type as relation-extension are the quotients of $B$ by a partition ideal. We end the paper with the proof of our main result (Theorem \[thm ctaqt3\]) which says that if $C$ is quasi-tilted and such that $B={\widetilde{C}}$, then either $C$ is the quotient of $B$ by the annihilator of a local slice (and then $C$ is tilted) or it is the quotient of $B$ by a partition ideal (and then $C$ is not tilted except in two cases easy to characterize). Preliminaries ============= Notation -------- Throughout this paper, algebras are basic and connected finite dimensional algebras over a fixed algebraically closed field $k$. For an algebra $C$, we denote by $\text{mod}\,C$ the category of finitely generated right $C$-modules. All subcategories are full, and identified with their object classes. Given a category $\mathcal{C}$, we sometimes write $M\in\mathcal{C}$ to express that $M$ is an object in $\mathcal{C}$. If $\mathcal{C}$ is a full subcategory of $\text{mod}\,C$, we denote by $\text{add}\,\mathcal{C}$ the full subcategory of $\text{mod}\,C$ having as objects the finite direct sums of summands of modules in $\mathcal{C}$. For a point $x$ in the ordinary quiver of a given algebra $C$, we denote by $P(x)$, $I(x)$, $S(x)$ respectively, the indecomposable projective, injective and simple $C$-modules corresponding to $x$. We denote by $\Gamma(\text{mod}\,C)$ the Auslander-Reiten quiver of $C$ and by $\tau = D\text{Tr}, \tau^{-1} = \text{Tr} D$ the Auslander-Reiten translations. For further definitions and facts, we refer the reader to [@ARS; @ASS; @S]. Tilting ------- Let $Q$ be a finite connected and acyclic quiver. A module $T$ over the path algebra $kQ$ of $Q$ is called *tilting* if $\text{Ext}^1_{kQ}(T,T)=0$ and the number of isoclasses (isomorphism classes) of indecomposable summands of $T$ equals $|Q_0|$, see [@ASS]. 
An algebra $C$ is called *tilted of type $Q$* if there exists a tilting $kQ$-module $T$ such that $C=\text{End}_{kQ} T$. It is shown in [@Ri] that an algebra $C$ is tilted if and only if it contains a *complete slice* $\Sigma$, that is, a finite set of indecomposable modules such that - $\bigoplus_{U\in \Sigma} U$ is a sincere $C$-module. - If $U_0\to U_1 \to \dots \to U_t$ is a sequence of nonzero morphisms between indecomposable modules with $U_0,U_t\in\Sigma$ then $U_i\in\Sigma$ for all $i$ (*convexity*). - If $0\to L \to M \to N \to 0$ is an almost split sequence in $\text{mod}\,C$ and at least one indecomposable summand of $M$ lies in $\Sigma$, then exactly one of $L,N$ belongs to $\Sigma$. For more on tilting and tilted algebras, we refer the reader to [@ASS]. Tilting can also be done within the framework of a hereditary category. Let $\mathcal{H}$ be an abelian $k$-category which is Hom-finite, that is, such that, for all $X,Y\in\mathcal{H}$, the vector space $\text{Hom}_{\mathcal{H}}(X,Y)$ is finite dimensional. We say that $\mathcal{H}$ is *hereditary* if $\text{Ext}^2_{\mathcal{H}}(-, ?)=0$. An object $T\in\mathcal{H}$ is called a *tilting object* if $\text{Ext}^1_{\mathcal{H}}(T,T)=0$ and the number of isoclasses of indecomposable summands of $T$ equals the rank of the Grothendieck group $K_0(\mathcal{H})$. The endomorphism algebras of tilting objects in hereditary categories are called *quasi-tilted algebras*. For instance, tilted algebras but also canonical algebras (see [@Ri]) are quasi-tilted. Quasi-tilted algebras have attracted a lot of attention and played an important role in representation theory, see for instance [@HRS; @Sk]. Cluster-tilted algebras ----------------------- Let $Q$ be a finite, connected and acyclic quiver. The *cluster category* $\mathcal{C}_Q$ of $Q$ is defined as follows, see [@BMRRT].
Let $F$ denote the composition $\tau^{-1}_{\mathcal{D}}[1]$, where $\tau^{-1}_{\mathcal{D}}$ denotes the inverse Auslander-Reiten translation in the bounded derived category $\mathcal{D} = \mathcal{D}^b(\text{mod}\, kQ)$, and \[1\] denotes the shift of $\mathcal{D}$. Then $\mathcal{C}_Q$ is the orbit category $\mathcal{D}/F$: its objects are the $F$-orbits $\widetilde{X}=(F^i X)_{i\in\mathbb{Z}}$ of the objects $X\in\mathcal{D}$, and the space of morphisms from $\widetilde{X}=(F^i X)_{i\in\mathbb{Z}}$ to $\widetilde{Y}=(F^i Y)_{i\in\mathbb{Z}}$ is $$\text{Hom}_{\mathcal{C}_Q}(\widetilde{X}, \widetilde{Y}) = \bigoplus_{i\in\mathbb{Z}} \text{Hom}_{\mathcal{D}}(X, F^i Y).$$ Then $\mathcal{C}_Q$ is a triangulated category with almost split triangles and, moreover, for $\widetilde{X}, \widetilde{Y}\in\mathcal{C}_Q$ we have a bifunctorial isomorphism $\text{Ext}^1_{\mathcal{C}_Q}(\widetilde{X}, \widetilde{Y})\cong D\text{Ext}^1_{\mathcal{C}_Q}(\widetilde{Y},\widetilde{X})$. This is expressed by saying that the category $\mathcal{C}_Q$ is *2-Calabi-Yau*. An object $\widetilde{T}\in\mathcal{C}_Q$ is called *tilting* if $\text{Ext}^1_{\mathcal{C}_Q}(\widetilde{T}, \widetilde{T})=0$ and the number of isoclasses of indecomposable summands of $\widetilde{T}$ equals $|Q_0|$. The endomorphism algebra $B=\text{End}_{\mathcal{C}_Q} \widetilde{T}$ is then called *cluster-tilted* of type $Q$. More generally, the endomorphism algebra ${\textup{End}}_{\mathcal{C}}\widetilde{T}$ of a tilting object $\widetilde{T}$ in a $2$-Calabi-Yau category with finite dimensional Hom-spaces is called a [*2-Calabi-Yau tilted algebra*]{}, see [@Reiten]. Let now $T$ be a tilting $kQ$-module, and $C=\text{End}_{kQ} T$ the corresponding tilted algebra. Then it is shown in [@ABS] that the trivial extension $\widetilde{C}$ of $C$ by the $C$-$C$-bimodule $\text{Ext}^2_C (DC,C)$ with the two natural actions of $C$, the so-called *relation-extension* of $C$, is cluster-tilted. 
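For example, let $C$ be the tilted algebra of type $\mathbb{A}_3$ given by the quiver $$\xymatrix@C40pt{3\ar[r]^{\alpha}&2\ar[r]^{\beta}&1}$$ bound by ${\alpha}{\beta}=0$. The unique relation gives rise to one new arrow in the opposite direction, and the relation-extension $\widetilde{C}$ is the cluster-tilted algebra of type $\mathbb{A}_3$ given by the quiver $$\xymatrix@C40pt{3\ar[r]^{\alpha}&2\ar[r]^{\beta}&1\ar@/_20pt/[ll]_{\gamma}}$$ bound by ${\alpha}{\beta}=0$, ${\beta}{\gamma}=0$, ${\gamma}{\alpha}=0$.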
Conversely, if $B$ is cluster-tilted, then there exists a tilted algebra $C$ such that $B=\widetilde{C}$. Let now $B$ be a cluster-tilted algebra. Then a full subquiver $\Sigma$ of $\Gamma(\text{mod}\,B)$ is a *local slice*, see [@ABS2], if: - $\Sigma$ is a *presection*, that is, if $X\to Y$ is an arrow then: - $X\in\Sigma$ implies that either $Y\in\Sigma$ or $\tau Y \in \Sigma$ - $Y\in\Sigma$ implies that either $X\in \Sigma$ or $\tau^{-1} X\in\Sigma$. - $\Sigma$ is *sectionally convex*, that is, if $X=X_0\to X_1 \to \dots \to X_t = Y$ is a sectional path in $\Gamma(\text{mod}\,B)$ then $X,Y\in\Sigma$ implies that $X_i\in\Sigma$ for all $i$. - $|\Sigma_0| = \text{rk}\,K_0(B)$. Let $C$ be tilted. Then, under the standard embedding $\text{mod}\,C \to \text{mod}\,\widetilde{C}$, any complete slice in the tilted algebra $C$ embeds as a local slice in $\text{mod}\,\widetilde{C}$, and any local slice in $\textup{mod}\,{\widetilde{C}}$ occurs in this way. If $B$ is a cluster-tilted algebra, then a tilted algebra $C$ is such that $B=\widetilde{C}$ if and only if there exists a local slice $\Sigma$ in $\Gamma(\text{mod}\,B)$ such that $C=B/\text{Ann}_B \Sigma$, where $\text{Ann}_B \Sigma = \bigcap_{X\in\Sigma} \text{Ann}_B X$, see [@ABS2]. Let $\Sigma$ be a local slice in the transjective component of $\Gamma(\text{mod}\,B)$ having the property that all the sources in $\Sigma$ are injective $B$-modules. Then $\Sigma$ is called a *rightmost* slice of $B$. Let $x$ be a point in the quiver of $B$ such that $I(x)$ is an injective source of the rightmost slice $\Sigma$. Then $x$ is called a *strong sink*. *Leftmost slices* and *strong sources* are defined dually. From quasi-tilted to cluster-tilted algebras ============================================ We start with a motivating example.
Let $C$ be the tilted algebra of type $\widetilde{\mathbb{A}}$ given by the quiver $$\xymatrix@R5pt@C60pt {&2\ar[ld]_{\beta}\\1&&4\ar[lu]_{\alpha}\ar[ld]^{\gamma}\\&3\ar[lu]^{\delta}}$$ bound by ${\alpha}{\beta}=0$, ${\gamma}{\delta}=0$. Its relation-extension is the cluster-tilted algebra $B$ given by the quiver $$\xymatrix@R25pt@C60pt {&2\ar[ld]_{\beta}\\1\ar@<1.5pt>[rr]^{\lambda}\ar@<-1.5pt>[rr]_\mu&&4\ar[lu]_{\alpha}\ar[ld]^{\gamma}\\&3\ar[lu]^{\delta}}$$ bound by ${\alpha}{\beta}=0$, ${\beta}{\lambda}=0$, ${\lambda}{\alpha}=0$, ${\gamma}{\delta}=0$, ${\delta}\mu=0$, $\mu{\gamma}=0$. However, $B$ is also the relation-extension of the algebra $C'$ given by the quiver $$\xymatrix@R=30pt@C60pt {2&4\ar[l]_{\alpha}&1\ar@<1.5pt>[l]_{ ^{\lambda}}\ar@<-1.5pt>[l]^{\ \atop\mu}&3\ar[l]_{\delta}}$$ bound by ${\lambda}{\alpha}=0$, ${\delta}\mu=0$. This latter algebra $C'$ is not tilted, but it is quasi-tilted. In particular, it is triangular of global dimension two. Therefore, the question arises naturally whether the relation-extension of a quasi-tilted algebra is always cluster-tilted. This is certainly not true in general, for the relation-extension of a tubular algebra is not cluster-tilted. However, it is 2-Calabi-Yau tilted. In this section, we prove that the relation-extension of a quasi-tilted algebra is always 2-Calabi-Yau tilted. Let ${\mathcal{H}}$ be a hereditary category with tilting object $T$. Because of [@Happel], there exist an algebra $A$, which is hereditary or canonical, and a triangle equivalence $\Phi:{\mathcal{D}^b(\mathcal{H})}\to{\mathcal{D}^b(\textup{mod}\,A)}$. Let $T'$ denote the image of $T$ under this equivalence. Because $\Phi$ preserves the shift and the Auslander-Reiten translation, it induces an equivalence between the cluster categories ${\mathcal{C}}_{\mathcal{H}}$ and ${\mathcal{C}}_A$, see [@Amiot Section 4.1].
Indeed, because $A$ is canonical or hereditary, it follows that ${\mathcal{C}}_A\cong{\mathcal{D}^b(\textup{mod}\,A)}/F$, where $F=\tau^{-1}[1].$ Therefore, we have ${\textup{End}}_{{\mathcal{C}}_{\mathcal{H}}}T\cong{\textup{End}}_{{\mathcal{C}}_A} T'$. We say that a 2-Calabi-Yau tilted algebra ${\textup{End}}_{\mathcal{C}}T$ is of *canonical type* if the 2-Calabi-Yau category ${\mathcal{C}}$ is the cluster category of a canonical algebra. The proof of the next theorem follows closely [@ABS]. \[thm 2.1\] Let $C$ be a quasi-tilted algebra. Then its relation-extension $\widetilde{C}$ is cluster-tilted or it is 2-Calabi-Yau tilted of canonical type. Because $C$ is quasi-tilted, there exist a hereditary category ${\mathcal{H}}$ and a tilting object $T$ in ${\mathcal{H}}$ such that $C={\textup{End}}_{\mathcal{H}}T$. As observed above, there exist an algebra $A$, which is hereditary or canonical, and a triangle equivalence $\Phi:{\mathcal{D}^b(\mathcal{H})}\to{\mathcal{D}^b(\textup{mod}\,A)}$. Let $T'=\Phi(T)$. We have ${\mathcal{D}^b(\textup{mod}\,C)}\cong{\mathcal{D}^b(\textup{mod}\,A)}\cong{\mathcal{D}^b(\mathcal{H})}$, and therefore $$\begin{array} {rcl} {{\textup{Ext}}^2_C(DC,C)}&\cong & {\textup{Hom}}_{{\mathcal{D}^b(\textup{mod}\,C)}}(\tau C[1] , C[2]) \\ &\cong & {\textup{Hom}}_{{\mathcal{D}^b(\mathcal{H})}}(\tau T[1] , T[2]) \\ &\cong & {\textup{Hom}}_{{\mathcal{D}^b(\mathcal{H})}}( T , \tau^{-1} T[1]) \\ &\cong & {\textup{Hom}}_{{\mathcal{D}^b(\mathcal{H})}}( T , F T) .\\ \end{array}$$ Thus the additive structure of ${C\ltimes {{\textup{Ext}}^2_C(DC,C)}}$ is that of $$\begin{array} {rcl} C\oplus {{\textup{Ext}}^2_C(DC,C)}&\cong & {\textup{End}}_{{\mathcal{H}}}(T)\oplus {\textup{Hom}}_{{\mathcal{D}^b(\mathcal{H})}}(T,FT)\\ &\cong & \oplus_{i\in{\mathbb{Z}}}{\textup{Hom}}_{{\mathcal{D}^b(\mathcal{H})}}(T,F^iT)\\ &\cong & {\textup{Hom}}_{{\mathcal{C}}_{\mathcal{H}}}(T,T)\\ &\cong &{\textup{End}}_{{\mathcal{C}}_{\mathcal{H}}} T.
\end{array}$$ Then, we check exactly as in [@ABS Section 3.3] that the multiplicative structure is preserved. This completes the proof. Let $C$ be a representation-infinite quasi-tilted algebra. Then $C$ is derived equivalent to a hereditary or a canonical algebra $A$. Let $n_A$ denote the tubular type of $A$. We then say that $C$ has canonical type $n_C=n_A$. \[lem 1\] Let $C$ be a representation-infinite quasi-tilted algebra. Then its relation-extension ${\widetilde{C}}$ is cluster-tilted of euclidean type if and only if $n_C$ is one of $$(p,q),(2,2,r),(2,3,3),(2,3,4),(2,3,5), \textup{ with $p\le q$, $2\le r.$}$$ Indeed, ${\widetilde{C}}$ is cluster-tilted of euclidean type if and only if $C$ is derived equivalent to a tilted algebra of euclidean type, and this is the case if and only if $n_C$ belongs to the above list. \[rem 2\] It is possible that $C$ is domestic, and yet ${\widetilde{C}}$ is wild. Indeed, we modify the example after Corollary D in [@Sk]. Recall from [@Sk] that there exists a tame concealed full convex subcategory $K$ such that $C$ is a semiregular branch enlargement of $K$ $$C=[E_i]K[F_j],$$ where $E_i, F_j$ are (truncated) branches. Then the representation theory of $C$ is determined by those of $C^-=[E_i]K$ and $C^+=K[F_j]$. Let $C$ be given by the quiver $$\xymatrix@R20pt@C40pt{ 1&&&&6\ar[dl]_{\delta}&11\ar[l]_\zeta\\ &3\ar[ul]_{\alpha}\ar[dl]^{\beta}&4\ar[l]_{\gamma}\ar[d]^\nu&5\ar[l]_{\sigma}\\ 2&&8\ar[d]^\varphi &9\ar[l]_\omega&7\ar[ul]_\rho \\&&10 }$$ bound by the relations ${\sigma}\nu=0$, $\omega\varphi=0$, $\zeta{\delta}{\sigma}{\gamma}{\beta}=0$. Here $C^-$ is the full subcategory generated by $C_0\setminus\{11\}$ and $C^+$ the one generated by $C_0\setminus\{8,9,10\}$. Then $C^-$ has domestic tubular type $(2,2,7)$ and $C^+$ has domestic tubular type $(2,3,4)$. Therefore $C$ is domestic. On the other hand, the canonical type of $C$ is $(2,3,7)$, which is wild.
In this example, the 2-Calabi-Yau tilted algebra ${\widetilde{C}}$ is not cluster-tilted, because it is not of euclidean type, but the derived category of $\textup{mod}\,C$ contains tubes, see [@Ringel; @Durham]. There clearly exist algebras which are not quasi-tilted but whose relation-extension is cluster-tilted of euclidean type. Indeed, let $C$ be given by the quiver $$\xymatrix@C40pt{6\ar[r]^{\alpha}&5\ar[r]^{\beta}&4\ar[r]^{\gamma}&3\ar[r]^{\delta}&2\ar@<-2pt>[r]_\mu\ar@<2pt>[r]^{\lambda}&1}$$ bound by ${\alpha}{\beta}=0,{\delta}{\lambda}=0$. Then $C$ is iterated tilted of type $\widetilde{\mathbb{A}}$ of global dimension 2, see [@FPT]. Its relation-extension is given by $$\xymatrix@C40pt{6\ar[r]^{\alpha}&5\ar[r]^{\beta}&4\ar@/_25pt/[ll]_{\sigma}\ar[r]^{\gamma}&3\ar[r]^{\delta}&2\ar@<-2pt>[r]_\mu\ar@<2pt>[r]^{\lambda}&1\ar@/_25pt/[ll]_\eta}$$ bound by ${\alpha}{\beta}=0,{\beta}{\sigma}=0,{\sigma}{\alpha}=0,{\delta}{\lambda}=0,{\lambda}\eta=0,\eta{\delta}=0$. This algebra is isomorphic to the relation-extension of the tilted algebra of type $\widetilde{\mathbb{A}}$ given by the quiver $$\xymatrix@R20pt@C40pt{6\\ &4\ar[lu]^{\sigma}\ar[r]^{\gamma}&3\ar[r]^{\delta}&2\ar@<-2pt>[r]_\mu\ar@<2pt>[r]^{\lambda}&1\\ 5\ar[ru]^{\beta}}$$ bound by ${\beta}{\sigma}=0$, ${\delta}{\lambda}=0$. Therefore ${\widetilde{C}}$ is cluster-tilted of euclidean type. On the other hand, $C$ is not quasi-tilted, because the uniserial module $\begin{smallmatrix}4\\3\end{smallmatrix}$ has both projective and injective dimension 2. Reflections =========== Let $C$ be a tilted algebra. Let ${\Sigma}$ be a rightmost slice, and let $I(x)$ be an injective source of ${\Sigma}$. Thus $x$ is a strong sink in $C$. We define the *completion $H_x$ of $x$* by the following three conditions. - $I(x)\in H_x$. - $H_x$ is closed under predecessors in ${\Sigma}$. - If $L\to M$ is an arrow in ${\Sigma}$ with $L\in H_x$ having an injective successor in $H_x$ then $M\in H_x$. 
Observe that $H_x$ may be constructed inductively in the following way. We let $H_1=\{I(x)\}$, and $H_2'$ be the closure of $H_1$ with respect to (c) (that is, we simply add the direct successors of $I(x)$ in ${\Sigma}$, and if a direct successor of $I(x)$ is injective, we also take its direct successor, etc.) We then let $H_2$ be the closure of $H_2'$ with respect to predecessors in ${\Sigma}$. Then we repeat the procedure; given $H_i$, we let $H_{i+1}'$ be the closure of $H_i$ with respect to (c) and $H_{i+1}$ be the closure of $H_{i+1}'$ with respect to predecessors. This procedure must stabilize, because the slice ${\Sigma}$ is finite. If $H_j=H_k$ with $k>j$, we let $H_x=H_j$. We can decompose $H_x$ as the disjoint union of three sets as follows. Let ${\mathcal{J}}$ denote the set of injectives in $H_x$, let ${\mathcal{J}}^-$ be the set of non-injectives in $H_x$ which have an injective successor in $H_x$, and let ${\mathcal{E}}=H_x\setminus({\mathcal{J}}\cup{\mathcal{J}}^-)$ denote the complement of $({\mathcal{J}}\cup{\mathcal{J}}^-)$ in $H_x$. Thus $$H_x={\mathcal{J}}\sqcup{\mathcal{J}}^-\sqcup{\mathcal{E}}$$ is a disjoint union. \[rem H\] If ${\mathcal{J}}^-=\emptyset$ then $H_x$ reduces to the completion $G_x$ as defined in [@ABS4]. Recall that $G_x$ does not always exist, but, as seen above, $H_x$ does. Conversely, if $G_x$ exists, then it follows from its construction in [@ABS4] that ${\mathcal{J}}^-=\emptyset$. Thus ${\mathcal{J}}^-=\emptyset$ if and only if $G_x$ exists, and, in this case, $G_x=H_x$. For every module $M$ over a cluster-tilted algebra $B$, we can consider a lift $\widetilde M$ in the cluster category ${\mathcal{C}}$. Abusing notation, we sometimes write $\tau^i M$ to denote the image of $\tau^i_{\mathcal{C}}\widetilde M$ in $\textup{mod}\,B$, and say that the Auslander-Reiten translation is computed in the cluster category. Let $x$ be a strong sink in $C$ and let ${\Sigma}$ be a rightmost local slice with injective source $I(x)$.
Recall that ${\Sigma}$ is also a local slice in $\textup{mod}\,B$. Then the reflection of the slice ${\Sigma}$ in $x$ is $${\sigma}_x^+{\Sigma}=\tau^{-2}({\mathcal{J}}\cup{\mathcal{J}}^-)\cup\tau^{-1}{\mathcal{E}}\cup({\Sigma}\setminus H_x),$$ where $\tau$ is computed in the cluster category. In a similar way, one defines the coreflection ${\sigma}^-_y$ of leftmost slices with projective sink $P_C(y)$. \[thm reflection\] Let $x$ be a strong sink in $C$ and let ${\Sigma}$ be a rightmost local slice in $\textup{mod}\,B$ with injective source $I(x)$. Then the reflection ${\sigma}_x^+{\Sigma}$ is a local slice as well. Set ${\Sigma}'={\sigma}_x^+{\Sigma}$ and $${\Sigma}''=\tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)\cup\tau^{-1}{\mathcal{E}}\cup({\Sigma}\setminus H_x)=\tau^{-1}H_x\cup({\Sigma}\setminus H_x),$$ where again, ${\Sigma}''$ and $\tau$ are computed in the cluster category ${\mathcal{C}}$. We claim that ${\Sigma}''$ is a local slice in ${\mathcal{C}}$. Notice that, since $H_x$ is closed under predecessors in ${\Sigma}$, if $X\in{\Sigma}\setminus H_x$ is a neighbor of $Y\in H_x$, then we must have an arrow $Y\to X$ in ${\Sigma}$. This observation being made, ${\Sigma}''$ is clearly obtained from ${\Sigma}$ by applying a sequence of APR-tilts. Thus ${\Sigma}''$ is a local slice in ${\mathcal{C}}$. We now claim that $\tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)$ is closed under predecessors in ${\Sigma}''$. Indeed, let $X\in\tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)$ and $Y\in {\Sigma}''$ be such that we have an arrow $Y\to X$. Then, there exists an arrow $\tau X\to Y$ in the cluster category. Because $X\in \tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)$, we have $\tau X\in {\mathcal{J}}\cup{\mathcal{J}}^-$. Now if $Y\in {\Sigma}$, then the arrow $\tau X\to Y$ would imply that $Y\in H_x$, which is impossible, because $Y\in {\Sigma}''$ and ${\Sigma}''\cap H_x=\emptyset$. Thus $Y\notin {\Sigma}$, and therefore $Y\in( {\Sigma}''\setminus{\Sigma})=\tau^{-1}H_x$.
Hence $\tau Y\in H_x$. Moreover, there is an arrow $\tau Y\to \tau X$. Using that $\tau X\in {\mathcal{J}}\cup{\mathcal{J}}^-$, this implies that $\tau Y$ has an injective successor in $H_x$ and thus $Y\in \tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)$. This establishes our claim that $\tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-)$ is closed under predecessors in ${\Sigma}''$. Thus applying the same reasoning as before, we get that $${\Sigma}'=({\Sigma}''\setminus \tau^{-1}({\mathcal{J}}\cup{\mathcal{J}}^-))\cup\tau^{-2}({\mathcal{J}}\cup{\mathcal{J}}^-)$$ is a local slice in ${\mathcal{C}}$. Now we claim that $${\Sigma}'\cap {\textup{add}}(\tau T)=\emptyset.$$ First, because ${\Sigma}\cap {\textup{add}}(\tau T)=\emptyset$, we have $({\Sigma}\setminus H_x)\cap {\textup{add}}(\tau T) =\emptyset$. Next, ${\mathcal{E}}$ contains no injectives, by definition. Thus $\tau^{-1}{\mathcal{E}}\cap {\textup{add}}(\tau T)=\emptyset.$ Assume now that $X\in {\textup{add}}(\tau T)$ belongs to $\tau^{-2}{\mathcal{J}}^-$. Then $\tau^2 X\in H_x$ and there exists an injective predecessor $I(j)$ of $\tau^2 X$ in $H_x$, and since $H_x$ is part of the local slice ${\Sigma}$, there exists a sectional path from $I(j)$ to $\tau^2 X$. Applying $\tau^{-2}$, we get a sectional path from $T_j$ to $X$ in the cluster category. But this means ${\textup{Hom}}_{\mathcal{C}}(T_j,X)\ne 0$, contradicting the hypothesis that $X\in {\textup{add}}(\tau T)$. Finally, if $X\in \tau^{-2}{\mathcal{J}}$ then $X$ is a summand of $T$, which again contradicts the hypothesis that $X\in {\textup{add}}(\tau T)$. Following [@ABS4], let ${\mathcal{S}}_x$ be the full subcategory of $C$ consisting of those $y$ such that $I(y)\in H_x$. \[lem S\] - ${\mathcal{S}}_x$ is hereditary. - ${\mathcal{S}}_x$ is closed under successors in $C$. - $C$ can be written in the form $$C= \left[ \begin{array} {cc} H&0\\M&C' \end{array}\right] ,$$ where $H$ is hereditary, $C'$ is tilted and $M$ is a $C'$-$H$-bimodule.
(a) Let $H={\textup{End}}(\oplus_{y\in{\mathcal{S}}_x}I(y)).$ Then $H$ is a full subcategory of the hereditary endomorphism algebra of ${\Sigma}$. Therefore $H$ is also hereditary, and so ${\mathcal{S}}_x$ is hereditary. (b) Let $y\in {\mathcal{S}}_x$ and $y\to z$ in $C$. Then there exists a morphism $I(z)\to I(y)$. Because $I(z)$ is an injective $C$-module and ${\Sigma}$ is sincere, there exist a module $N\in {\Sigma}$ and a non-zero morphism $N\to I(z)$. Then we have a path $N\to I(z)\to I(y)$, and since $N,I(y)\in {\Sigma}$, we get that $I(z)\in {\Sigma}$ by convexity of the slice ${\Sigma}$ in $\textup{mod}\,C$. Moreover, since $I(y)\in H_x$ and $H_x$ is closed under predecessors in ${\Sigma}$, it follows that $I(z)\in H_x$. Thus $z\in {\mathcal{S}}_x$ and this shows (b). (c) This follows from (a) and (b). We recall that the cluster duplicated algebra was introduced in [@ABS3]. \[cor dup\] The cluster duplicated algebra $\overline{C}$ of $C$ is of the form $$\overline{C}= \left[ \begin{array} {cccc} H&0&0&0\\ M&C'&0&0\\ 0&E_0&H&0\\ 0&E_1&M&C' \end{array} \right]$$ where $E_0={\textup{Ext}}^2_C(DC',H)$ and $E_1={\textup{Ext}}^2_C(DC',C')$. We start by writing $C$ in the matrix form of the lemma. By definition, $H$ consists of those $y\in C_0$ such that the corresponding injective $I(y)$ lies in $ H_x$ inside the slice ${\Sigma}$. In particular, the projective dimension of these injectives is at most 1, hence ${{\textup{Ext}}^2_C(DC,C)}={\textup{Ext}}^2_C(DC',C)$. The result now follows upon multiplying by idempotents. Let $x$ be a strong sink in $C$. The reflection at $x$ of the algebra $C$ is $${\sigma}_x^+ C= \left[ \begin{array} {cc}C'&0\\E_0&H \end{array}\right]$$ where $E_0={\textup{Ext}}^2_C(DC',H)$. \[prop reflection\] The reflection ${\sigma}_x^+C$ of $C$ is a tilted algebra having ${\sigma}_x^+{\Sigma}$ as a complete slice. Moreover the relation-extensions of $C$ and ${\sigma}_x^+C$ are isomorphic.
We first claim that the support $\textup{supp}({\sigma}_x^+{\Sigma})$ of ${\sigma}_x^+{\Sigma}$ is contained in $ {\sigma}_x^+C$. Let $X\in{\sigma}_x^+{\Sigma}$. Recall that ${\sigma}_x^+{\Sigma}=\tau^{-2}({\mathcal{J}}\cup{\mathcal{J}}^-)\cup\tau^{-1}{\mathcal{E}}\cup({\Sigma}\setminus H_x)$. If $X\in \tau^{-2}{\mathcal{J}}$, then $X=P(y')$ is projective corresponding to a point $y'\in H$. Thus $I(y')\in H_x$ and the radical of $P(y')$ has no non-zero morphism into $I(y')$. Therefore $\textup{supp}(X)\subset {\sigma}^+_xC$. Assume next that $X\in \tau^{-2}{\mathcal{J}}^-$, that is, $X=\tau^{-2}Y$, where $Y\in {\mathcal{J}}^-$ has an injective successor $I(z)$ in $H_x$. Because all sources in ${\Sigma}$ are injective, there is an injective $I(y') \in {\Sigma}$ and a sectional path $I(y')\to\ldots\to Y\to \ldots \to I(z)$. Applying $\tau^{-2}$, we obtain a sectional path $P(y')\to\ldots\to X\to \ldots \to P(z)$. In particular the point $y'$ belongs to the support of $X$. Assume that there is a point $h$ in $H$ that is in the support of $X$. Then there exists a nonzero morphism $X\to I(h)$. But $I(h)\in {\Sigma}$ and there is no morphism from $X\in \tau^{-2}{\Sigma}$ to ${\Sigma}$. Therefore $\textup{supp}(X)\subset {\sigma}^+_xC$. By the same argument, we show that if $X\in \tau^{-1}{\mathcal{E}}$, then $\textup{supp}(X)\subset {\sigma}^+_xC$. Finally, all modules of ${\Sigma}\setminus H_x$ are supported in $C'$. This establishes our claim. Now, by Theorem \[thm reflection\], $ {\sigma}^+_x{\Sigma}$ is a local slice in $\textup{mod}\, {\widetilde{C}}.$ Therefore ${\widetilde{C}}/{\textup{Ann}}\,{\sigma}^+_x{\Sigma}$ is a tilted algebra in which ${\sigma}^+_x{\Sigma}$ is a complete slice. Since the support of ${\sigma}^+_x{\Sigma}$ is the same as the support of ${\sigma}^+_x C$, we are done.
We now come to the main result of this section, which states that any two tilted algebras that have the same relation-extension are linked to each other by a sequence of reflections and coreflections. Let $B$ be a cluster-tilted algebra and let $\Sigma$ and $\Sigma'$ be two local slices in $\text{mod}\,B$. We write $\Sigma \sim \Sigma'$ whenever $B/{\textup{Ann}}\,\Sigma = B/{\textup{Ann}}\,\Sigma'$. \[lem 310\] Let $B$ be a cluster-tilted algebra, and ${\Sigma}_1, {\Sigma}_2 $ be two local slices in $\textup{mod}\,B$. Then there exists a sequence of reflections and coreflections $\sigma$ such that $$\sigma \Sigma_1\sim \Sigma_2.$$ Given a local slice $\Sigma$ in $\text{mod}\,B$ such that $\Sigma$ has injective successors in the transjective component $\mathcal{T}$ of $\Gamma(\text{mod}\,B)$, let $\Sigma^+$ be the rightmost local slice such that $\Sigma\sim \Sigma^+$. Then $\Sigma^+$ contains a strong sink $x$, thus reflecting in $x$ we obtain a local slice $\sigma^+_{x}\Sigma^+$ that has fewer injective successors in $\mathcal{T}$ than $\Sigma$. To simplify the notation we define $\sigma^+_x\Sigma = \sigma^+_{x}\Sigma^+$. Similarly, we define $\sigma^-_y\Sigma=\sigma^-_y\Sigma^-$, where $\Sigma^-$ is the leftmost local slice containing a strong source $y$ and $\Sigma\sim\Sigma^-$. Since we can always reflect in a strong sink, there exist sequences of reflections such that $$\sigma^+_{x_r} \cdots \sigma^+_{x_2}\sigma^+_{x_1} \Sigma_1 = \Sigma^1_{\infty}$$ $$\sigma^+_{y_s} \cdots \sigma^+_{y_2}\sigma^+_{y_1} \Sigma_2 = \Sigma^2_{\infty}$$ and $\Sigma^1_{\infty}, \Sigma^2_{\infty}$ have no injective successors in $\mathcal{T}$. This implies that $\Sigma^1_{\infty}\sim\Sigma^2_{\infty}$. Let $$\sigma = \sigma^-_{y_1} \sigma^-_{y_{2}}\cdots \sigma^-_{y_s}\sigma^+_{x_r} \cdots \sigma^+_{x_2}\sigma^+_{x_1}$$ thus $\sigma\Sigma_1\sim \Sigma_2$. \[thm mainreflection\] Let $C_1$ and $C_2$ be two tilted algebras that have the same relation-extension. 
Then there exists a sequence of reflections and coreflections $\sigma$ such that $\sigma C_1 \cong C_2$. Let $B$ be the common relation-extension of the tilted algebras $C_1$ and $C_2$. By [@ABS2], there exist local slices ${\Sigma}_i$ in $\textup{mod}\,B$ such that $C_i=B/{\textup{Ann}}\,{\Sigma}_i$, for $i=1,2$. Now the result follows from Lemma \[lem 310\] and Theorem \[thm reflection\]. Let $A$ be the path algebra of the quiver $$\xymatrix@R2pt@C10pt{ & & 1\ar@/_10pt/[lldd]\ar[ld]\\ & 2\ar[ld] \\ 3\\ &4\ar[lu]\\ &&5\ar@<1pt>[lu]\ar@<-1pt>[lu]\ar[ld]\\ &6 }$$ Mutating at the vertices 4,5, and 2 yields the cluster-tilted algebra $B$ with quiver $$\xymatrix@R2pt@C10pt{ & & 1\ar@<0pt>@/_10pt/[lldd]\ar@<-2pt>@/_10pt/[lldd]\\ & 2\ar[ur] \\ 3\ar[ur]\ar@<0pt>@/^15pt/[rrdd]\ar@<-2pt>@/^15pt/[rrdd]\\ &4\ar@<2pt>[lu]\ar@<-2pt>[lu]\ar[ul]\ar@<1pt>[dd]\ar@<-1pt>[dd]\\ &&5\ar@<1pt>[lu]\ar@<-1pt>[lu]\\ &6\ar[ur] }$$ In the Auslander-Reiten quiver of $\textup{mod}\,B$ we have the following local configuration. $$\xymatrix@!@R0pt@C1pt{ && I(1)\ar[rd]&& \circ&& P(1) \\ & 1\ar[ru]&& 2\ar[rd] && {\begin{smallmatrix} 3\\ 5\,5\\4\end{smallmatrix}}\ar[ru] \\ I(3)\ar[ru]\ar@/^15pt/[rruu] \ar[rd]&& \circ&& P(3)\ar[ru]\ar@/^15pt/[rruu]\\ & {\begin{smallmatrix} 5555\\444 \end{smallmatrix}}\ar@<1pt>[rd]\ar@<-1pt>[rd] && {\begin{smallmatrix} 55\\4 \end{smallmatrix}}\ar@<1pt>[rd]\ar@<-1pt>[rd] \ar[ru]\\ && {\begin{smallmatrix} 555\\44 \end{smallmatrix}}\ar@<1pt>[ru]\ar@<-1pt>[ru] && 5 \ar[rd] \\ &I(6) \ar[ru] &&\circ && P(6) }$$ where $$\begin{array}{ccc} I(1)= {\begin{smallmatrix} 2\\1\end{smallmatrix}} & I(3)= {\begin{smallmatrix} 2\ \ 5555\\11\ 444 \\ 3\end{smallmatrix}} & I(6)= \begin{smallmatrix} 555\\44\\6\end{smallmatrix} \end{array}$$ The 6 modules on the left form a rightmost local slice ${\Sigma}$ in which both $I(3)$ and $I(6)$ are sources, so 3 and 6 are strong sinks. For both strong sinks the subset ${\mathcal{J}}^-$ of the completion consists of the simple module $1$. 
The simple module $2=\tau^{-1}1$ does not lie on a local slice. The completion $H_6$ is the whole local slice ${\Sigma}$ and therefore the reflection ${\sigma}_6^+{\Sigma}$ is the local slice consisting of the 6 modules on the right containing both $P(1)$ and $P(6)$. On the other hand, the completion $H_3$ consists of the four modules $I(3)$, $S(1)$, $I(1) $ and ${\begin{smallmatrix} 5555\\444 \end{smallmatrix}}$, and therefore the reflection ${\Sigma}'={\sigma}_3^+{\Sigma}$ is the local slice consisting of the 6 modules on the straight line from $I(6)$ to $P(1)$. This local slice admits the strong sink $6$ and the completion $H'_6$ in ${\Sigma}'$ consists of the two modules $I(6)$ and ${\begin{smallmatrix} 555\\44 \end{smallmatrix}}$. Therefore the reflection ${\sigma}_6^+{\Sigma}'$ is equal to ${\sigma}_6^+{\Sigma}$. Thus $${\sigma}_6^+{\Sigma}= {\sigma}_6^+({\sigma}_3^+{\Sigma}).$$ This example raises the question of which indecomposable modules over a cluster-tilted algebra do not lie on a local slice. We answer this question in a forthcoming publication [@ASS2]. Tubes ===== The objective of this section is to show how to construct those tubes of a tame cluster-tilted algebra which contain projectives. Let $B$ be a cluster-tilted algebra of euclidean type, and let $\mathcal{T}$ be a tube in $\Gamma(\text{mod}\,B)$ containing at least one projective. First, consider the transjective component of $\Gamma(\text{mod}\,B)$. Denote by $\Sigma_L$ a local slice in the transjective component that precedes all indecomposable injective $B$-modules lying in the transjective component. Then $B/ \text{Ann}_B \Sigma_L=C_1$ is a tilted algebra having a complete slice in the preinjective component. Define $\Sigma_R$ to be a local slice which is a successor of all indecomposable projectives lying in the transjective component. Then $B/\text{Ann}_B \Sigma_R=C_2$ is a tilted algebra having a complete slice in the postprojective component.
Also, $C_1$ (respectively, $C_2)$ has a tube $\mathcal{T}_1$ (respectively, $\mathcal{T}_2$) containing the indecomposable projective $C_1$-modules (respectively, injective $C_2$-modules) corresponding to the projective $B$-modules in $\mathcal{T}$ (respectively, injective $B$-modules in $\mathcal{T}$). An indecomposable projective $P(x)$ (respectively, injective $I(x)$) $B$-module that lies in a tube is said to be a *root projective* (respectively, a *root injective*) if there exists an arrow in $B$ between $x$ and $y$, where the corresponding indecomposable projective $P(y)$ lies in the transjective component of $\Gamma(\text{mod}\,B)$. Let $\mathcal{S}_1$ be the coray in $\mathcal{T}_1$ passing through the projective $C_1$-module that corresponds to the root projective $P_B(i)$ in $\mathcal{T}$. Similarly, let $\mathcal{S}_2$ be the ray in $\mathcal{T}_2$ passing through the injective that corresponds to the root injective $I_B(i)$ in $\mathcal{T}$. Recall that if $A$ is hereditary and $T\in\text{mod}\,A$ is a tilting module, then there exists an associated torsion pair $({\mathscr{T}(T)}, {\mathscr{F}(T)})$ in $\text{mod}\,A$, where $$\xymatrix@R=5pt{{\mathscr{T}(T)}=\{M\in\text{mod}\,A\mid \text{Ext}^1_A(T,M)=0\} \\ {\mathscr{F}(T)}= \{M\in \text{mod}\,A\mid \text{Hom}_A(T,M)=0\}.}$$ \[lem tube1\] With the above notation - $\mathcal{S}_1 \otimes _{C_{1}} B$ is a coray in $\mathcal{T}$ passing through $P_B(i)$. - $\textup{Hom}_{C_2}(B, \mathcal{S}_2)$ is a ray in $\mathcal{T}$ passing through $I_B(i)$. Since $C_1$ is tilted, we have $C_1=\text{End}_A T$ where $T$ is a tilting module over a hereditary algebra $A$. As seen in the proof of Theorem 5.1 in [@SS], we have a commutative diagram $$\xymatrix@C=60pt{{\mathscr{T}(T)}\ar@{^{(}->}[d]\ar[r]^{\text{Hom}_A(T,-)}&\mathcal{Y}(T)\ar[d]^{-\otimes_{C_1} B}\\ \mathcal{C}_A\ar[r]^{\text{Hom}_{\mathcal{C}_A}(T,-)}&\text{mod}\,B\;\;}$$ where $\mathcal{Y}(T)=\{N\in\text{mod}\,C_1\mid \text{Tor}_1^{C_1}(N,T)=0\}$.
Let $\mathcal{T}_A$ be the tube in $\text{mod}\,A$ corresponding to the tube $\mathcal{T}$ in $\text{mod}\,B$. By what has been seen above, we have a commutative diagram $$\xymatrix@C=60pt{\mathcal{T}_A \cap {\mathscr{T}(T)}\ar[r]^{\text{Hom}_A(T,-)}\ar[dr]_-{\text{Hom}_{\mathcal{C}_A}(T,-)\;\;\;\;\;}&\mathcal{T}_1\ar[d]^{-\otimes_{C_1} B}\\ & \mathcal{T}_1\otimes_{C_1} B \subset \mathcal{T}\;\;.}$$ Let $\mathcal{S}$ be any coray in $\mathcal{T}_1$, so it can be lifted to a coray $\mathcal{S}_{A}$ in $\mathcal{T}_A\cap {\mathscr{T}(T)}$ via the functor $\text{Hom}_A(T,-)$. If we apply $\text{Hom}_{\mathcal{C}_A}(T,-)$ to this lift, we obtain a coray in $\mathcal{T}_1\otimes_{C_1} B$. Thus, any coray in $\mathcal{T}_1$ induces a coray in $\mathcal{T}$. Let $\mathcal{S}_1$ be the coray passing through the root projective $P_{C_1}(i)$. Then $\mathcal{S}_1\otimes_{C_1} B$ is the coray passing through $P_{C_1}(i)\otimes_{C_1} B = P_B(i)$. This proves (a) and part (b) is proved dually. However, we must still justify that the coray $\mathcal{S}_1\otimes_{C_1} B$ and the ray $\text{Hom}_{C_2}(B, \mathcal{S}_2)$ actually intersect (and thus lie in the same tube of $\Gamma(\text{mod}\,B)$). Because $P_{C_1}(i)\in\mathcal{S}_1$, we have $P_{C_1}(i)\otimes B \cong P_B(i)\in \mathcal{S}_1\otimes_{C_1} B$, and $P_B(i)$ lies in a tube $\mathcal{T}$. It is well-known that the injective $I_B(i)$ also lies in $\mathcal{T}$. In particular, we have the following local configuration in $\mathcal{T}$, where $R$ is an indecomposable summand of the radical of $P_B(i)$ and $J$ an indecomposable summand of the quotient of $I_B(i)$ by its socle. $$\xymatrix@!C=5pt@R=5pt{I_B(i)\ar[dr]&& \circ \ar@{-->}[dr]&& P_B(i)\\ &J\ar[dr]\ar@{-->}[ur]&&R\ar[ur]\\ &&N\ar[ur]}$$ Now $I_B(i) =\text{Hom}_{C_2} (B, I_{C_2}(i))$ is coinduced, and we have shown above that the ray containing it is also coinduced. Because $I_{C_2}(i)\in\mathcal{S}_2$, this is the ray $\text{Hom}_{C_2}(B, \mathcal{S}_2)$.
Therefore, this ray and this coray lie in the same tube, so must intersect in a module $N$, where there exists an almost split sequence $$\xymatrix{0\ar[r]&J\ar[r]& N \ar[r] & R \ar[r] & 0.}$$ Knowing the ray $\text{Hom}_{C_2}(B, \mathcal{S}_2)$ and the coray $\mathcal{S}_1\otimes_{C_1} B$ for every root projective $P_B(i)$ in $\mathcal{T}$, one may apply the knitting procedure to construct the whole of $\mathcal{T}$. In this way, $\mathcal{T}$ can be determined completely. Next we show that all modules over a tilted algebra lying on the same coray change in the same way under the induction functor. \[lem tube2\] Let $A$ be a hereditary algebra of euclidean type, $T$ a tilting $A$-module without preinjective summands and let $C=\textup{End}_A T$ be the corresponding tilted algebra. Let $\mathcal{T}_A$ be a tube in $\textup{mod}\,A$ and $T_i\in\mathcal{T}_A$ an indecomposable summand of $T$, such that $\textup{pd}\, I_C(i)=2$. Then there exists an $A$-module $M$ on the mouth of $\mathcal{T}_A$ such that we have $$\tau_C\Omega_C I_C(i) = \textup{Hom}_A (T,M)$$ in $\textup{mod}\,C$. In particular, the module $\tau_C\Omega_C I_C(i)$ lies on the mouth of the tube $\textup{Hom}_A(T, \mathcal{T}_A\cap {\mathscr{T}(T)})$ in $\textup{mod}\,C$. The injective $C$-module $I_C(i)$ is given by $$I_C(i) \cong \text{Ext}_A^1(T, \tau T_i) \cong D\text{Hom}_A(T_i, T),$$ where the first identity holds by [@ASS Proposition VI 5.8] and the second identity is the Auslander-Reiten formula. Moreover, since $T_i$ lies in the tube $\mathcal{T}_A$ and $T$ has no preinjective summands, we have $\text{Hom}(T_i, T_j) \not=0$ only if $T_j$ lies in the hammock starting at $T_i$. Furthermore, if $T_j$ is a summand of $T$ then it must lie on a sectional path starting from $T_i$ because $\text{Ext}^1(T_j, T_i)=0$. This shows that a point $j$ is in the support of $I_C(i)$ if and only if there is a sectional path $T_i\to \dots \to T_j$ in $\mathcal{T}_A$. We shall distinguish two cases. Case 1. 
If $T_i$ lies on the mouth of $\mathcal{T}_A$ then let $\omega$ be the ray starting at $T_i$ and denote by $T_1$ the last summand of $T$ on this ray. Let $L_1$ be the direct predecessor of $T_1$ not on the ray $\omega$. Thus we have the following local configuration in $\mathcal{T}_A$. $$\xymatrix@!C=5pt@!R=5pt{\ar[dr]&&\tau T_i \ar[dr] && T_i \ar[dr]\\ &\ar[ur]\ar[dr]&&\ar[ur]\ar[dr]&&\ar@{..}[dr]\\ &&\ar[ur]\ar@{..}[dr]&&\ar@{..}[dr]&&\ar[dr]\\ &&&\ar[dr]&&\tau T_1\ar[dr]\ar[ur]&&T_1\ar[dr]&\\ &&&&\tau L_1\ar[ur]\ar[dr]&&L_1\ar[ur]&&\tau^{-1} L_1 \\ &&&&&E_1\ar[ur]}$$ Then $I_C(i)$ is uniserial with simple top $S(1)$. Moreover there is a short exact sequence $$\xymatrix{0\ar[r]&\tau T_i\ar[r]& L_1\ar[r] & T_1\ar[r] & 0}$$ and applying $\text{Hom}_A(T, -)$ yields $$\label{s1} \xymatrix{0\ar[r]&\text{Hom}_A(T, L_1)\ar[r]& P_C(1) \ar[r]^{f} & I_C(i) \ar[r] & \text{Ext}^1(T, L_1)\ar[r]&0}$$ By the Auslander-Reiten formula, we have $\text{Ext}^1(T, L_1) \cong D\text{Hom}(\tau^{-1}L_1, T)$ and this is zero because $T_1$ is the last summand of $T$ on the ray $\omega$. Thus the sequence (\[s1\]) is short exact, the morphism $f$ is a projective cover, because $I_C(i)$ is uniserial, and hence $$\Omega_C I_C(i) \cong \text{Hom}_A(T, L_1).$$ Applying $\tau_C$ yields $$\tau_C \Omega_C I_C(i) \cong \tau_C \text{Hom}_A(T, L_1).$$ Let $E_1$ be the indecomposable direct predecessor of $L_1$ such that the almost split sequence ending at $L_1$ is of the form $$\label{s2} \xymatrix{0\ar[r]&\tau L_1 \ar[r] & E_1\oplus \tau T_1 \ar[r] & L_1 \ar[r] &0}$$ We claim that $E_1\in {\mathscr{T}(T)}$. Recall that $L_1$ is not a summand of $T$ because $\Omega_C I_C (i) = \text{Hom}_A(T,L_1)$ is non projective. Also, recall that $T_1$ is the last summand of $T$ on the ray $\omega$. Suppose $E_1\not\in{\mathscr{T}(T)}$, thus $0\not=\text{Ext}^1_A(T, E_1) = D\text{Hom} (\tau^{-1} E_1, T)$. 
Then it follows that there is a summand of $T$ on the ray $\tau\omega$ that is a successor of $\tau^{-1}E_1$. Let $T^1$ denote the first such indecomposable summand. $$\xymatrix@!C=5pt@!R=5pt{\ar[dr]&&\tau T_1\ar[dr] && T_1\ar[dr]\\ &\tau L_1\ar[ur]\ar[dr] && L_1 \ar[dr]\ar[ur]&&\tau^{-1}L_1\ar[dr]\\ &&E_1 \ar[ur] && \tau^{-1} E_1\ar[dr] \ar[ur]&& \ar@{..}[dr]\\ &&&&&\ar@{..}[dr]&& N \ar[dr]\\ &&&&&&T^1\ar[ur]\ar[dr] && \ar@{..}[dr]\\ &&&&&&&\ar@{..}[dr]&& \omega \\ &&&&&&&&\tau\omega &&&\\ }$$ Then we have a short exact sequence $$\xymatrix{0\ar[r]& L_1\ar[r]^-{h}&T_1\oplus T^1 \ar[r] & N \ar[r] & 0}$$ with $h$ an $\text{add}\,T$-approximation. Applying $\text{Hom}_A(-, T)$ yields $$\xymatrix@R=5pt{0\ar[r]&\text{Hom}_A(N,T)\ar[r]& \text{Hom}_A(T_1\oplus T^1, T)\ar[r]^-{h^*}& \text{Hom}_A(L_1, T)\\ &\hspace{2cm}\ar[r]&\text{Ext}^1_A(N,T)\ar[r]&0\hspace{1.5cm}}$$ and since $h$ is an $\text{add}\,T$-approximation, the morphism $h^*$ is surjective. Thus $\text{Ext}^1_A(N,T) = 0$. On the other hand, $T_1\oplus T^1$ generates $N$, so $N\in\text{Gen}\,T = {\mathscr{T}(T)}$, and thus $\text{Ext}^1_A(T,N)=0$. But then both $\text{Ext}^1_A(T,N) = \text{Ext}^1_A(N,T)=0$ and we see that $N$ is a summand of $T$. This is a contradiction to the assumption that $T_1$ is the last summand of $T$ on the ray $\omega$. Thus $E_1\in{\mathscr{T}(T)}$. Therefore, in the almost split sequence (\[s2\]), we have $L_1, E_1 \in {\mathscr{T}(T)}$ and $\tau T_1 \in {\mathscr{F}(T)}$. Moreover, all predecessors of $\tau T_1$ on the ray $\tau \omega$ are also in ${\mathscr{F}(T)}$ because the morphisms on the ray are injective. Since $\text{Hom}_A(T, -):\; {\mathscr{T}(T)}\to \mathcal{Y}(T)$ is an equivalence of categories, it follows that $\text{Hom}_A(T, L_1)$ has only one direct predecessor $$\text{Hom}_A(T, E_1)\to \text{Hom}_A(T, L_1)$$ in $\text{mod}\,C$ and this irreducible morphism is surjective. 
The kernel of this morphism is $\text{Hom}_A(T, t(\tau_A L_1))$ where $t$ is the torsion radical. Thus we get $$\tau_C\Omega_C I_C(i) = \tau_C \text{Hom}_A(T, L_1) = \text{Hom}_A(T, t(\tau_A L_1)).$$ We will show that $t(\tau_A L_1)$ lies on the mouth of $\mathcal{T}_A$ and this will complete the proof in case 1. Let $M$ be the indecomposable $A$-module on the mouth of $\mathcal{T}_A$ such that the ray starting at $M$ passes through $\tau_A L_1$. Thus $M$ is the starting point of the ray $\tau^{2} \omega$. Then there is a short exact sequence of the form $$\label{s3} \xymatrix{0\ar[r]& M \ar[r] & \tau_A L_1 \ar[r] & \tau_A T_1 \ar[r] & 0}$$ with $\tau_A T_1\in {\mathscr{F}(T)}$. We claim that $M\in {\mathscr{T}(T)}$. Suppose to the contrary that $0\not=\text{Ext}^1_A(T,M)=D\text{Hom}_A(\tau^{-1}M, T)$. Since $\tau^{-1}M$ lies on the mouth of $\mathcal{T}_A$, this implies that there is a direct summand $T^1$ of $T$ which lies on the ray $\tau \omega$ starting at $\tau^{-1}M$. Since $T$ is tilting, $T^1$ cannot be a predecessor of $\tau T_1$ on this ray and since $L_1$ is not a summand of $T$, we also have $L_1\not=T^1$. Thus $T^1$ is a successor of $L_1$ on the ray $\tau\omega$. This is impossible since such a $T^1$ would satisfy $\text{Ext}^1_A(T^1, E_1)\not=0$ contradicting the fact that $E_1\in {\mathscr{T}(T)}$. Therefore, $M\in{\mathscr{T}(T)}$ and the sequence (\[s3\]) is the canonical sequence for $\tau_A L_1$ in the torsion pair $({\mathscr{T}(T)}, {\mathscr{F}(T)})$. This shows that $t(\tau_A L_1) = M$ and hence $\tau_C\Omega_C I_C(i) = \text{Hom}_A(T,M)$ as desired. Case 2. Now suppose that $T_i$ does not lie on the mouth of $\mathcal{T}_A$. Let $\omega_1$ denote the ray passing through $T_i$ and $\omega_2$ the coray passing through $T_i$. Denote by $T_1$ the last summand of $T$ on $\omega_1$, by $T_2$ the last summand of $T$ on $\omega_2$, and by $L_j$ the direct predecessor of $T_j$ which does not lie on $\omega_j$. 
Note that $L_2$ does not exist if $T_2$ lies on the mouth of $\mathcal{T}_A$, and in this case we let $L_2=0$. Thus we have the following local configuration in $\mathcal{T}_A$. $$\xymatrix@!C=5pt@!R=5pt{M\ar[dr] &&&& && &&& & L_2 \ar[dr] && \tau^{-1}L_2\\ &\ar@{..}[dr]&&&&&&&&\tau T_2\ar[ur]\ar[dr]\ar@{..}[dl]&& T_2\ar[ur]&&\\ &&\ar@{..}[dr]&&&&&&&&\ar[ur]\ar@{..}[dl]\\ &&&\ar@{..}[dr]&&\ar[dr]\ar@{..}[ur]&&\ar[dr]\ar@{..}[ur]&&&&&&&&\\ &&&&\tau^2 T_i\ar[ur]\ar[dr] && \tau T_i \ar[dr]\ar[ur] && T_i \ar[ur]\ar[dr]\\ &&&&&\ar[ur]\ar@{..}[dr]&&\ar[ur]\ar@{..}[dr]&&\ar@{..}[dr]&&\\ &&&&&&\ar[dr]&&\tau T_1\ar[dr] && T_1\ar[dr] \\ &&&&&&&\tau L_1 \ar[ur] && L_1\ar[ur] && \tau^{-1}L_1}$$ The injective $C$-module $I_C(i) = \text{Ext}^1_A(T, \tau T_i)$ is biserial with top $S(1)\oplus S(2)$. Moreover, there is a short exact sequence $$\xymatrix{0\ar[r]&\tau T_i \ar[r] & L_1\oplus L_2 \oplus T_i \ar[r] & T_1\oplus T_2 \ar[r] & 0.}$$ Applying $\text{Hom}_A(T,-)$ yields the following exact sequence. $$\label{s4} \xymatrix@R=5pt{0\ar[r]&\text{Hom}_A(T, L_1\oplus L_2)\oplus P_C(i) \ar[r] & P_C(1)\oplus P_C(2) \ar[r]^-{f}&I_C(i)\\ &\hspace{3cm}\ar[r]&\text{Ext}^1_A(T, L_1\oplus L_2)\ar[r]&0.}$$ By the same argument as in case 1, using that $T_1$ and $T_2$ are the last summands of $T$ on $\omega_1$ and $\omega_2$ respectively, we see that $\text{Ext}^1_A(T, L_1\oplus L_2)=0$. Therefore, the sequence (\[s4\]) is short exact. Moreover, the morphism $f$ is a projective cover and thus $$\Omega_C I_C(i) = \text{Hom}_A(T, L_1\oplus L_2) \oplus P_C (i).$$ Applying $\tau_C$ yields $$\tau_C\Omega_C I_C(i) = \tau_C \text{Hom}_A(T, L_1) \oplus \tau_C \text{Hom}_A(T, L_2).$$ By the same argument as in case 1 we see that $$\tau_C \text{Hom}_A (T, L_1) = \text{Hom}_A (T, t(\tau_A L_1)) = \text{Hom}_A(T, M)$$ where $M$ is the indecomposable $A$-module on the mouth of $\mathcal{T}_A$ such that the ray starting at $M$ passes through $\tau L_1$. 
In other words, $M$ is the starting point of the ray $\tau^2 \omega_1$. Therefore, it only remains to show that $\tau_C \text{Hom}_A(T, L_2) = 0$. To do so, it suffices to show that $L_2$ is a summand of $T$. We have already seen that $\text{Ext}^1_A(T, L_2)=0$. We now show that we also have $\text{Ext}^1_A(L_2, T)=0$. Suppose the contrary. Then there exists a non-zero morphism $u:\, T\to \tau_A L_2$. Composing it with the irreducible injective morphism $\tau_A L_2\to \tau_A T_2$ yields a non-zero morphism in $\text{Hom}_A(T, \tau_A T_2)$. But this is impossible since $T$ is tilting. Thus we have $\text{Ext}^1_A(T, L_2)= \text{Ext}^1_A(L_2, T)=0$, so $L_2$ is a summand of $T$, the module $\text{Hom}_A(T, L_2)$ is projective and $\tau_C \text{Hom}_A (T, L_2)=0$. This completes the proof. The module $M$ in the statement of the lemma is the starting point of the ray passing through $\tau^2 T_i$. \[cor5\] Let $A, T, C, \mathcal{T}_A$ be as in Lemma \[lem tube2\], and let $B = C \ltimes E$, with $E = \textup{Ext}^2_C(DC,C)$. Let $X,Y$ be two modules lying on the same coray in the tube $\textup{Hom}_A(T, \mathcal{T}_A\cap {\mathscr{T}(T)})$ in $\textup{mod}\,C$. Then $X\otimes_C E \cong Y\otimes _C E$ and thus the two projections $X\otimes_C B \to X \to 0$ and $Y\otimes _C B \to Y \to 0$ have isomorphic kernels. For all $C$-modules $X$ we have $$X\otimes _C E \cong D\text{Hom} (X, DE) \cong D\text{Hom}(X, \tau_C\Omega_C DC)$$ where the first isomorphism is [@SS Proposition 3.3] and the second is [@SS Proposition 4.1]. Since $T$ has no preinjective summands, and $X$ is regular, the only summand of $\tau\Omega DC$ for which $\text{Hom}(X, \tau\Omega DC)$ can be nonzero must lie in the same tube as $X$. By the lemma, the only summands of $\tau\Omega DC$ in the tube lie on the mouth of the tube. Let $M$ denote an indecomposable $C$-module on the mouth of a tube. 
Then $$\text{Hom}_C(X,M) \cong \text{Hom}_C (Y,M) \cong \left \{ \begin{array}{ll}k & \text{if }\, M \, \text{ lies on the coray passing}\\ &\text{ through } \, X \text{ and }\, Y, \\ 0 & \text{otherwise.}\end{array}\right.$$ Hence $X\otimes_C E \cong Y\otimes_C E$, as claimed. We summarize the results of this section in the following proposition. \[prop5\] - Let ${\mathcal{S}}_1$ be the coray in ${\Gamma}(\textup{mod}\,C_1)$ passing through the projective $C_1$-module corresponding to the root projective $P_B(i)$. Then ${\mathcal{S}}_1\otimes_{C_1} B$ is a coray in ${\Gamma}(\textup{mod}\,B)$ passing through $P_B(i)$. Furthermore all modules in ${\mathcal{S}}_1\otimes_{C_1} B $ are extensions of modules of ${\mathcal{S}}_1$ by the same module $P_{C_1}(i)\otimes E$. - Let ${\mathcal{S}}_2$ be the ray in ${\Gamma}(\textup{mod}\,C_2)$ passing through the injective $C_2$-module corresponding to the root injective $I_B(i)$. Then ${\textup{Hom}}_{C_2}(B,{\mathcal{S}}_2)$ is a ray in ${\Gamma}(\textup{mod}\,B)$ passing through $I_B(i)$. Furthermore all modules in ${\textup{Hom}}_{C_2}(B,{\mathcal{S}}_2)$ are extensions of modules of ${\mathcal{S}}_2$ by the same module ${\textup{Hom}}_{C_2}(E,I_{C_2}(i))$. \(a) The first statement is Lemma \[lem tube1\], and the second statement is a restatement of Corollary \[cor5\]. Part (b) is proved dually. Let $B$ be the cluster-tilted algebra given by the quiver $$\xymatrix@C=15pt@R=15pt{1\ar@<2pt>[rr]^{\lambda}\ar@<-2pt>[rr]_{\beta}&&5\ar[dl]^{\epsilon}\\ &3\ar[ul]^{\alpha}\ar[dr]^{\delta}\\ 2\ar[ur]^{\gamma}&&4\ar[ll]^{\sigma}}$$ bound by $\alpha\beta=0, \beta\epsilon=0, \epsilon\alpha=0, \gamma\delta=0, \sigma\gamma=0, \delta\sigma=0$. The algebras $C_1$ and $C_2$ are respectively given by the quivers $$\xymatrix@C=15pt@R=15pt{1\ar@<2pt>[rr]^{\lambda}\ar@<-2pt>[rr]_{\beta}&&5&&&& 1\ar@<2pt>[rr]^{\lambda}\ar@<-2pt>[rr]_{\beta}&&5\ar[dl]^{\epsilon} \\ &3\ar[dr]^{\delta} \ar[ul]^{\alpha}&&&\text{and} & && 3\ar[dr]^{\delta}\\ 2\ar[ur]^{\gamma}&&4&&&& 2\ar[ur]^{\gamma}&&4}$$ with the inherited relations. 
We can see the tube in $\Gamma(\textup{mod}\, C_{1})$ below and the coray passing through the root projective $P_{C_1}(3) = \begin{smallmatrix}3\\4\;1\\\;\;\;5\end{smallmatrix}$ is given by $$\xymatrix{\mathcal{S}_1: &\dots\ar[r]& {\begin{smallmatrix}1\\5\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}3\\4\;1\\\;\;\;5\end{smallmatrix}}\ar[r] &{\begin{smallmatrix}3\\1\\5\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}2\\3\\1\\5\end{smallmatrix}}.}$$ $$\xymatrix@!C=15pt@!R=15pt{&&&&&& {\begin{smallmatrix}4\end{smallmatrix}}\ar[dr]\ar@{--}[dd]\\ &&&&&&&{\begin{smallmatrix}3\\4\;1\\\;\;\;5\end{smallmatrix}}\\ &&&& {\begin{smallmatrix}2\\3\\1\\5\end{smallmatrix}}\ar[dr]&&{\begin{smallmatrix}1\\5\end{smallmatrix}}\ar[ur]\ar@{--}[d]\\ &{\begin{smallmatrix}4\end{smallmatrix}}\ar[dr]\ar@{--}[dd]&&{\begin{smallmatrix}3\\1\\5\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}2\;\;\\3\;\;\\1\;1\\\;5\;5\end{smallmatrix}}\ar[ur]&&\\ &&{\begin{smallmatrix}3\\4\;1\\\;\;\;5\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\\1\;1\\\;5\;5\end{smallmatrix}}\ar[ur]&&\\ &{\begin{smallmatrix}1\\5\end{smallmatrix}}\ar[ur]\ar[dr]\ar@{--}[d]&&{\begin{smallmatrix}3\\4\;1\;1\\\;\;\;\;5\;5\end{smallmatrix}}\ar[ur]\\ \ar[ur]&&{\begin{smallmatrix}1\;1\\\;5\;5\end{smallmatrix}}\ar[ur]}$$ Dually, the ray in $\Gamma(\textup{mod}\, C_2)$ passing through the root injective $I_{C_2}(3) = {\begin{smallmatrix}1\;\;\;\\5\;2\\3\end{smallmatrix}}$ is given by $$\xymatrix{\mathcal{S}_2: & {\begin{smallmatrix}1\\5\\3\\4\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}1\\5\\3\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}1\;\;\;\\5\;2\\3\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}1\\5\end{smallmatrix}}\ar[r]&\dots }$$ The root projective $P_B(3)$ lies on the coray $$\xymatrix{\mathcal{S}_1\otimes_{C_1} B: & \dots\ar[r]& {\begin{smallmatrix}1\\5\\3\\4\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}3\\4\;1\\\;\;\;5\\\;\;\;3\\\;\;\;4\end{smallmatrix}}\ar[r] 
&{\begin{smallmatrix}3\\1\\5\\3\\4\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}2\\3\\1\\5\\3\\4\end{smallmatrix}}}$$ and the root injective $I_B(3)$ lies on the ray $$\xymatrix{\textup{Hom}_{C_2}(B, \mathcal{S}_2): & {\begin{smallmatrix}2\\3\\1\\5\\3\\4\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}2\\3\\1\\5\\3\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}2\;\;\;\\3\;\;\;\\1\;\;\;\\5\;2\\3\end{smallmatrix}}\ar[r]&{\begin{smallmatrix}2\\3\\1\\5\end{smallmatrix}}\ar[r]&\dots }$$ Note that by Proposition \[prop5\], every module in $\mathcal{S}_1\otimes _{C_1} B$ is an extension of a module in $\mathcal{S}_1$ by $\begin{smallmatrix}3\\4\end{smallmatrix}$. Similarly, every module in $\textup{Hom}_{C_2}(B, \mathcal{S}_2)$ is an extension of a module in $\mathcal{S}_2$ by $\begin{smallmatrix}2\\3\end{smallmatrix}$. Applying the knitting algorithm we obtain the tube in $\Gamma(\textup{mod}\,B)$ containing both $\mathcal{S}_1\otimes_{C_1} B$ and $\textup{Hom}_{C_2}(B, \mathcal{S}_2)$. $$\xymatrix@!C=15pt@!R=15pt{ {\begin{smallmatrix}4\\2\end{smallmatrix}}\ar[dr]\ar@{--}[dd]&&\circ&&{\begin{smallmatrix}2\\3\\1\\5\\3\\4\end{smallmatrix}}\ar[dr]&&\circ&&{\begin{smallmatrix}4\\2\end{smallmatrix}}\ar@{--}[dd]\\ &{\begin{smallmatrix}4\end{smallmatrix}}\ar[dr]&&{\begin{smallmatrix}3\\1\\5\\3\\4\end{smallmatrix}}\ar[dr]\ar[ur]&&{\begin{smallmatrix}2\\3\\1\\5\\3\end{smallmatrix}}\ar[dr]&&{\begin{smallmatrix}2\end{smallmatrix}}\ar[ur]\\ \circ \ar@{--}[dd]&&{\begin{smallmatrix}3\\4\;1\\\;\;\;5\\\;\;\;3\\\;\;\;4\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\\1\\5\\3\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}2\;\;\;\\3\;\;\;\\1\;\;\;\\5\;2\\3\end{smallmatrix}}\ar[ur]\ar[dr]&&\circ\ar@{--}[dd]\\ 
&{\begin{smallmatrix}1\\5\\3\\4\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\\4\;1\\\;\;\;\;5\\\;\;\;3\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\;\;\;\\1\;\;\;\\5\;2\\3\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}2\\3\\1\\5\end{smallmatrix}}\ar[dr]\\ {\begin{smallmatrix}2\;\;\;\\3\;\;\;\\\;1\;1\\\;\;5\;5\\\;\;\;\;\;3\\\;\;\;\;\;4\end{smallmatrix}}\ar[ur]\ar[dr]\ar@{--}[d]&&{\begin{smallmatrix}1\\5\\3\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\;\;\;\\4\;1\\\;\;\;5\;2\\\;\;\;\;\;\;3\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}3\\1\\5\end{smallmatrix}}\ar[ur]\ar[dr]&&{\begin{smallmatrix}2\;\;\;\\3\;\;\;\\\;1\;1\\\;\;5\;5\\\;\;\;\;\;3\\\;\;\;\;\;4\end{smallmatrix}}\ar@{--}[d]\\ &\ar[ur]&&\ar[ur]&&\ar[ur]&&\ar[ur]&}$$ From cluster-tilted algebras to quasi-tilted algebras ===================================================== Let $B$ be cluster-tilted of euclidean type $Q$ and let $A=kQ$. Then there exists $T\in {\mathcal{C}}_A$ tilting such that $B={\textup{End}}_{{\mathcal{C}}_A}T$. Because $Q$ is euclidean, ${\mathcal{C}}_A$ contains at most 3 exceptional tubes. Denote by $T_{0},T_1,T_2,T_3$ the direct sums of those summands of $T$ that respectively lie in the transjective component and in the three exceptional tubes. In the derived category ${\mathcal{D}^b(\textup{mod}\,A)}$, we can choose a lift of $T$ such that we have the following local configuration. Let ${\mathcal{H}}$ be a hereditary category that is derived equivalent to $\textup{mod}\,A$ and such that ${\mathcal{H}}$ is not the module category of a hereditary algebra. Then ${\mathcal{H}}$ is of the form ${\mathcal{H}}={\mathcal{T}}^-\vee{\mathcal{C}}\vee{\mathcal{T}}^+$, where ${\mathcal{T}}^-, {\mathcal{T}}^+$ consist of tubes, and ${\mathcal{C}}$ is a transjective component, see [@LenzingSkowronski]. Let $T_-$, $T_+$ be the direct sum of all indecomposable summands of $T$ lying in ${\mathcal{T}}^-$, ${\mathcal{T}}^+ $ respectively. 
We define two subspaces $L$ and $R$ of $B$ as follows. $$L={\textup{Hom}}_{{\mathcal{D}^b(\textup{mod}\,A)}}(F^{-1}T_+,T_{0}) \quad \textup{and} \quad R={\textup{Hom}}_{{\mathcal{D}^b(\textup{mod}\,A)}}(T_{0},FT_-).$$ The transjective component of $\textup{mod}\,B$ contains a left section ${\Sigma}_L$ and a right section ${\Sigma}_R$, see [@Assem]. Thus ${\Sigma}_L,{\Sigma}_R$ are local slices, ${\Sigma}_L$ has no projective predecessors, and ${\Sigma}_R $ has no projective successors in the transjective component. Define $K$ to be the two-sided ideal of $B$ generated by $ {\textup{Ann}}\, {\Sigma}_L\cap{\textup{Ann}}\, {\Sigma}_R $ and the two subspaces $L$ and $R$. Thus $$K=\langle {\textup{Ann}}\, {\Sigma}_L\cap{\textup{Ann}}\, {\Sigma}_R , L, R\rangle.$$ We call $K$ the [*partition ideal*]{} induced by the partition ${\mathcal{T}}^-\vee {\mathcal{C}}\vee{\mathcal{T}}^+$. \[thm ctaqt\] The algebra $C=B/K$ is quasi-tilted and such that $B={\widetilde{C}}$. Moreover $C$ is tilted if and only if $L=0$ or $R=0$. We have $B={\textup{End}}_{{\mathcal{C}}_A} T =\oplus_{i\in{\mathbb{Z}}}{\textup{Hom}}_{{\mathcal{D}^b(\textup{mod}\,A)}}(T,F^iT)$, where the last equality is as $k$-vector spaces. Using the decomposition $T=T_-\oplus T_0 \oplus T_+$, we see that $B$ is equal to $$\begin{array} {ccccccccccc} & {\textup{Hom}}_{{\mathcal{D}}}(T_-,T_-) &\oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_-,T_0) &\oplus&{\textup{Hom}}_{{\mathcal{D}}}(T_-,FT_-) \\ \oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_0,T_0) &\oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_0,T_+)&\oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_0,FT_-) \\ \oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_0,FT_0) &\oplus& {\textup{Hom}}_{{\mathcal{D}}}(F^{-1}T_+,FT_0)&\oplus& {\textup{Hom}}_{{\mathcal{D}}}(F^{-1}T_+,T_+) \\ \oplus& {\textup{Hom}}_{{\mathcal{D}}}(T_+,T_+), \end{array}$$ where all Hom spaces are taken in ${\mathcal{D}^b(\textup{mod}\,A)}$. 
On the other hand, $$\begin{array} {ccccccccccc} {\textup{End}}_{{\mathcal{H}}} T&=& {\textup{Hom}}_{{\mathcal{H}}}(T_-,T_-) &\oplus& {\textup{Hom}}_{{\mathcal{H}}}(T_-,T_0) &\oplus&{\textup{Hom}}_{{\mathcal{H}}} (T_0,T_0)\\ &\oplus& {\textup{Hom}}_{{\mathcal{H}}}(T_0,T_+)&\oplus& {\textup{Hom}}_{{\mathcal{H}}}(T_+,T_+) \end{array}$$ is a quasi-tilted algebra. Thus in order to prove that $C$ is quasi-tilted it suffices to show that $K$ is the ideal generated by $${\textup{Hom}}_{{\mathcal{D}}}(T_-,FT_-) \oplus {\textup{Hom}}_{{\mathcal{D}}}(T_0,FT_-\oplus FT_0) \oplus {\textup{Hom}}_{{\mathcal{D}}}(F^{-1}T_+,T_0\oplus T_+).$$ But this follows from the definition of $L$ and $R$ and the fact that the annihilators of the local slices ${\Sigma}_L$ and ${\Sigma}_R$ are given by the morphisms in ${\textup{End}}_{{\mathcal{C}}_A}T$ that factor through the lifts of the corresponding local slice in the cluster category. More precisely, $$\begin{array}{rcl}{\textup{Ann}}\,{\Sigma}_L &\cong&{\textup{Hom}}_{{\mathcal{D}}}(F^{-1} T_0\oplus F^{-1}T_+\oplus T_- \ ,\ T_0\oplus T_+ \oplus FT_-),\\ {\textup{Ann}}\,{\Sigma}_R &\cong&{\textup{Hom}}_{{\mathcal{D}}}( F^{-1}T_+\oplus T_-\oplus T_0 \ ,\ T_+ \oplus FT_-\oplus FT_0), \end{array}$$ and thus $$\begin{array}{rcl}{\textup{Ann}}\,{\Sigma}_L\cap{\textup{Ann}}\,{\Sigma}_R &\cong&{\textup{Hom}}_{{\mathcal{D}}}( T_0,F T_0)\oplus {\textup{Hom}}_{{\mathcal{D}}}( T_-,F T_-)\\&&\ \oplus {\textup{Hom}}_{{\mathcal{D}}}( F^{-1}T_+,T_+), \end{array}$$ where we used the fact that ${\textup{Hom}}_{{\mathcal{D}}}(T_-,T_+)={\textup{Hom}}_{{\mathcal{D}}}(T_+,T_-)=0$. This completes the proof that $C$ is quasi-tilted. Since $C={\textup{End}}_{\mathcal{H}}T$, we have ${\widetilde{C}}={\textup{End}}_{{\mathcal{C}}_{\mathcal{H}}}T\cong{\textup{End}}_{{\mathcal{C}}_A}T=B.$ Now assume that $R=0$. 
Then $T_-=0$ and thus $K$ is generated by $({\textup{Ann}}\,{\Sigma}_L\cap{\textup{Ann}}\,{\Sigma}_R)\oplus L$, and this is equal to $$\label{eq ctaqt} {\textup{Hom}}_{\mathcal{D}}(T_0,FT_0)\oplus {\textup{Hom}}_{\mathcal{D}}(F^{-1}T_+,T_+)\oplus {\textup{Hom}}_{\mathcal{D}}(F^{-1}T_+,FT_0).$$ On the other hand, $T_-=0$ implies that $${\textup{Ann}}\,{\Sigma}_L = {\textup{Hom}}_{\mathcal{D}}(F^{-1}T_0\oplus F^{-1}T_+,T_0\oplus T_+),$$ and since ${\textup{Hom}}_{\mathcal{D}}(F^{-1}T_0,T_+)=0$, this implies that $K={\textup{Ann}}\,{\Sigma}_L$ is the annihilator of a local slice. Therefore $C=B/K$ is tilted by [@ABS2]. The case where $L=0$ is proved in a similar way. Conversely, assume $C$ is tilted. Then $K={\textup{Ann}}\,{\Sigma}'$ for some local slice ${\Sigma}'$ in $\textup{mod}\,B$. We show that $K={\textup{Ann}}\,{\Sigma}_L$ or $K={\textup{Ann}}\,{\Sigma}_R$. Suppose to the contrary that ${\Sigma}'$ has both a predecessor and a successor in ${\textup{add}}\, T_0$. Then there exists an arrow ${\alpha}$ in the quiver of $B$ such that ${\alpha}\in{\textup{Hom}}_{\mathcal{D}}(T_0,T_0)$ and ${\alpha}\in{\textup{Ann}}\,{\Sigma}'=K$. But by definition of ${\Sigma}_L,{\Sigma}_R,L $ and $R$, we see that this is impossible. Thus $K={\textup{Ann}}\,{\Sigma}_L$ or $K={\textup{Ann}}\,{\Sigma}_R$. In the former case, we have $R=0$, by the computation (\[eq ctaqt\]), and in the latter case, we have $L=0$. \[thm ctaqt2\] If $C$ is quasi-tilted of euclidean type and $B={\widetilde{C}}$ then $$C=B/{\textup{Ann}}({\Sigma}^-\oplus {\Sigma}^+),$$ where ${\Sigma}^- $ is a right section in the postprojective component of $C$ and ${\Sigma}^+$ is a left section in the preinjective component. $C$ being quasi-tilted implies that there is a hereditary category ${\mathcal{H}}$ with a tilting object $T$ such that $C={\textup{End}}_{\mathcal{H}}T$. Moreover, $B={\textup{End}}_{{\mathcal{C}}_{\mathcal{H}}} T$ is the corresponding cluster-tilted algebra. 
As before we use the decomposition $T=T_-\oplus T_0\oplus T_+$. Then the algebras $$C^- ={\textup{End}}_{\mathcal{H}}(T_-\oplus T_0)\quad\textup{and}\quad C^+ ={\textup{End}}_{\mathcal{H}}( T_0\oplus T_+)$$ are tilted. Let ${\Sigma}^-$ and $ {\Sigma}^+$ be complete slices in $\textup{mod}\,C^-$ and $\textup{mod}\,C^+$ respectively. Note that ${\Sigma}^-$ lies in the postprojective component and ${\Sigma}^+$ lies in the preinjective component of their respective module categories. Then $C$ is a branch extension of $C^-$ by the module $$M^+={\textup{Hom}}_{\mathcal{H}}(T_+,T_+)\oplus {\textup{Hom}}_{\mathcal{H}}(T_0,T_+).$$ Similarly $C$ is a branch coextension of $C^+$ by the module $$M^-={\textup{Hom}}_{\mathcal{H}}(T_-,T_-)\oplus {\textup{Hom}}_{\mathcal{H}}(T_-,T_0).$$ Observe that the postprojective component of $C^-$ does not change under the branch extension, and the preinjective component of $C^+$ does not change under the branch coextension. Therefore ${\Sigma}^-$ is a right section in the postprojective component of $C$ and ${\Sigma}^+$ is a left section in the preinjective component. Moreover, by construction, we have $${\textup{Ann}}_B{\Sigma}^- =M^+\oplus {{\textup{Ext}}^2_C(DC,C)}\quad\textup{and}\quad {\textup{Ann}}_B{\Sigma}^+ =M^-\oplus {{\textup{Ext}}^2_C(DC,C)},$$ and therefore $${\textup{Ann}}_B({\Sigma}^- \oplus {\Sigma}^+)={\textup{Ann}}_B{\Sigma}^- \cap{\textup{Ann}}_B{\Sigma}^+= {{\textup{Ext}}^2_C(DC,C)}.$$ This completes the proof. The main theorem of this section is the following. \[thm ctaqt3\] Let $C$ be a quasi-tilted algebra whose relation-extension $B$ is cluster-tilted of euclidean type. Then $C$ is one of the following. - $C=B/{\textup{Ann}}\,{\Sigma}$ for some local slice ${\Sigma}$ in ${\Gamma}(\textup{mod}\,B)$. - $C=B/K$ for some partition ideal $K$. Assume first that $C$ is tilted. 
Then, because of [@ABS2], there exists a local slice ${\Sigma}$ in the transjective component of ${\Gamma}(\textup{mod}\,B)$ such that $B/{\textup{Ann}}\,{\Sigma}=C$. Otherwise, assume that $C$ is quasi-tilted but not tilted. Then, because of [@LenzingSkowronski], there exists a hereditary category ${\mathcal{H}}$ of the form $${\mathcal{H}}= {\mathcal{T}}^-\vee{\mathcal{C}}\vee{\mathcal{T}}^+$$ and a tilting object $T$ in ${\mathcal{H}}$ such that $C={\textup{End}}_{\mathcal{H}}T$. Because of Theorem \[thm ctaqt\] we get $C=B/K$ where $K$ is the partition ideal induced by the given partition of ${\mathcal{H}}$. Let $B$ be the cluster-tilted algebra of type $\widetilde{\mathbb{E}}_7$ given by the quiver $$\xymatrix@C50pt{8\ar[dr] && 7\ar[ll]_{\epsilon}\\ &6\ar[ru]\ar[rdd]^(0.33){{{\beta}_3}} \\ &5\ar[rd]^(0.4){{\beta}_2\quad}\\ 1\ar[ruu]^(0.65){ {\alpha}_3}\ar[ru]^(0.6){ {\alpha}_2}\ar[rd]_(0.6){{\alpha}_1} && 2\ar@<1.5pt>[ll]\ar@<-1.5pt>[ll]\\ &3\ar[ru]_(0.4){{\beta}_1}\ar[rd] \\ &&4 }$$ As usual let $T_i$ denote the indecomposable summand of $T$ corresponding to the vertex $i$ of the quiver. In this example $T$ has two transjective summands $T_1,T_2$, and the other summands lie in three different tubes. $T_3, T_4$ lie in a tube ${\mathcal{T}}_1$, $T_5 $ lies in a tube ${\mathcal{T}}_2$ and $T_6,T_7,T_8$ lie in a tube ${\mathcal{T}}_3$. Choosing a partition ideal corresponds to choosing a subset of tubes to be predecessors of the transjective component. Thus there are 8 different partition ideals corresponding to the 8 subsets of $\{{\mathcal{T}}_1,{\mathcal{T}}_2,{\mathcal{T}}_3\}$. If the tube ${\mathcal{T}}_i$ is chosen to be a predecessor of the transjective component, then the arrow ${\beta}_i$ is in the partition ideal. And if ${\mathcal{T}}_i $ is not chosen to be a predecessor of the transjective component, then it is a successor and consequently the arrow ${\alpha}_i$ is in the partition ideal. 
The arrow ${\epsilon}$ is always in the partition ideal since it corresponds to a morphism from $T_8$ to $FT_7$ in the derived category. Summarizing, the 8 partition ideals $K$ are the ideals generated by the following sets of arrows. $$\{{\alpha}_{i}, {\beta}_j,{\epsilon}\mid i\notin I , j\in I, I \subset\{1,2,3\}\}.$$ The quiver of the corresponding quasi-tilted algebra $B/K$ is obtained by removing the generating arrows from the quiver of $B$. Exactly 2 of these 8 algebras are tilted, and these correspond to cutting ${\alpha}_1,{\alpha}_2,{\alpha}_3,{\epsilon}$, respectively ${\beta}_1,{\beta}_2,{\beta}_3,{\epsilon}$. C. Amiot, Cluster categories for algebras of global dimension 2 and quivers with potential, *Ann. Inst. Fourier* [**59**]{} (2009), no. 6, 2525–2590. I. Assem, Left sections and the left part of an Artin algebra, [*Colloq. Math.*]{} [**116**]{} (2009), no. 2, 273–300. I. Assem, T. Brüstle, G. Charbonneau-Jodoin and P. G. Plamondon, Gentle algebras arising from surface triangulations, [*Algebra Number Theory*]{} [**4**]{} (2010), no. 2, 201–229. , Cluster-tilted algebras as trivial extensions, *Bull. Lond. Math. Soc.* [**40**]{} (2008), 151–162. , Cluster-tilted algebras and slices, *J. Algebra* [**319**]{} (2008), 3464–3479. , On the Galois covering of a cluster-tilted algebra, *J. Pure Appl. Alg.* [**213**]{} (2009), no. 7, 1450–1463. , Cluster-tilted algebras without clusters, *J. Algebra* [**324**]{} (2010), 2475–2502. I. Assem, R. Schiffler and K. Serhiyenko, Modules that do not lie on local slices, in preparation. I. Assem, D. Simson and A. Skowroński, *Elements of the Representation Theory of Associative Algebras, 1: Techniques of Representation Theory*, London Mathematical Society Student Texts 65, Cambridge University Press, 2006. M. Auslander, I. Reiten and S.O. Smalø, *Representation Theory of Artin Algebras*, Cambridge Studies in Advanced Math. 36, Cambridge University Press, Cambridge, 1995. M. Barot and S. 
Trepode, Cluster tilted algebras with a cyclically oriented quiver, [*Comm. Algebra*]{} [**41**]{} (2013), no. 10, 3613–3628. , From iterated tilted to cluster-tilted algebras, [*Adv. Math.*]{} [**223**]{} (2010), no. 4, 1468–1494. M. A. Bertani-Økland, S. Oppermann and A. Wrålsen, Constructing tilted algebras from cluster-tilted algebras, [*J. Algebra*]{} [**323**]{} (2010), no. 9, 2408–2428. A. B. Buan, R. Marsh, M. Reineke, I. Reiten and G. Todorov, Tilting theory and cluster combinatorics, *Adv. Math.* [**204**]{} (2006), no. 2, 572–618. , Cluster-tilted algebras, *Trans. Amer. Math. Soc.* [**359**]{} (2007), no. 1, 323–332 (electronic). , Cluster-tilted algebras of finite representation type, *J. Algebra* [**306**]{} (2006), no. 2, 412–431. , Quivers with relations arising from clusters ($A_n$ case), *Trans. Amer. Math. Soc.* [**358**]{} (2006), no. 3, 1347–1364. E. Fernández, N. I. Pratti and S. Trepode, On $m$-cluster tilted algebras and trivial extensions, [*J. Algebra*]{} [**393**]{} (2013), 132–141. S. Fomin and A. Zelevinsky, Cluster algebras I: Foundations, *J. Amer. Math. Soc.* [**15**]{} (2002), 497–529. D. Happel, A characterization of hereditary categories with tilting object, [*Invent. Math.*]{} [**144**]{} (2001), no. 2, 381–398. D. Happel, I. Reiten and S. Smalø, Tilting in abelian categories and quasitilted algebras, [*Mem. Amer. Math. Soc.*]{} [**120**]{} (1996), no. 575. H. Lenzing and A. Skowroński, Quasi-tilted algebras of canonical type, [*Colloq. Math.*]{} [**71**]{} (1996), no. 2, 161–181. B. Keller and I. Reiten, Cluster-tilted algebras are Gorenstein and stably Calabi-Yau, [*Adv. Math.*]{} [**211**]{} (2007), no. 1, 123–151. M. Oryu and R. Schiffler, On one-point extensions of cluster-tilted algebras, [*J. Algebra*]{} [**357**]{} (2012), 168–182. I. Reiten, Cluster categories, Proceedings of the International Congress of Mathematicians, Volume I, 558–594, Hindustan Book Agency, New Delhi, 2010. C. M. Ringel, The regular components of the Auslander-Reiten quiver of a tilted algebra, [*Chinese Ann. Math. Ser. B*]{} [**9**]{} (1988), no. 1, 1–18. C. M. Ringel, Representation theory of finite-dimensional algebras, Representations of algebras (Durham, 1985), 7–79, London Math. Soc. Lecture Note Ser., 116, Cambridge Univ. Press, Cambridge, 1986. R. Schiffler, *Quiver Representations*, CMS Books in Mathematics, Springer International Publishing, 2014. R. Schiffler and K. Serhiyenko, Induced and coinduced modules over cluster-tilted algebras, preprint, arXiv:1410.1732. R. Schiffler and K. Serhiyenko, Injective presentations of induced modules over cluster-tilted algebras, preprint, arXiv:1604.06907. A. Skowroński, Tame quasi-tilted algebras, [*J. Algebra*]{} [**203**]{} (1998), no. 2, 470–490. [^1]: The first author gratefully acknowledges partial support from the NSERC of Canada. The second author was supported by the NSF CAREER grant DMS-1254567 and by the University of Connecticut. The third author was supported by the NSF Postdoctoral fellowship MSPRF-1502881.
--- abstract: 'We establish the equidistribution with respect to the bifurcation measure of post-critically finite maps in any one-dimensional algebraic family of unicritical polynomials. Using this equidistribution result, together with a combinatorial analysis of certain algebraic correspondences on the complement of the Mandelbrot set ${\mathcal{M}}_2$ (or generalized Mandelbrot set ${\mathcal{M}}_d$ for degree $d>2$), we classify all algebraic curves $C\subset {{\mathbb C}}^2$ with Zariski-dense subsets of points $(a,b)\in C$, such that both $z^d+a$ and $z^d+b$ are simultaneously postcritically finite for a fixed degree $d\geq 2$. Our result is analogous to the famous result of André [@Andre] regarding plane curves which contain infinitely many points with both coordinates CM parameters, and is the first complete case of the dynamical André-Oort phenomenon studied by Baker and DeMarco [@Baker-DeMarco].' address: - | Dragos Ghioca\ Department of Mathematics\ University of British Columbia\ Vancouver, BC V6T 1Z2\ Canada - | Holly Krieger\ Department of Mathematics\ Massachusetts Institute of Technology\ 77 Massachusetts Avenue\ Cambridge, MA 02139\ USA - | Khoa Nguyen\ Department of Mathematics\ University of British Columbia\ Vancouver, BC V6T 1Z2\ Canada - | Hexi Ye\ Department of Mathematics\ University of British Columbia\ Vancouver, BC V6T 1Z2\ Canada author: - 'D. Ghioca' - 'H. Krieger' - 'K. Nguyen' - 'H. Ye' title: 'The Dynamical André-Oort Conjecture: Unicritical Polynomials' --- [^1] Introduction ============ In the past 20 years there has been considerable interest in studying the principle of *unlikely intersections* in arithmetic geometry (for a comprehensive discussion, see the book of Zannier [@Zannier-book]). 
Informally, this principle of unlikely intersections (of which special cases are both the André-Oort and the Pink-Zilber conjectures) predicts that whenever the intersection of an algebraic variety with a family of algebraic varieties is larger than expected, this is explained by the presence of a rigid geometric constraint. Motivated by a version of the Pink-Zilber Conjecture for semiabelian schemes, Masser and Zannier (see [@M-Z-1; @M-Z-2]) proved that in a non-constant elliptic family $E_t$ parametrized by $t\in {{\mathbb C}}$, for any two sections $\{P_t\}_t$ and $\{Q_t\}_t$, if there exist infinitely many $t\in {{\mathbb C}}$ such that both $P_t$ and $Q_t$ are torsion points on $E_t$, then the two sections are linearly dependent. Motivated by a question of Zannier, Baker and DeMarco [@Baker-DeMarco] proved a first result for the unlikely intersections principle in the context of algebraic dynamics. More precisely, Baker and DeMarco showed that for an integer $d\ge 2$, and for two complex numbers $a$ and $b$, if there exist infinitely many $t\in{{\mathbb C}}$ such that both $a$ and $b$ are preperiodic under the action of $z\mapsto z^d+t$, then $a^d=b^d$. Baker and DeMarco’s result [@Baker-DeMarco] can be seen as an analogue of Masser and Zannier’s result [@M-Z-1; @M-Z-2] (which can be reformulated for simultaneous preperiodic points in a family of Lattès maps) *without* the presence of an algebraic group. The absence of an algebraic group in the background is an added difficulty for the problem, which Baker and DeMarco overcame by employing an argument which relies on a theorem regarding the equidistribution of points of small height for algebraic dynamical systems (see [@Baker-Rumely06; @CLoir; @favre-rivera06]). New results followed (see [@GHT-ANT; @Matt-Laura-2]) extending the results of [@Baker-DeMarco] to arbitrary $1$-parameter families of polynomials.
In [@Matt-Laura-2], Baker and DeMarco posed a very general question for families of dynamical systems, which is motivated by the classical André-Oort Conjecture. As a parallel to the classical conjecture, Baker and DeMarco’s question predicts that if a subvariety $V$ of the moduli space of rational maps of given degree contains a Zariski dense set of *special points*, then $V$ itself is *special* (i.e., cut out by critical orbit relations; see [@Matt-Laura-2] for more details). The *special points* of the moduli space in Baker-DeMarco’s question are the ones corresponding to *postcritically finite (PCF) maps* $f$, i.e. maps $f$ for which each critical point is preperiodic. In this article we prove a general result for curves supporting this Dynamical André-Oort Conjecture. \[main theorem\] Let $C$ be an irreducible algebraic plane curve defined over ${{\mathbb C}}$, and let $d\geq 2$ be an integer. There exist infinitely many points $(a,b)\in C$ such that both $z\mapsto z^d+a$ and $z\mapsto z^d+b$ are postcritically finite, if and only if one of the following conditions holds: - there exists $t_0\in {{\mathbb C}}$ such that $z\mapsto z^d+t_0$ is PCF and $C$ is the curve $\{t_0\}\times {{\mathbb A}}^1$; - there exists $t_0\in {{\mathbb C}}$ such that $z\mapsto z^d+t_0$ is PCF and $C$ is the curve ${{\mathbb A}}^1\times \{t_0\}$; - there exists a $(d-1)$-st root of unity $\zeta$ such that $C$ is the zero locus of the equation $y-\zeta x=0$. In [@GKN:preprint], Theorem \[main theorem\] was proven in the special case where $C$ is the graph of a polynomial. The extension to arbitrary curves in Theorem \[main theorem\] requires overcoming several technical difficulties. We also note that Theorem \[main theorem\] can be viewed as a dynamical analogue of André’s theorem [@Andre] regarding plane curves containing infinitely many points with both coordinates CM points in the parameter space of elliptic curves.
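The symmetry underlying case (3) can be made concrete: if $\zeta^{d-1}=1$, then $f_{\zeta t}(\zeta z)=\zeta f_t(z)$, so the critical orbits of $z^d+t$ and $z^d+\zeta t$ differ by multiplication by $\zeta$, and one is finite precisely when the other is. The following numerical sketch (ours, purely illustrative, not part of the proof) checks this for $d=3$, $\zeta=-1$ and the PCF parameter $t=i$:

```python
# Sanity check of case (3): with zeta^(d-1) = 1 we have
# f_{zeta*t}(zeta*z) = zeta*f_t(z), hence f_{zeta*t}^n(0) = zeta*f_t^n(0),
# so 0 is preperiodic under z^d + t iff it is under z^d + zeta*t.
# Here d = 3, zeta = -1, t = i, for which 0 has period 2.

def orbit(c, d, n):
    """First n points of the orbit of the critical point 0 under z^d + c."""
    z, pts = 0j, []
    for _ in range(n):
        z = z**d + c
        pts.append(z)
    return pts

d, zeta, t = 3, -1, 1j
orb_a = orbit(t, d, 6)          # orbit of 0 under z^3 + i
orb_b = orbit(zeta * t, d, 6)   # orbit of 0 under z^3 - i

# 0 -> i -> i^3 + i = 0, so both critical orbits are periodic of period 2,
# and the two orbits differ exactly by multiplication by zeta.
assert all(abs(w - zeta * z) < 1e-12 for z, w in zip(orb_a, orb_b))
assert abs(orb_a[1]) < 1e-12    # f^2(0) = 0: the critical orbit is finite
```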
In the world of polynomial dynamics, the analogue of a CM elliptic curve is a PCF polynomial. Indeed, the parallel between the two can also be seen at the level of the arboreal Galois representation associated to a polynomial, which is expected to have smaller image for PCF maps, analogous to the situation for the Galois representation associated to an elliptic curve, which has smaller image in the case of CM elliptic curves; for more details, see [@Jones; @Pink-1; @Pink-2; @Pink-3]. We observe that it is immediate to see that a curve of one of the forms (1)–(3) in the conclusion of Theorem \[main theorem\] contains infinitely many points with both coordinates PCF parameters; the difficulty in Theorem \[main theorem\] is proving that *only* such curves have infinitely many such points. If $C$ does not project dominantly onto one of the axes of ${{\mathbb A}}^2$, it is immediate to see that $C$ must have the form (1) or (2) above. So, the content of Theorem \[main theorem\] is to show that when $C$ projects dominantly onto both axes of ${{\mathbb A}}^2$, and in addition $C$ contains infinitely many points with both coordinates PCF parameters, then $C$ must be of the form (3) as in the conclusion of our result. Theorem \[main theorem\] can be viewed as a generalization of the problem studied in [@Baker-DeMarco], as follows. Given a plane curve $C$, we have two families of polynomials parametrized by the points $t\in C$: ${{\mathbf f}}_{1,t}(z):=z^d+\pi_1(t)$ and ${{\mathbf f}}_{2,t}(z):=z^d+\pi_2(t)$, where $\pi_1$ and $\pi_2$ are the two projections of $C$ onto the two coordinate axes of the affine plane. Then we study under what conditions there are infinitely many $t\in C$ such that $0$ is preperiodic under both ${{\mathbf f}}_{1,t}$ and ${{\mathbf f}}_{2,t}$.
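For a fixed period, the parameters $t$ for which $0$ is periodic under $z^d+t$ are the roots of the polynomial $p_n(t)=f_t^n(0)$, where $p_1(t)=t$ and $p_{k+1}=p_k^d+t$. As a purely numerical illustration (our own sketch, not code from the paper), one can compute these PCF parameters for the quadratic family and confirm that they all lie in the Mandelbrot set, hence in the disk $|t|\le 2$:

```python
import numpy as np

def critical_orbit_poly(d, n):
    """Coefficients (highest degree first) of p_n(t) = f_t^n(0), f_t(z) = z^d + t."""
    p = np.array([1.0, 0.0])          # p_1(t) = t
    for _ in range(n - 1):
        q = p
        for _ in range(d - 1):        # raise p to the d-th power by convolution
            q = np.convolve(q, p)
        q[-2] += 1.0                  # p_{k+1}(t) = p_k(t)^d + t
        p = q
    return p

# PCF parameters with critical orbit of period dividing 5 in the family z^2 + t:
roots = np.roots(critical_orbit_poly(2, 5))   # polynomial of degree 2^4 = 16
assert np.all(np.abs(roots) <= 2.0)           # all lie in the disk containing M_2
```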
More generally, one could consider any two families of rational maps ${{\mathbf f}}_{1,t}$ and ${{\mathbf f}}_{2,t}$ parametrized by points $t$ on some curve $C$, take any two rational maps $c_1,c_2:C{\longrightarrow}{{\mathbb P}}^1$, and ask under what conditions on the curve $C$, on the two families ${{\mathbf f}}_1$ and ${{\mathbf f}}_2$, and on the starting points $c_1$ and $c_2$, there exist infinitely many points $t\in C$ such that both $c_1(t)$ and $c_2(t)$ are preperiodic under the action of ${{\mathbf f}}_{1,t}$, respectively of ${{\mathbf f}}_{2,t}$. However, this is a very hard question, and there are only a handful of results with restricted conditions in the literature; see [@Baker-DeMarco; @GHT-ANT; @Matt-Laura-2; @GHT:preprint]. For example, in [@Baker-DeMarco; @GHT-ANT; @Matt-Laura-2], ${{\mathbf f}}_1$ and ${{\mathbf f}}_2$ are families of polynomials and $C={{\mathbb A}}^1$. In [@GHT:preprint], the family of rational functions, parametrized by a projective curve $C$, must have exactly one degenerate point on $C$, and the families ${{\mathbf f}}_1$ and ${{\mathbf f}}_2$ must satisfy additional technical conditions. In this article, we remove all restrictions on the curve $C$, which parametrizes a family of unicritical polynomials. Also, we note that the result of [@GKN:preprint] relied on the results from [@GHT-ANT]; hence the restriction there to curves $C$ which are graphs of polynomials, since then the families of maps ${{\mathbf f}}_{1,t}$ and ${{\mathbf f}}_{2,t}$ are parametrized by the affine line. One of the main ingredients of our article (and also of all of the above articles) is the arithmetic equidistribution of small points on an algebraic variety (in the case of ${{\mathbb P}}^1$, see [@Baker-Rumely06; @favre-rivera06], in the general case of curves, see [@CLoir], while for arbitrary varieties, see [@Yuan]).
Another main ingredient of this article is the geometry of the generalized Mandelbrot sets ${\mathcal{M}}_d$ (recall that ${\mathcal{M}}_d$ is the set of all $t\in{{\mathbb C}}$ where the orbit of $0$ under $z\mapsto z^d+t$ is bounded). More precisely, we use the combinatorial behaviour of the landing of external rays, from which we get the precise equations for the curves $C$ in Theorem \[main theorem\]. Using Yuan’s powerful theorem [@Yuan], we show that postcritically finite maps equidistribute on the parameter space with respect to the bifurcation measure; see Theorem \[pcf equidistribution\]. Assuming there exist infinitely many points $(a,b)$ on the plane curve $C$ such that both $z^d+a$ and $z^d+b$ are PCF, the potential (escape-rate) functions for the bifurcation measures (with respect to the families $z^d+\pi_1(t)$ and $z^d+\pi_2(t)$, where $\pi_1$ and $\pi_2$ are the two projections of $C$ on the coordinates of ${{\mathbb A}}^2$) are proportional to each other; see Theorem \[escape relation\]. Hence we get an algebraic correspondence on the $d$-th generalized Mandelbrot set ${\mathcal{M}}_d$: for each $(a,b)\in C$, we have that $a\in {\mathcal{M}}_d$ if and only if $b\in {\mathcal{M}}_d$. Using the theory of landing external rays on the $d$-th generalized Mandelbrot set, we prove that the only algebraic correspondences on ${\mathcal{M}}_d$ are linear, given by an equation as in the conclusion of Theorem \[main theorem\]. We are indebted to Laura DeMarco and Thomas Tucker for their careful reading of, and helpful comments on, an early version of this article. We also thank Bjorn Poonen and Curt McMullen for helpful discussions during the writing of this paper. Preliminaries ============= In this section, we introduce terminology and results (e.g. Yuan’s arithmetic equidistribution theorem [@Yuan]) as needed in the later sections.
Though Yuan’s equidistribution theorem works for varieties of all dimensions, we focus on the one dimensional case. The height functions {#height subsection} -------------------- Let $K$ be a number field and ${\overline{K}}$ be the algebraic closure of $K$. The number field $K$ is naturally equipped with a set ${\Omega_K}$ of pairwise inequivalent nontrivial absolute values, together with positive integers $N_v$ for each $v\in {\Omega_K}$ such that - for each $\alpha \in K^*$, we have $|\alpha|_v=1$ for all but finitely many places $v\in {\Omega_K}$. - every $\alpha \in K^*$ satisfies the [*product formula*]{} $$\label{product formula} \prod_{v\in {\Omega_K}} |\alpha|_v^{N_v}=1.$$ For each $v\in {\Omega_K}$, let $K_v$ be the completion of $K$ at $v$, let ${\overline{K}}_v$ be the algebraic closure of $K_v$ and let ${{\mathbb C}}_v$ denote the completion of ${\overline{K}}_v$. We fix an embedding of ${\overline{K}}$ into ${{\mathbb C}}_v$ for each $v\in {\Omega_K}$; hence we have a fixed extension of $|\cdot |_v$ to ${\overline{K}}$. When $v$ is archimedean, then ${{\mathbb C}}_v\cong {{\mathbb C}}$. For any $x\in {\overline{K}}$, the Weil height is $$\label{naive height} h(x)=\frac{1}{[K(x):K]} \sum_{y\in {\operatorname{Gal}}({\overline{K}}/K)\cdot x}~ \sum_{v\in {\Omega_K}} N_v\log^+|y|_v$$ where $\log^+ z=\log \max\{1, z\}$ for any real number $z$. Let $f\in K[z]$ be any polynomial with degree $d\geq 2$. We use the notation $f^n$ for the composition of $f$ with itself $n$ times. As introduced by Call and Silverman [@Call:Silverman], we have the following [*canonical height*]{} for every $x\in {\overline{K}}$ $$\label{call-silverman height} {\hat{h}}_f(x)=\lim_{n\to \infty} \frac{h(f^n(x))}{d^n}$$ where $h(x)$ is the Weil height from (\[naive height\]). Call and Silverman [@Call:Silverman] showed that the above canonical height is well-defined, and moreover, ${\hat{h}}_f(x)\geq 0$ with equality if and only if $x$ is preperiodic under the iteration of $f$.
Hence, $f$ is postcritically finite if and only if all its critical points have canonical height zero. Adelic metrized line bundle and equidistribution ------------------------------------------------ Let ${\mathcal{L}}$ be a line bundle of a nonsingular projective curve $X$ over a number field $K$. As in Subsection \[height subsection\], $K$ is naturally equipped with absolute values $|\cdot|_v$ for $v\in {\Omega_K}$. A [*metric*]{} $\|\cdot\|_v$ on ${\mathcal{L}}$ is a collection of norms, one for each $x\in X(K_v)$, on the fibres ${\mathcal{L}}(x)$ of the line bundle, with $$\|\alpha s(x)\|_v=|\alpha|_v\|s(x)\|_v$$ for any section $s$ of ${\mathcal{L}}$. An [*adelic metrized line bundle*]{} ${\mathcal{\overline{L}}}=\{{\mathcal{L}}, \{\|\cdot\|_v\}_{v\in {\Omega_K}}\}$ over ${\mathcal{L}}$ is a collection of metrics on ${\mathcal{L}}$, one for each place $v\in {\Omega_K}$, satisfying certain continuity and coherence conditions; see [@Zhang:line; @Zhang:metrics]. For example, we can define adelic metrized line bundles for ${{\mathbb P}}^1$ over the line bundle ${\mathcal{L}}={\mathcal{O}}_{{{\mathbb P}}^1}(1)$. Let $s=u_0X_0+u_1X_1$ be a global section of ${\mathcal{L}}={\mathcal{O}}_{{{\mathbb P}}^1}(1)$, where $u_0$ and $u_1$ are scalars. The metrics are defined for each $[x_0:x_1]\in {{\mathbb P}}^1({\overline{K}})$ as $$\|s\left([x_0: x_1]\right)\|_v:=\frac{|u_0x_0+u_1x_1|_v}{\max\{|x_0|_v, |x_1|_v\}}$$ for places $v\in {\Omega_K}$. It can be checked without any difficulty that ${\mathcal{\overline{L}}}:=\{{\mathcal{L}}, \{\|\cdot\|_v\}_{v\in {\Omega_K}}\}$ defined this way is an adelic metrized line bundle over ${\mathcal{L}}$. Moreover, we can work with pullback metrics by an endomorphism of ${{\mathbb P}}^1$. More precisely, let $F\left([x_0: x_1]\right)=\left(F_0(x_0, x_1): F_1(x_0, x_1)\right)$ be an endomorphism of ${{\mathbb P}}^1$ where $F_0$ and $F_1$ are coprime homogeneous polynomials of degree $d$.
The metrics on $s=u_0X_0+u_1X_1$ are defined as $$\|s\left([x_0:x_1]\right)\|^F_v:=\frac{|u_0x_0+u_1x_1|_v}{\max\{|F_0(x_0, x_1)|_v, |F_1(x_0, x_1)|_v\}^{1/d}}.$$ Hence ${\mathcal{\overline{L}}}_F:=\{{\mathcal{L}}, \{\|\cdot\|^F_v\}_{v\in {\Omega_K}}\}$ is an adelic metrized line bundle over ${\mathcal{L}}$. A sequence $\{{\mathcal{L}}, \{\|\cdot\|_{v,n}\}_{v\in {\Omega_K}}\}_{n\geq 1}$ of adelic metrized line bundles over ${\mathcal{L}}$ is convergent to $\{{\mathcal{L}}, \{\|\cdot\|_v\}_{v\in {\Omega_K}}\}$ if for all $n$ and all but finitely many $v\in{\Omega_K}$, $\|\cdot\|_{v,n}=\|\cdot\|_v$, and if $\{\log\frac{\|\cdot\|_{v,n}}{\|\cdot\|_v}\}_{n\geq 1}$ converges to $0$ uniformly on $X(K)$ for all $v\in {\Omega_K}$. It is clear that the limit $\{{\mathcal{L}}, \{\|\cdot\|_v\}_{v\in {\Omega_K}}\}$ is an adelic metrized line bundle. All metrics we consider here are induced by models or uniform limits of metrics from models; see [@Zhang:metrics; @Yuan]. An adelic metrized line bundle ${\mathcal{\overline{L}}}$ is [*algebraic*]{} if there is a model $\mathcal{X}$ of $X$ that induces the metrics on ${\mathcal{L}}$. An algebraic adelic metrized line bundle ${\mathcal{\overline{L}}}$ is [*semipositive*]{} if ${\mathcal{\overline{L}}}$ has semipositive curvatures at archimedean places and non-negative degree on any complete vertical curve of $\mathcal{X}$. An adelic metrized line bundle ${\mathcal{\overline{L}}}$ is semipositive if it is the uniform limit of a sequence of algebraic adelic semipositive metrics over ${\mathcal{L}}$. For a semipositive line bundle ${\mathcal{\overline{L}}}$, we can define a height for each subvariety $Y$ of $X$ (denoted ${\hat{h}}_{\mathcal{\overline{L}}}(Y)$); see [@Zhang:metrics] for more details. Let $X$ be a nonsingular projective curve.
In the case of points on $X$, the height for $x\in X({\overline{K}})$ is given by $$\label{points height} {\hat{h}}_{{\mathcal{\overline{L}}}}(x)=\frac{1}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot x|}\sum_{y\in{\operatorname{Gal}}({\overline{K}}/K)\cdot x}~\sum_{v\in {\Omega_K}}-N_v\log\|s(y)\|_v$$ where $|{\operatorname{Gal}}({\overline{K}}/K)\cdot x|$ is the number of points in the Galois orbits of $x$, and $s$ is any meromorphic section of ${\mathcal{L}}$ with support disjoint from ${\operatorname{Gal}}({\overline{K}}/K)\cdot x$. A sequence of points $x_n\in X({\overline{K}})$ is [*small*]{}, if $\lim_{n\to \infty} {\hat{h}}_{{\mathcal{\overline{L}}}}(x_n)={\hat{h}}_{{\mathcal{\overline{L}}}}(X)$. [@Yuan Theorem 3.1]\[yuan equidistribution\] Suppose $X$ is a projective curve over a number field $K$, and ${\mathcal{\overline{L}}}$ is a metrized line bundle over $X$ such that ${\mathcal{L}}$ is ample and the metric is semipositive. Let $\{x_n\}$ be a non-repeating sequence of points in $X({\overline{K}})$ which is small. Then for any $v\in {\Omega_K}$, the Galois orbits of the sequence $\{x_n\}$ are equidistributed in the analytic space $X^{an}_{{{\mathbb C}}_v}$ with respect to the probability measure $d\mu_v=c_1({\mathcal{\overline{L}}})_v/\deg_{\mathcal{L}}(X)$. \[definition equidistribution remark\] When $v$ is archimedean, $X_{{{\mathbb C}}_v}^{an}$ corresponds to $X({{\mathbb C}})$ and the curvature $c_1({\mathcal{\overline{L}}})_v$ of the metric $\|\cdot\|_v$ is given by $c_1({\mathcal{\overline{L}}})_v=\frac{\partial \overline{\partial}}{\pi i}\log \|\cdot\|_v$. For non-archimedean place $v$, $X_{{{\mathbb C}}_v}^{an}$ is the Berkovich space associated to $X({{\mathbb C}}_v)$, and Chambert-Loir [@CLoir] constructed an analog of curvature on $X_{{{\mathbb C}}_v}^{an}$. 
The precise meaning of the equidistribution above is: $$\lim_{n\to \infty} \frac{1}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot x_n|}\sum_{y\in {\operatorname{Gal}}({\overline{K}}/K)\cdot x_n} \delta_{y}=\mu_v$$ where $\delta_{y}$ is the point mass probability measure supported on $y\in X^{an}_{{{\mathbb C}}_v}$, and the limit is the weak limit for probability measures on the compact space $X^{an}_{{{\mathbb C}}_v}$. Equidistribution of PCF points {#equidistribution section} ============================== In Sections \[equidistribution section\] and \[identical section\], we prove that for a (one dimensional and non-isotrivial) family of unicritical polynomials with degree $d\geq 2$, the set of postcritically finite polynomials equidistributes on the parameter space. The main tool we use in this section is the arithmetic equidistribution theorem introduced in the previous section; for the setup of the equidistribution theorem, we follow [@GHT:preprint]. We start by stating Theorem \[pcf equidistribution\] which is our main goal; in order to do this we need to set up the proper notation. Statement of the equidistribution theorem for PCF parameters {#subsection PCF statement} ------------------------------------------------------------ For the definition of algebraic families of unicritical polynomials, we follow [@DeMarco:heights]. Let $f: X'\times {{\mathbb C}}\to {{\mathbb C}}$ be a one dimensional [*algebraic family*]{} of unicritical polynomials of degree $d\geq 2$. That is, $X'$ is a Zariski dense, open subset of an irreducible, smooth curve $X$ defined over ${{\mathbb C}}$, while $\psi:X'{\longrightarrow}{{\mathbb A}}^1$ is a morphism, and $f$ is a polynomial map of degree $d$ given by $f_{\psi(t)}(z):=f(t,z)=z^d+\psi(t)$, for each $t\in X'({{\mathbb C}})$. We say that $f$ is [*isotrivial*]{} if $\psi$ is a constant map. Since there is nothing to study for an isotrivial family of unicritical polynomials, we focus on the non-isotrivial case.
In addition, we assume $X$ and $X'$ are defined over a number field $K$. If $\psi$ is a morphism defined over $K$, then we call $f$ an algebraic family of unicritical polynomials over the number field $K$. We can view $X$ as a [*parameter space*]{} for an algebraic family of unicritical polynomials. The main goal for us is proving the following result. \[pcf equidistribution\] Let $f: X'\times {{\mathbb C}}\to {{\mathbb C}}$ be a non-isotrivial, one dimensional algebraic family of degree $d\geq 2$ unicritical polynomials over a number field $K$. The set of parameters $t\in X'({\overline{K}})$ for which $f(t, z): {{\mathbb C}}\to {{\mathbb C}}$ is postcritically finite equidistributes on the parameter space $X({{\mathbb C}})$ (with respect to the normalized bifurcation measure). We postpone the proof of this theorem to Subsection \[curvature and bif\] (for a precise definition of the equidistribution, see Remark \[definition equidistribution remark\]). In Subsection \[bif subsection\] we define the (normalized) bifurcation measure and also show its connection with the measures corresponding to certain adelic metrized line bundles as appearing in the work of Yuan [@Yuan]. The key to our proof of Theorem \[pcf equidistribution\] is the equidistribution theorem of Yuan (see Theorem \[yuan equidistribution\] and its consequence for our setting stated in Theorem \[general equidistriibution\]). Metrics on a line bundle {#metrics definition subsection} ------------------------ As previously stated, a non-isotrivial, one dimensional algebraic family of unicritical polynomials over a number field $K$ is uniquely determined by a morphism $\psi: X'\to {{\mathbb A}}^1$, where $X'$ is a Zariski dense open subset of an irreducible, nonsingular projective curve $X$ defined over $K$.
Hence the morphism $\psi: X'{\longrightarrow}{{\mathbb A}}^1$ induces a unique morphism $\psi:X{\longrightarrow}{{\mathbb P}}^1$ (for the sake of simplifying the notation, we use the same notation for both morphisms). Let ${\mathcal{L}}$ be the line bundle on the projective curve $X$ which is the pullback of ${\mathcal{O}}_{{{\mathbb P}}^1}(1)$ by $\psi$, i.e. ${\mathcal{L}}:=\psi^*{\mathcal{O}}_{{{\mathbb P}}^1}(1)$. Next we are going to introduce metrics on this line bundle. Let $S$ be the set of poles of $\psi$ on $X$, i.e. $S$ consists of all $x\in X$ such that $\psi(x)=[1:0]$ (the point at infinity of ${{\mathbb P}}^1$). Let $X_0, X_1$ be the canonical sections on ${{\mathbb P}}^1$, and $s:=\psi^*(u_0X_0+u_1X_1)$ be a section of the line bundle ${\mathcal{L}}$, where $u_0$ and $u_1$ are scalars. For any point $t\in X({{\mathbb C}}_v)\backslash S$, we define the metrics for each $n\geq 1$ and each place $v\in {\Omega_K}$ as follows: $$\label{metric definition} \|s(t)\|_{v, n}:=\frac{|u_0\psi(t)+u_1|_v^{1/d_\psi}}{\max\{1, |f_{\psi(t)}^n(0)|_v\}^{1/(d_{\psi}\cdot d^{n-1})}}$$ where $d_\psi$ is the degree of the morphism $\psi: X\to {{\mathbb P}}^1$. Moreover, for each $t_0\in S\subset X({{\mathbb C}}_v)$, we define $$\label{at poles} \|s(t_0)\|_{v, n}:=v\textup{-$\lim_{t\to t_0}$} \|s(t)\|_{v,n}=|u_0|_v^{1/d_{\psi}}.$$ The last equality in the above formula is obvious once we notice that when $t$ is close to $t_0$, we have $|f^n_{\psi(t)}(0)|_v^{1/ d^{n-1}}\sim |\psi(t)|_v$. \[good reduction\] For any nonarchimedean place $v\in {\Omega_K}$ and any integer $n\geq 1$, we have $$\|\cdot\|_{v,n}=\|\cdot\|_{v,1}$$ on the line bundle ${\mathcal{L}}$. It suffices to show that $\max\{1,|f_{\psi(t)}^n(0)|_v\}=\max\{ 1, |\psi(t)|_v^{d^{n-1}}\}$ for all $t\in X({{\mathbb C}}_v)\backslash S$, where $S$ is the set of poles for $\psi$. We prove this by induction on $n$. Suppose $\max\{1,|f_{\psi(t)}^n(0)|_v\}=\max\{ 1, |\psi(t)|_v^{d^{n-1}}\}$.
If $|\psi(t)|_v\leq 1$, then $|f_{\psi(t)}^n(0)|_v\leq 1$. Hence $|f^{n+1}_{\psi(t)}(0)|_v=|(f^{n}_{\psi(t)}(0))^d+\psi(t)|_v\leq \max\{|(f^{n}_{\psi(t)}(0))^d|_v, |\psi(t)|_v\}\leq 1$ as $v$ is nonarchimedean. Otherwise if $|\psi(t)|_v> 1$ and $|f_{\psi(t)}^n(0)|_v=|\psi(t)|_v^{d^{n-1}}\geq |\psi(t)|_v>1$, then $|f^{n+1}_{\psi(t)}(0)|_v=|(f^{n}_{\psi(t)}(0))^d+\psi(t)|_v=|(f^{n}_{\psi(t)}(0))^d|_v=|\psi(t)|_v^{d^n}$. We define the metric $$\|s(t)\|_v:=\lim_{n\to \infty} \|s(t)\|_{v,n}\text{ for each place }v$$ and we prove next that $\log\|\cdot \|_{v,n}$ converges uniformly to $\log\|\cdot \|_v$. \[convergence prop\] For each place $v\in {\Omega_K}$, $\log\|\cdot\|_{v,n}$ converges uniformly on $X({{\mathbb C}}_v)$ to $\log\|\cdot\|_v$. Fix a place $v\in {\Omega_K}$ and a real number $R$ greater than $3$. Let $$R_{v,n}(t):=\max\{ 1, |f_{\psi(t)}^n(0)|_v\}^{1/d^{n-1}}.$$ To prove this proposition, it suffices to show that $\{\log R_{v,n}(t)\}_{n\geq 1}$, as a sequence of functions on $X({{\mathbb C}}_v)$, converges uniformly. First we prove the uniform convergence assuming $|\psi(t)|_v\leq R$. Then from the definition of $R_{v,n}(t)$, we know that $$\begin{split} R_{v, n+1}^{d^{n}}(t)& \leq (R_{v,n}^{d^{n-1}}(t))^d+R\\ & \leq (R+1)\cdot R_{v,n}^{d^{n}}(t) \textup{, since $R_{v,n}(t)\geq 1$}\\ &\leq 2R\cdot R_{v,n}^{d^{n}}(t) \end{split}$$ Similarly, if $R_{v,n}^{d^{n}}(t)\geq 2R$, then $R_{v, n+1}^{d^{n}}(t)\geq (R_{v, n}^{d^{n-1}}(t))^d-R\geq R_{v, n}^{d^{n}}(t)/2\geq R_{v, n}^{d^{n}}(t)/2R$. On the other hand, if $R_{v,n}^{d^{n}}(t)< 2R$, then $R_{v, n+1}^{d^{n}}(t) \geq R_{v, n}^{d^{n}}(t)/2R$. So in all cases, $$\label{R 0} \frac{1}{(2R)^{1/d^{n-1}}}\leq \frac{R_{v,n+1}(t)}{R_{v,n}(t)}\leq (2R)^{1/d^{n-1}}$$ which yields the uniform convergence of $\{\log R_{v,n}(t)\}_n$ (by taking logarithms in (\[R 0\]) and using a telescoping sum) for all $t\in X({{\mathbb C}}_v)$ satisfying $|\psi(t)|_v\leq R$.
Secondly, we assume $t\in X({{\mathbb C}}_v)\backslash S$ such that $|\psi(t)|_v> R$. We prove by induction on $n$ that $R^{d^{n-1}}_{v,n}(t)\geq |\psi(t)|_v$. Indeed, the case $n=1$ is obvious, while in general (also noting that $R>3$ and $d\ge 2$) we have: $$R^{d^{n}}_{v,n+1}(t)\geq (R^{d^{n-1}}_{v,n}(t))^d-|\psi(t)|_v\geq R^{d^{n}}_{v,n}(t)-\frac{R^{d^{n}}_{v,n}(t)}{2}\geq \frac{R^{d^{n}}_{v,n}(t)}{2}\geq R^{d^{n-1}}_{v,n}(t)\geq |\psi(t)|_v.$$ Then it is easy to see that $$|R_{v,n+1}^{d^{n}}(t)-(R^{d^{n-1}}_{v,n}(t))^d|\leq |\psi(t)|_v\leq \frac{R^{d^{n}}_{v,n}(t)}{2}$$ and so, $$\left|\frac{R_{v,n+1}^{d^{n}}(t)}{R^{d^{n}}_{v,n}(t)}-1\right|\leq \frac{1}{2}$$ or equivalently, $$\label{R 1} \left(\frac{1}{2}\right)^{1/d^{n}}\leq \frac{R_{v,n+1}(t)}{R_{v,n}(t)}\leq \left(\frac{3}{2}\right)^{1/d^{n}}.$$ Taking logarithms in (\[R 1\]) and again using a telescoping sum, we obtain the uniform convergence of $\{\log R_{v,n}(t)\}_n$ for all $t\in X({{\mathbb C}}_v)\setminus S$ such that $|\psi(t)|_v>R$. Finally, using also the convergence at the poles (according to (\[at poles\])), we conclude the proof of Proposition \[convergence prop\]. Equidistribution of small points -------------------------------- We use the same construction as in [@GHT:preprint Section 7]. So, from Lemma \[good reduction\] and Proposition \[convergence prop\], we know that $$\label{definition of ALB}{\mathcal{\overline{L}}}:=({\mathcal{L}}, \{\|\cdot\|_v\}_{v\in {\Omega_K}})$$ is an adelic metrized line bundle which is semipositive.
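At an archimedean place, the uniform convergence established in Proposition \[convergence prop\] can be observed directly. The sketch below (an informal numerical check of ours, not part of the argument) verifies the telescoping bound $|\log R_{n+1}(t)-\log R_n(t)|\le \log(2R)/d^{n-1}$ coming from (\[R 0\]) for a few parameters with $|t|\le R$, in the simplest case $\psi(t)=t$:

```python
import math

def log_R(t, d, n):
    """log R_n(t), with R_n(t) = max(1, |f_t^n(0)|)^(1/d^(n-1)) and f_t(z) = z^d + t."""
    z = 0j
    for _ in range(n):
        z = z**d + t
    return math.log(max(1.0, abs(z))) / d**(n - 1)

d, R = 2, 10.0
for t in (0.25 + 0.5j, -1.8 + 0j, 3.0 - 4.0j):     # all satisfy |t| <= R
    for n in range(1, 8):
        gap = abs(log_R(t, d, n + 1) - log_R(t, d, n))
        # the telescoping bound from the convergence proof
        assert gap <= math.log(2 * R) / d**(n - 1) + 1e-12
```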
The height function ${\hat{h}}_{\mathcal{\overline{L}}}$ on $X({\overline{K}})$ associated to ${\mathcal{\overline{L}}}$ is given by: $$\label{height of line bundle} {\hat{h}}_{{\mathcal{\overline{L}}}}(t):=\sum_{v\in {\Omega_K}}\frac{N_v}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot t|}\cdot \sum_{y\in{\operatorname{Gal}}({\overline{K}}/K)\cdot t}-\log \|s(y)\|_v, \textup{ for any $t\in X({\overline{K}})$}$$ where $s$ is any section of ${\mathcal{L}}=\psi^*{\mathcal{O}}_{{{\mathbb P}}^1}(1)$ which does not vanish on the Galois orbits of $t$. The product formula guarantees that this height does not depend on the section $s$ in the above formula. The adelic metrized line bundle ${\mathcal{\overline{L}}}$ is uniquely determined by the non-constant morphism $\psi: X\to {{\mathbb P}}^1$ (defined over $K$). For convenience, we use a new notation for the height ${\hat{h}}_{\mathcal{\overline{L}}}$ on $X$ associated to the morphism $\psi$: $$\label{psi height} {\hat{h}}_{\psi}(t):={\hat{h}}_{\mathcal{\overline{L}}}(t), \textup{ for $t\in X({\overline{K}})$}.$$ So, as a corollary of Theorem \[yuan equidistribution\] applied to the problem we study, we obtain the following equidistribution theorem for points of height tending to $0$. \[general equidistriibution\] Let $X$ be a nonsingular projective curve over a number field $K$ and $\psi: X\to {{\mathbb P}}^1$ be a non-constant morphism defined over $K$. The adelic metrized line bundle ${\mathcal{\overline{L}}}$ in (\[definition of ALB\]), corresponding to the ample line bundle ${\mathcal{L}}=\psi^*{\mathcal{O}}_{{{\mathbb P}}^1}(1)$ is semipositive. Let $\{t_n\}_{n\geq 1}\subset X({\overline{K}})$ be any non-repeating sequence of small points, i.e. $\lim_{n\to \infty}{\hat{h}}_\psi(t_n)=0$. 
Then for any place $v\in {\Omega_K}$, the Galois orbits of this sequence are equidistributed in the analytic space $X^{an}_{{{\mathbb C}}_v}$ with respect to the probability measure $ d \mu_v=c_1({\mathcal{\overline{L}}})_v/\deg_{{\mathcal{L}}}(X)$. We note that ${\hat{h}}_{\mathcal{\overline{L}}}(X)=0$ because $X$ contains an infinite set of points with height $0$ (see [@Zhang:metrics Theorem (1.10)], Proposition \[height relations\] and Remark \[zariski dense\]). We obtain next the relation between the two heights ${\hat{h}}_\psi$ and ${\hat{h}}_{f_{\psi(t)}}$; we recall that ${\hat{h}}_{f_{\psi(t)}}$ is the canonical height for points on the affine line under the action of the polynomial $$f_{\psi(t)}(z):=z^d+\psi(t).$$ Also, we recall that $S$ is the set of poles for $\psi$. \[height relations\] For each $t\in X({\overline{K}})\setminus S$, we have ${\hat{h}}_\psi(t)=\frac{d}{d_\psi}\cdot {\hat{h}}_{f_{\psi(t)}}(0)$, while ${\hat{h}}_\psi (t)=0$ for each $t\in S$. In particular, ${\hat{h}}_\psi(t)\geq 0$ on $X({\overline{K}})$ with equality if and only if $t$ is a pole of $\psi$ or $f_{\psi(t)}$ is postcritically finite. First, assume that $t\in X({\overline{K}})$ is a pole of $\psi$, i.e. $t\in S$. As $\psi$ is defined over $K$, the Galois orbit of $t$ is contained in $S$. By the product formula together with the definition (\[at poles\]) of the metrics at a pole and the definition (\[height of line bundle\]) of the height, we see that ${\hat{h}}_\psi (t)=0$. Secondly, let $t\in X({\overline{K}})\backslash S$; in this case, the points in the Galois orbit of $t$ are not poles of $\psi$. Let $X_0, X_1$ be the two canonical sections of ${\mathcal{O}}_{{{\mathbb P}}^1}(1)$, and pick $u_0, u_1\in K$ such that the section $u_0X_0+u_1X_1$ of ${\mathcal{O}}_{{{\mathbb P}}^1}(1)$ does not vanish on $[\psi(t):1]\in {{\mathbb P}}^1({\overline{K}})$. For each $y\in {\operatorname{Gal}}({\overline{K}}/K)\cdot t$, this section does not vanish on $\psi(y)$.
Define $s:=\psi^*(u_0X_0+u_1X_1)$, noting that $s$ does not vanish on the Galois orbits of $t\in X({\overline{K}})$. Writing $d_\psi:=\deg(\psi)$, we have $$\begin{split} {\hat{h}}_\psi (t)&= \sum_{v\in {\Omega_K}} \sum_{y\in{\operatorname{Gal}}({\overline{K}}/K)\cdot t}\frac{-N_v\cdot \log \|s(y)\|_v}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot t|}, \textup{ by (\ref{height of line bundle}) and (\ref{psi height})}\\ &= \sum_{v\in {\Omega_K}} \sum_{y\in{\operatorname{Gal}}({\overline{K}}/K)\cdot t}\lim_{n\to \infty}\frac{N_v\cdot \log \max\{1, |f_{\psi(y)}^n(0)|_v\}^{1/(d_\psi\cdot d^{n-1})}}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot t|}, \textup{ by (\ref{metric definition}) and (\ref{product formula})}\\ &= \frac{1}{|{\operatorname{Gal}}({\overline{K}}/K)\cdot t|} \lim_{n\to \infty}\sum_{v\in {\Omega_K}} \sum_{y\in{\operatorname{Gal}}({\overline{K}}/K)\cdot t}\frac{N_v\cdot \log^+ |f_{\psi(y)}^n(0)|_v}{d_\psi\cdot d^{n-1}}\\ &=\frac{d}{d_\psi}\cdot {\hat{h}}_{f_{\psi(t)}}(0), \textup{ by \eqref{naive height} and \eqref{call-silverman height}.} \end{split}$$ The second part of the proposition follows, since ${\hat{h}}_{f_{\psi(t)}}(0)\geq 0$ with equality if and only if the critical point $0$ is preperiodic under iteration of $f_{\psi(t)}$; see [@Call:Silverman]. \[zariski dense\] It is well known that there are infinitely many $t\in {\overline{{{\mathbb Q}}}}$ such that $f_t(z)=z^d+t$ is postcritically finite. Since the morphism $\psi: X\to {{\mathbb P}}^1$ is non-constant, the set of points with zero height (for the height function ${\hat{h}}_\psi$) is Zariski dense on $X({\overline{K}})$. Bifurcation and potential functions {#identical section} =================================== In this section, we study the bifurcation of algebraic families of unicritical polynomials, parametrized by quasi-projective curves. 
Let $X'$ be (as in the previous section) a Zariski open dense subset of an irreducible, nonsingular, projective curve $X$ which is a parameter space for two families of unicritical polynomials. In Theorem \[escape relation\] we prove that if there are infinitely many points in $X'$ such that the corresponding polynomials for these two families are simultaneously PCF, then the two families of polynomials have the same normalized bifurcation measure on $X'({{\mathbb C}})$; this result is a consequence of Theorem \[general equidistriibution\] and the definition of the bifurcation measure (see Subsection \[curvature and bif\]). Bifurcation {#bif subsection} ----------- For a holomorphic family $f(t,\cdot): {{\mathbb P}}^1\to {{\mathbb P}}^1$ of rational functions of degree $d\geq 2$ parametrized by a complex manifold, we have a stable region, a bifurcation locus (which is the complement of the stable region) and a bifurcation measure (or (1,1)-current) on the parameter space; see [@D:current; @D:lyap; @Dujardin:Favre:critical; @McMullen:CDR; @Mane:Sad:Sullivan]. One of the main goals in complex dynamics is to study the stability of holomorphic families (or moduli spaces) of rational functions. In this article, we restrict our study to algebraic families of unicritical polynomials, parametrized by quasi-projective curves. We work with the notation as in Subsection \[subsection PCF statement\]. So, $X$ is a smooth, irreducible curve, $X'$ is a Zariski dense open subset of $X$, and $f:X'\times {{\mathbb C}}\to {{\mathbb C}}$ is an algebraic family of unicritical polynomials of degree $d\geq 2$, i.e. $f_{\psi(t)}(z)=z^d+\psi(t)$ where $\psi: X'{\longrightarrow}{{\mathbb A}}^1$ is a morphism. A point $t_0\in X'$ is [*stable*]{} if the Julia sets $J_{f_{\psi(t)}}$ move holomorphically in a neighbourhood of $t_0$, or equivalently, if $\{f^n_{\psi(t)}(0)\}_{n\geq 1}$ is a normal family of functions on some neighbourhood of $t_0$.
The [*bifurcation locus*]{} on $X'({{\mathbb C}})$ is the set of parameters where $f_{\psi(t)}$ fails to be stable. By definition, the stable region is always an open subset of $X'({{\mathbb C}})$. We define the [*escape-rate function*]{} for $\psi$ as $$G_{\psi}(t):=\lim_{n\to \infty}\frac{1}{d^n}\log^+|f_{\psi(t)}^n(0)|,$$ which is a subharmonic function on $X'({{\mathbb C}})$. It is convenient to extend the function $G_\psi$ to $X({{\mathbb C}})$ by setting $$G_\psi(t)=0\text{ for each }t\in (X\setminus X')({{\mathbb C}}).$$ The [*bifurcation measure*]{} is defined by $$\label{definition of bifurcation measure} d \mu_{\psi}:=dd^c G_{\psi}(t)$$ with $dd^c=\frac{i}{\pi }{\partial \overline{\partial}}$ being the Laplacian operator. For the sake of simplifying the notation, when $\psi(t)=t$ is the identity map, we use $\mu$ and $G(t)$ instead of $\mu_\psi$ and $G_\psi(t)$. The support of the bifurcation measure coincides with the bifurcation locus on $X'({{\mathbb C}})$, and the bifurcation locus is empty if and only if $\psi$ is constant (i.e. $f_{\psi(t)}$ is isotrivial). From the definition of the escape-rate function, we see that $$G_\psi(t)=G(\psi(t)),$$ i.e. $G_\psi=\psi^* G$ is the pullback of the escape-rate function on the complex plane by $\psi$. Hence the bifurcation measure (resp. bifurcation locus) is the pullback of the bifurcation measure (resp. bifurcation locus) on the complex plane $$\mu_\psi=\psi^*\mu,$$ i.e. $\mu_\psi(A)=\mu(\psi(A))$ for $A\subset X'({{\mathbb C}})$ with $\psi$ being injective on $A$. The generalized Mandelbrot sets ------------------------------- Here we deal with the simplest case: $\psi(t)=t$ (i.e. $f_{\psi(t)}(z)=f_t(z)=z^d+t$) and $X'$ itself is the affine (complex) line.
The degree $d$ generalized Mandelbrot set ${\mathcal{M}}_d$ is the set of parameters where the critical point $0$ is bounded under the iterates of $f_t$: $${\mathcal{M}}_d:=\{t\in {{\mathbb C}}: ~ |f^n_t(0)| \not \to \infty \textup{ as $n\to \infty$} \}.$$ When $d=2$, ${\mathcal{M}}_2$ is the classical Mandelbrot set. See Figure \[external rays\] for pictures of ${\mathcal{M}}_2$ and ${\mathcal{M}}_3$. We recall some basic properties of the generalized Mandelbrot sets. Every generalized Mandelbrot set is bounded and simply connected, and there is a unique biholomorphic map $\Phi$ (depending on $d$) from ${{\mathbb C}}\backslash {\mathcal{M}}_d$ to the complement of the closed unit disk ${{\mathbb C}}\backslash \overline{{{\mathbb D}}}$ $$\label{change coordinate} \Phi: {{\mathbb C}}\backslash {\mathcal{M}}_d \tilde{\longrightarrow} {{\mathbb C}}\backslash \overline{{{\mathbb D}}}$$ with $\Phi(t)=t+O(1)$ for $|t|\gg 0$. The Green’s function $G_{{\mathcal{M}}_d}$ for the compact set ${{\mathcal{M}}_d}$ on ${{\mathbb C}}\backslash{\mathcal{M}}_d$ is given by $$\label{Phi} G_{{\mathcal{M}}_d}(t)=\log |\Phi(t)|$$ and it is known that $G_{{\mathcal{M}}_d}(t)=d\cdot G(t)$ (for example, see [@Baker-DeMarco]). Moreover, the escape-rate function satisfies the inequality $G(t)\geq 0$ with equality if and only if $t\in {\mathcal{M}}_d$. The bifurcation locus for $f_t$ is the boundary $\partial {\mathcal{M}}_d$ of ${\mathcal{M}}_d$, and the bifurcation measure is proportional to the harmonic measure for ${\mathcal{M}}_d$. Two algebraic families of unicritical polynomials {#curvature and bif} ------------------------------------------------- Let $X$ be a nonsingular projective curve defined over a number field $K$, let $\psi:X{\longrightarrow}{{\mathbb P}}^1$ be a non-constant morphism, and let $S\subset X$ be the set of poles of $\psi$.
We proceed as in subsection \[metrics definition subsection\] and define the adelic metrized line bundle ${\mathcal{\overline{L}}}$ endowed with metrics $\|\cdot \|_v$ for each $v\in\Omega_K$. We recall that when $v$ is archimedean, ${{\mathbb C}}_v\cong {{\mathbb C}}$ and $X^{an}_{{{\mathbb C}}_v}\cong X({{\mathbb C}})$. The curvature $c_1({\mathcal{\overline{L}}})_v$ of $\|\cdot\|_v$ is given by $c_1({\mathcal{\overline{L}}})_v=-dd^c\log \|\cdot\|_v$. For the rest of this subsection, we fix an archimedean place $v$ and identify ${{\mathbb C}}_v$ with ${{\mathbb C}}$. For $t_0\in X({{\mathbb C}})\backslash S$, we let $s$ be a section on ${{\mathbb P}}^1$ defined over $K$ which does not vanish at $\psi(t_0)$. Hence for $t\in X({{\mathbb C}})$ in a neighbourhood of $t_0$, using the definition of the metrics from Subsection \[metrics definition subsection\], we have $$c_1({\mathcal{\overline{L}}})_v(t)=-dd^c\log \|\psi^*(s)(t)\|_v=dd^c \lim_{n\to \infty}\frac{\log^+|f_{\psi(t)}^n(0)|_v}{d_\psi \cdot d^{n-1}}=\frac{d}{d_\psi}\cdot dd^cG_\psi(t).$$ For the bifurcation measure $\mu_\psi$ on $X({{\mathbb C}})\backslash S$, we have $$\label{measures relations} \mu_\psi=\frac{d_\psi}{d}\cdot \mu_v$$ where $d_\psi$ is the degree of $\psi$. In particular, we consider $\mu_v$ to be the *normalized bifurcation measure*, with respect to which we obtain the equidistribution statement of Theorem \[pcf equidistribution\]. Let $\{t_n\}\subset X'({{\mathbb C}})$ be a sequence of PCF parameters for the algebraic family $f:X'\times {{\mathbb C}}{\longrightarrow}{{\mathbb C}}$ of unicritical polynomials of degree $d$. First of all, we note that each $\psi(t_n)\in{\overline{{{\mathbb Q}}}}$ since $z^d+\psi(t_n)$ is a PCF map; since $\psi$ is defined over ${\overline{{{\mathbb Q}}}}$, it follows that also $t_n\in{\overline{{{\mathbb Q}}}}$. Then by Proposition \[height relations\], $${\hat{h}}_\psi(t_n)=0={\hat{h}}_{f_{\psi(t_n)}}(0).$$ Using Theorem \[general equidistriibution\], we conclude that the points $\{t_n\}$ equidistribute with respect to $\mu_v$, as desired.
Now, we consider two non-constant morphisms $\psi_i: X\to {{\mathbb P}}^1$ for $i=1, 2$, with sets of poles $S_1$ and $S_2$, respectively. They determine two algebraic families of unicritical polynomials $f_{\psi_1(t)}$ and $f_{\psi_2(t)}$ of degree $d\geq 2$. \[escape relation\] Suppose there are infinitely many $t\in X\backslash (S_1\cup S_2)$, such that $f_{\psi_1(t)}$ and $f_{\psi_2(t)}$ are simultaneously postcritically finite. Then $d_{\psi_2}\cdot \mu_{\psi_1}=d_{\psi_1}\cdot \mu_{\psi_2}$ on $X\backslash (S_1\cup S_2)$. Furthermore, on $X({{\mathbb C}})$ $$\label{G1 and G2} d_{\psi_2}\cdot G_{\psi_1}(t)=d_{\psi_1}\cdot G_{\psi_2}(t).$$ The relation between the two bifurcation measures is clear from Theorem \[general equidistriibution\] and (\[measures relations\]). In particular, the two families have the same stable region on $X({{\mathbb C}})\backslash (S_1\cup S_2)$. Let $$H(t):=d_{\psi_2}\cdot G_{\psi_1}(t)-d_{\psi_1}\cdot G_{\psi_2}(t)$$ be the difference of the two continuous subharmonic functions on $X({{\mathbb C}})\backslash (S_1\cup S_2)$. Since $d_{\psi_2}\cdot \mu_{\psi_1}=d_{\psi_1}\cdot \mu_{\psi_2}$, and also using (\[definition of bifurcation measure\]), $H(t)$ is harmonic on $X({{\mathbb C}})\backslash (S_1\cup S_2)$. The pullback (by $\psi_1$ or $\psi_2$) of a connected component of the stable region ${{\mathbb C}}\backslash \partial {\mathcal{M}}_d$ consists of finitely many (at most $d_{\psi_1}$, respectively $d_{\psi_2}$) connected components of the stable region on $X$. As ${{\mathbb C}}\backslash \partial {\mathcal{M}}_d$ consists of infinitely many connected components (see Figure \[external rays\]), so does $\psi_2^{-1}({{\mathbb C}}\backslash \partial {\mathcal{M}}_d)$. Then we can pick one connected component of the stable region on $X({{\mathbb C}})\backslash (S_1\cup S_2)$, such that its images under both $\psi_1$ and $\psi_2$ are stable subsets contained in the generalized Mandelbrot set ${\mathcal{M}}_d$.
Hence for any $t$ in this component, $G_{\psi_1}(t)=G(\psi_1(t))=0=G(\psi_2(t))=G_{\psi_2}(t)$, which yields that $H(t)=0$. So the harmonic function $H(t)$ on $X({{\mathbb C}})\backslash (S_1\cup S_2)$, which is identically zero on some open subset of $X({{\mathbb C}})\backslash (S_1\cup S_2)$, must be zero everywhere, and so (\[G1 and G2\]) follows. \[phiequality\] The set $S$ of poles for $\psi$ is the set of points $t_0\in X$ such that $\lim_{t \to t_0}G_\psi(t)=\infty$. With the same assumptions as in Theorem \[escape relation\], one has $S_1=S_2$ for the sets of poles of $\psi_1$ and $\psi_2$. Moreover, by (\[Phi\]) and (\[G1 and G2\]), for any $t\in X({{\mathbb C}})$ with $\psi_1(t)\in {{\mathbb C}}\backslash {\mathcal{M}}_d$ (hence $\psi_2(t)\in {{\mathbb C}}\backslash {\mathcal{M}}_d$ by proportionality of $G_{\psi_1}$ and $G_{\psi_2}$), we have $$|\Phi(\psi_1(t))|^{d_{\psi_2}}=|\Phi(\psi_2(t))|^{d_{\psi_1}}.$$ Proof of the main theorem ========================= Suppose now that $X$ is an irreducible, nonsingular projective curve satisfying the hypothesis of Theorem \[escape relation\]. By Remark \[phiequality\], we conclude that for all $ t \in X({{\mathbb C}})$, $$\psi_1(t) \in {{\mathbb C}}\setminus {\mathcal{M}}_d \Leftrightarrow \psi_2(t) \in {{\mathbb C}}\setminus {\mathcal{M}}_d,$$ and further that the uniformizing map $\Phi: {{\mathbb C}}\setminus {\mathcal{M}}_d \rightarrow {{\mathbb C}}\setminus \bar{\mathbb{D}}$ satisfies $$|\Phi(\psi_1(t))|^{d_{\psi_2}}=|\Phi(\psi_2(t))|^{d_{\psi_1}}.$$ Write $d_1 = d_{\psi_1}, d_2 = d_{\psi_2}$. Let $X_0$ be a connected, unbounded component of the stable region in $X$; i.e., $X_0$ is a component of the preimage of $\mathbb{C} \setminus {\mathcal{M}}_d$ under $\psi_2$.
The quotient $$\Phi(\psi_1(t))^{d_{2}}/ \Phi(\psi_2(t))^{d_{1}}$$ provides a holomorphic map $X_0 \rightarrow S^1$ (where $S^1$ is the complex unit circle); by the Open Mapping Theorem, this map is constant, so there exists $\eta \in \mathbb{R}$ such that for all $t \in X_0,$ $$\label{phirelation} \Phi(\psi_1(t))^{d_{2}} = e^{2 \pi i \eta} \cdot \Phi(\psi_2(t))^{d_{1}}.$$ Following [@Douady-Hubbard], the standard tool for studying the behavior of the degree $d$ Mandelbrot set ${\mathcal{M}}_d$ is given by the [*external rays*]{} of the map $\Phi$. We define the external ray for an angle $\theta \in \mathbb{R} / \mathbb{Z}$ to be $$\mathcal{R}(\theta) := \Phi^{-1} (\{ re^{2 \pi i \theta} : r >1 \}).$$ We recall some standard facts about external rays; see Chapters 8 and 13 of [@Douady-Hubbard], and [@EMS]. An external ray $\mathcal{R}(\theta)$ is said to be [*rational*]{} if $\theta$ is rational. A point $c \in {\mathcal{M}}_d$ is [*Misiurewicz*]{} if the critical point $0$ of $z^d+c$ is strictly preperiodic, and clearly every PCF point on the boundary of ${\mathcal{M}}_d$ is a Misiurewicz point. [@Douady-Hubbard] All rational rays [*land*]{}; that is, there exists a unique point $c_{\theta} \in \partial {\mathcal{M}}_d$ such that $\lim_{r \rightarrow 1} \Phi^{-1}(re^{2 \pi i \theta}) = c_{\theta}$. Misiurewicz points are contained in the boundary of ${\mathcal{M}}_d$, and every Misiurewicz point is the landing point of at least one rational ray. Let $\alpha$ be a periodic point of $f_c(z)=z^d+c$ with exact period $n$. The [*multiplier*]{} of this cycle is $\lambda:=(f^n_c)'(\alpha)$. The cycle is [*attracting*]{} if $|\lambda|<1$, [*repelling*]{} if $|\lambda| > 1$, and [*parabolic*]{} if $\lambda$ is a root of unity. A parameter $c$ is [*parabolic*]{} if $f_c(z)=z^d+c$ contains a parabolic cycle; in this case, there is a unique parabolic cycle. Parabolic points also lie in $\partial {\mathcal{M}}_d$, and are the landing points of rational rays.
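As a concrete illustration of the trichotomy just described (Misiurewicz parameter, hyperbolic center, or escaping critical orbit), one can classify the critical orbit of $z^d+c$ by its preperiod and period. The sketch below is illustrative only (the helper name and iteration bound are our own choices); it is exact whenever the orbit stays among small Gaussian integers, as it does for classical examples such as $c=-2$ and $c=i$ when $d=2$.

```python
def critical_orbit_type(d, c, max_iter=50):
    """Classify the critical orbit of z^d + c.

    Returns (preperiod, period) if the orbit of 0 is finite (i.e. c is PCF),
    or None if it escapes or remains undecided.  Among PCF parameters, the
    Misiurewicz ones are exactly those with preperiod > 0; preperiod 0 means
    the orbit is periodic, so c is the centre of a hyperbolic component.
    """
    orbit, z = [], 0j
    for _ in range(max_iter):
        if z in orbit:
            i = orbit.index(z)
            return i, len(orbit) - i      # (preperiod, period)
        orbit.append(z)
        w = z
        for _ in range(d - 1):            # exact powering for small Gaussian
            w *= z                        # integer orbits
        z = w + c
        if abs(z) > max(2.0, abs(c)):
            return None                   # orbit escapes: c lies outside M_d
    return None                           # undecided within max_iter
```

For example, $c=-2$ gives the orbit $0\to-2\to2\to2$ (preperiod $2$, period $1$: Misiurewicz), $c=-1$ has preperiod $0$ (a center), and $c=1$ escapes.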
[@Douady-Hubbard], [@EMS] Every parabolic point $c$ is the landing point of either one or two rational rays. If the parabolic cycle of $f_c(z)$ has multiplier $\lambda \ne 1$, then exactly two distinct rational rays land at $c$. If $\mathcal{R}(\theta)$ and $\mathcal{R}(\theta')$ land at the same point, we say $\theta$ and $\theta'$ are a [*landing pair*]{}. If their common landing point $c$ is parabolic with multiplier $\ne 1$, then ${\mathcal{M}}_d \setminus \{ c \}$ consists of two connected components. In this case, the component which does not contain 0 is the [*wake*]{} $w_c$ of $c$, and if $\mathcal{R}(\theta)$ and $\mathcal{R}(\theta')$ land at $c$, the [*width*]{} of the wake $w_c$ is defined to be $|w_c| := \theta' - \theta$, assuming $0 < \theta < \theta' < 1$. For more about external rays, one can refer to [@EMS]. For illustration, see Figure \[external rays\]. Recall that a stable, connected component $H$ in ${\mathcal{M}}_d$ is hyperbolic of period $\ell$ if $z^d+c$ has an attracting cycle of exact period $\ell$ for every $c\in H$. \[standard pair\] For all $k \geq 1,$ $\frac{1}{d^k-1}$ and $\frac{d}{d^k-1}$ are a landing pair, and their landing point $c_k$ lies on the boundary of both the unique period 1 hyperbolic component, and a component of period $k$. The proof of the proposition is by standard arguments; see Proposition 3.5 of [@GKN:preprint] for details. ![ Selected external rays of the degree 2 (left) and degree 3 (right) Mandelbrot sets. Angles of each ray are indicated next to the ray. A select number of hyperbolic components are labeled with the period of the component. External rays drawn by Wolf Jung’s program Mandel.[]{data-label="external rays"}](externalrays.png "fig:"){width="2.25in"} ![ Selected external rays of the degree 2 (left) and degree 3 (right) Mandelbrot sets. Angles of each ray are indicated next to the ray. A select number of hyperbolic components are labeled with the period of the component. 
External rays drawn by Wolf Jung’s program Mandel.[]{data-label="external rays"}](externalrays3.png "fig:"){width="2.25in"} A hyperbolic component $H$ in ${\mathcal{M}}_d$ is equipped with a $(d-1)$-to-1 map $\lambda_H: H \rightarrow \mathbb{D}$, given by the multiplier of the attracting cycle; this map extends continuously to the boundary; the point $0$ has a unique preimage under the multiplier map, known as the [*center*]{} of the hyperbolic component. Given $\frac{p}{q} \in \mathbb{Q} / \mathbb{Z}$, the preimage under $\lambda_H$ of the ray $\{ re^{2 \pi i p/q} : 0 < r < 1 \} \subset \mathbb{D} \setminus \{ 0 \}$ is a collection of $d-1$ disjoint curves in the component (known as [*internal rays*]{}), which land at parabolic points on $\partial H$. In this case, we say that the wake of such a landing point is a [*$\frac{p}{q}$-subwake*]{} of $H$. Conversely, each parabolic point is the landing point of some internal ray. See Figure \[internal rays\] for an illustration. \[t\] ![ Select internal rays of the period $1$ component of the Mandelbrot set, with angles indicated. Internal rays drawn by Wolfram’s Mathematica.[]{data-label="internal rays"}](internalrays.jpg "fig:"){width="2.05in"} For example, if $H$ is the period $1$ component, some preimage of the point $e^{2 \pi i / k} \in \partial \mathbb{D}$ will land at the point $c_k \in \partial H$ of Proposition \[standard pair\]. If $H$ is a hyperbolic component of ${\mathcal{M}}_d$ of period $> 1$, there is a unique point $c_H$ on the boundary of $H$ so that both $\lambda(c_H) = 1$ and ${\mathcal{M}}_d \setminus \{ c_H \}$ consists of two connected components; this is the [*root*]{} of $H$. There will be exactly two rays landing at $c_H$, and we correspondingly define the [*width*]{} of the component $H$ to be the width of the wake at $c_H$.
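The combinatorics of angles discussed here lend themselves to exact checks with rational arithmetic. The following sketch (helper names are our own) computes the exact period of an angle under multiplication by $d$, confirming that the standard pairs $\frac{1}{d^k-1}$, $\frac{d}{d^k-1}$ of Proposition \[standard pair\] have period $k$, and encodes the wake formula recalled immediately below; for $d=2$, taking the period-$1$ component to have width $1$, the predicted $1/k$-subwake widths agree with the widths $\frac{d-1}{d^k-1}$ of the standard-pair wakes.

```python
from fractions import Fraction

def angle_period(theta, d):
    """Exact period of theta in R/Z under multiplication by d
    (well defined when the denominator of theta is coprime to d)."""
    t0 = theta % 1
    t, n = (d * t0) % 1, 1
    while t != t0:
        t, n = (d * t) % 1, n + 1
    return n

def subwake_width(d, k, H_width, q):
    """Width of any p/q-subwake of a period-k hyperbolic component of width
    H_width in M_d, following the wake formula of Bruin-Kaffl-Schleicher
    (d = 2) and Kauko (general d)."""
    return Fraction(H_width) / (d - 1) * Fraction((d**k - 1)**2, d**(q * k) - 1)
```

One also checks that, for fixed $H$, the subwake widths strictly decrease in $q$, in line with Corollary \[1/2 rays\].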
Our key tool towards the main theorem is the so-called [*wake formula*]{}, which was long folklore and was eventually proved by Bruin-Kaffl-Schleicher [@BKL] for $d=2$ and by Kauko [@Kauko] for general $d$: \[width formula\] Let $H$ be a hyperbolic component of ${\mathcal{M}}_d$ with period $k$ and width $|H|$. Let $w_{p/q}$ be any $\frac{p}{q}$-subwake of $H$. Then $$|w_{p/q}| = \frac{|H|}{d-1} \frac{(d^k-1)^2}{d^{qk}-1}.$$ \[1/2 rays\] Let $H$ be a hyperbolic component of the degree $d$ Mandelbrot set. Then the $1/2$-subwakes of $H$ are precisely the set of subwakes of $H$ with maximal possible width. We provide now a key proposition towards Theorem \[main theorem\]. \[etarational\] Under the hypothesis of Theorem \[escape relation\], there exists a component $X_0$ of the preimage of $\mathbb{C} \setminus {\mathcal{M}}_d$ under $\psi_2$ such that the real number $\eta$ of Equation \[phirelation\] is rational. We have two cases. Suppose first that there exists $t \in X$ satisfying the following: 1. $t$ is not a branch point for $\psi_1$ or $\psi_2$, 2. both $\psi_1(t)$ and $\psi_2(t)$ are PCF parameters, and 3. $\psi_1(t)$ or $\psi_2(t)$ is a Misiurewicz point. Suppose without loss of generality that $\psi_1(t)$ is Misiurewicz; since $\psi_1(t) \in \partial {\mathcal{M}}_d$, every open neighborhood of $\psi_1(t)$ contains parameters $c$ with $|\Phi(c)| > 1;$ by Equation \[phirelation\], the same holds for $\psi_2(t)$. Choose $X_0$ so that $t \in X_0$, and write $\mathcal{R}(\theta_1)$ and $\mathcal{R}(\theta_2)$ for the rational external rays landing at $\psi_1(t)$ and $\psi_2(t)$, respectively. By Equation \[phirelation\], we have $$d_2 \theta_1 - d_1 \theta_2 = \eta,$$ and so $\eta$ is rational as desired. Suppose now that the conditions above are not satisfied for any $t \in X$. Call $B$ the set of branch points of the projection maps $\psi_i$.
Since $B$ is a finite set, by hypothesis there exists some $t_0 \in X \setminus B$ such that both $\psi_1(t_0)$ and $\psi_2(t_0)$ are centers of hyperbolic components. In fact, we may choose $t_0$ so that the component $H_2$ of ${\mathcal{M}}_d$ which has center $\psi_2(t_0)$ is far from the branch points in the following sense: there exists an open neighborhood $U$ of $t_0$ such that $\psi_2(U)$ is simply connected, $H_2 \subset \psi_2(U)$, and there exists a parabolic parameter $c \in \partial H_2$ such that $c$ is the root of a component $H'_2$ satisfying $\overline{H'_2} \subset \psi_2(U)$. Therefore we have a well-defined analytic function $$h := \psi_1 \circ \psi_2^{-1} : \psi_2(U) \rightarrow h(\psi_2(U)).$$ Since $h$ is an open map, $h(H_2)$ is a component of $\mathbb{C} \setminus \partial {\mathcal{M}}_d$ which contains the PCF parameter $\psi_1(t_0)$, so is hyperbolic. Since $U$ contains no branch points, $h(c)$ lies on the boundary of two stable components, so by Theorem 4.1 of [@Schleicher1], $h(c)$ is a parabolic parameter. Choose $t \in X$ such that $\psi_1(t) = h(c)$ and $\psi_2(t) = c$, and choose $X_0$ so that $t \in X_0$. By Equation \[phirelation\], any rational rays $\mathcal{R}(\theta)$ landing at $c$ and $\mathcal{R}(\theta')$ landing at $h(c)$ satisfy the relation $$d_2 \theta' - d_1 \theta = \eta,$$ and we conclude that $\eta$ is rational as desired. We are now ready to prove the remaining significant result towards Theorem \[main theorem\]. \[trivial relation\] Assume the hypothesis of Theorem \[escape relation\]. Then there exists an open subset $U$ of the complex plane containing infinitely many PCF parameters on which an analytic branch of $\psi_1 \circ \psi_2^{-1}$ is given by $z \mapsto \zeta z$, for some $(d-1)$st root of unity $\zeta$. Fix an integer $m > 2$.
We will define a neighborhood $U(m)$ as follows: define $c_m$ to be the landing point of the external ray $\mathcal{R}(\frac{d}{d^m-1})$; by Proposition \[standard pair\], this point lies on the boundary of the period $1$ hyperbolic component. By the preceding discussion, there also exists an internal ray $r_m$ landing at $c_m$, as well as the internal ray $r_0$ of angle zero which lands at $c_0 = (d-1)/d^{d/(d-1)}$. The union $$C := r_0 \cup r_m \cup \mathcal{R}(0) \cup \mathcal{R}\left( \frac{d}{d^m-1} \right) \cup \{ 0 \}$$ is a curve such that $\mathbb{C} \setminus C$ has two (simply connected) components. Define $U(m)$ to be the component of $\mathbb{C} \setminus C$ which contains the hyperbolic component with root $c_m$; in other words, the component containing parameters of arbitrarily small argument (see Figures \[external rays\] and \[internal rays\]). By Proposition \[standard pair\], $U(m)$ contains infinitely many PCF parameters. Note now that $m$ may be chosen sufficiently large so that $U(m)$ omits the images of the branch points of $\psi_1$ and $\psi_2$. Therefore we may define an analytic branch $h(z) = \psi_1 \circ \psi_2^{-1}$ on $U(m)$. By Equation \[phirelation\], $h$ sends external rays to external rays, and there exists $0 < \ell \leq d_2$ (given by choice of branch) such that $h$ acts on external angles $0 < \theta < \frac{d}{d^m-1}$ by $$\theta \mapsto \frac{d_1}{d_2} \theta + \eta + \frac{\ell}{d_2}.$$ Denote such a choice of $U(m)$ simply by $U$, and write $\eta + \frac{\ell}{d_2} = \frac{a}{b}$ in lowest terms (this is possible by Proposition \[etarational\]). Note then that if $\mathcal{R}(\theta_1)$ and $\mathcal{R}(\theta_2)$ land together in $U$, their images under the continuous map $h$ must land together, and the wake of the image rays has width $\frac{d_1}{d_2} |\theta_2 - \theta_1|.$ \[prop same period\] Let $U$ and $h$ be defined as above. Then every hyperbolic component of $U$ is sent by $h$ to a hyperbolic component of the same period.
Suppose $H$ is a hyperbolic component of period $N$ contained in the neighborhood $U$ with root $c$ and wake $W$. Since $h$ is an open map on $U$ which preserves rational external rays, $h(H)$ is also a hyperbolic component, say of period $N'$, root $h(c)$, and wake $h(W)$. Choose any landing point of an internal ray of $H$ with angle $1/2$; call this point $c_{1/2}$, and its subwake $H_{1/2}$, noting that $c_{1/2}$ is the root of a hyperbolic component of period $2N$. Since $h$ has a linear action on external angles, Corollary \[1/2 rays\] guarantees that $h(H_{1/2})$ is one of the $1/2$-subwakes of $h(H)$; call it $h(H)_{1/2}$. The width formula of Proposition \[width formula\] computes: $$\frac{|h(H)|}{d-1} \frac{(d^{N'}-1)^2}{d^{2N'}-1} = |h(H)_{1/2}| = |h(H_{1/2})| = \frac{d_1}{d_2} |H_{1/2}| = \frac{d_1}{d_2} \frac{|H|}{d-1} \frac{(d^N-1)^2}{d^{2N}-1}.$$ Since $h$ scales wake widths by $\frac{d_1}{d_2}$, we have $|h(H)| = \frac{d_1}{d_2} |H|,$ so the right- and left-hand sides of the equation above imply that $N = N'$, as desired. For any hyperbolic component of period $N$, the rays landing at the root of the component have denominators which divide $d^N-1$ (this is again standard; see Chapter 8 of [@Douady-Hubbard]), and therefore the width of a period $N$ component has denominator which is a divisor of $d^N-1$. Since $h$ fixes the period of any hyperbolic component in $U$, and $U$ contains components of period $N$ and width $\frac{d-1}{d^N-1}$ for all $N > m$ (see Proposition \[standard pair\]), we deduce that $\frac{d_1}{d_2} \cdot \frac{d-1}{d^N-1}$ has denominator which is a divisor of $d^N-1$ for all $N > m$. We conclude that $d_2$ divides $d_1 (d-1)$, and so the map that $h$ induces on $\mathbb{R} / \mathbb{Z} \cap (0, \frac{d}{d^m-1})$ is simply $$\theta \mapsto \frac{k}{d-1} \theta + \frac{a}{b}$$ for some $1 \leq k \leq d-1$.
By the same arguments and choosing $N$ sufficiently large, the ray $$h\left(\mathcal{R}\left(\frac{d-1}{d^N-1}\right)\right) = \mathcal{R}\left(\frac{d-1}{d^N-1}\cdot \frac{k}{d-1} + \frac{a}{b}\right)$$ lands at a hyperbolic component of period $N$, and so $\frac{k}{d^N-1} + \frac{a}{b}$ has denominator dividing $d^N-1$. Since $a/b$ is in lowest terms, we conclude that $b$ divides $d^N-1$ for all $N$ sufficiently large. Choosing $M$ and $N$ large and coprime, the greatest common divisor of $d^M-1$ and $d^N-1$ is $d-1$, so we have $b \mid (d-1)$. We now have integers $k$, $j$ so that $h$ acts on external angles of $U$ by $$\theta \mapsto \frac{k}{d-1} \theta + \frac{j}{d-1}.$$ However, we know that the ray of angle $\frac{1}{d^N-1}$ maps to a ray with denominator dividing $d^N-1$ for all $N$ sufficiently large; in other words, $$k + j(d^N-1) \ \equiv 0 \ {\operatorname{mod}}(d-1)$$ for all $N$ sufficiently large. Thus $k = d-1$, and so $h$ acts on external angles by translation by $\frac{j}{d-1}$; that is, $h$ acts on external rays as multiplication by some $(d-1)$st root of unity $\zeta$. By definition of $\Phi$, $$\Phi(\zeta z) = \zeta \Phi(z)$$ for all $z \in \mathbb{C} \setminus {\mathcal{M}}_d$, so $h$ coincides with the map $z \mapsto \zeta z$ on $(\mathbb{C} \setminus {\mathcal{M}}_d) \cap U$, and thus on the entire domain $U$. The proof of the main theorem follows easily from Theorem \[trivial relation\]. First we note that indeed, if $C$ has the form (1), (2) or (3) as in the conclusion of Theorem \[main theorem\], then it contains infinitely many points $(a,b)$ with both coordinates PCF parameters, i.e., both $z^d+a$ and $z^d+b$ are PCF polynomials. For curves of the form (1) or (2), this fact is obvious, while for curves of the form (3), we note that once $f_c(z):=z^d+c$ is PCF, then also $f_{\zeta c}(z):=z^d+\zeta c$ is PCF (where $\zeta^{d-1}=1$) because $\zeta^{-1}f_{\zeta c}(\zeta z) = f_c(z)$.
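Two elementary facts used in the arguments above can be verified directly: the identity $\gcd(d^M-1,\,d^N-1)=d^{\gcd(M,N)}-1$ (so coprime exponents give $\gcd$ equal to $d-1$), and the conjugation $\zeta^{-1}f_{\zeta c}(\zeta z)=f_c(z)$ for $\zeta^{d-1}=1$. The following Python sketch (the sample values of $d$, $c$, $z$ are arbitrary) performs both checks.

```python
import cmath
import math

# (1) the gcd identity behind the conclusion b | (d - 1)
for d in (2, 3, 5):
    for M in range(1, 8):
        for N in range(1, 8):
            assert math.gcd(d**M - 1, d**N - 1) == d**math.gcd(M, N) - 1

# (2) conjugation symmetry: zeta^{-1} f_{zeta c}(zeta z) = f_c(z) when zeta^{d-1} = 1,
#     since (zeta z)^d + zeta c = zeta (z^d + c)
d, c, z = 4, 0.3 + 0.2j, 1.1 - 0.7j
for m in range(d - 1):
    zeta = cmath.exp(2j * math.pi * m / (d - 1))
    assert abs(((zeta * z)**d + zeta * c) / zeta - (z**d + c)) < 1e-9
```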
Also, there exist infinitely many $c\in{\overline{{{\mathbb Q}}}}$ such that $f_c$ is PCF. So, from now on, assume that $C$ is an irreducible plane curve containing infinitely many $(a, b)$ such that $z^d+a$ and $z^d+b$ are both PCF. Since the PCF parameters are algebraic numbers, we conclude that $C$ is defined over ${\overline{{{\mathbb Q}}}}$. If $C$ does not project dominantly onto one of the two coordinates of ${{\mathbb A}}^2$ then, without loss of generality, we may assume $C=\{t_0\}\times {{\mathbb A}}^1$ for some $t_0\in{\overline{{{\mathbb Q}}}}$. But then by the hypothesis satisfied by $C$, we conclude that $t_0$ is a PCF parameter, i.e., $C$ has the form (1) as in the conclusion of Theorem \[main theorem\]. So, from now on, we assume $C$ projects dominantly onto both coordinates of ${{\mathbb A}}^2$. Let $\pi : X \rightarrow C$ be a nonsingular projective model of $C$; therefore $X$ is defined over some number field $K$. Write $\pi_1$ and $\pi_2$ for the projection maps of $C$ onto the two axes of ${{\mathbb A}}^2$, and let $\psi_i = \pi_i \circ \pi$ for $i=1,2$. Then we can apply Theorem \[escape relation\] and deduce Theorem \[trivial relation\]. Thus there exists a $(d-1)$st root of unity $\zeta$ such that for infinitely many $c \in {\mathcal{M}}_d$, $\zeta c = \psi_1(t_c)$ and $c = \psi_2(t_c)$ for some $t_c \in X({{\mathbb C}})$; accordingly, there exist infinitely many $c\in{{\mathbb C}}$ such that $(\zeta c, c) \in C({{\mathbb C}})$. Since $C$ is irreducible, we conclude the proof of Theorem \[main theorem\]. Y. André, *Finitude des couples d’invariants modulaires singuliers sur une courbe algébrique plane non modulaire*, J. Reine Angew. Math. **505** (1998), 203–208. M. Baker and L. DeMarco, *Preperiodic points and unlikely intersections*, Duke Math. J. **159** (2011), 1–29. M. Baker and L. DeMarco, *Special curves and postcritically-finite polynomials*, Forum Math. PI **1** (2013), e3, 35 pages. M. Baker and R.
Rumely, *Equidistribution of small points, rational dynamics, and potential theory*, Ann. Inst. Fourier (Grenoble) **56(3)** (2006), 625–688. E. Bombieri, D. Masser and U. Zannier, *Intersecting a curve with algebraic subgroups of multiplicative groups*, Int. Math. Res. Not. **1999** (1999), 1119–1140. H. Bruin, A. Kaffl and D. Schleicher, *Existence of quadratic Hubbard trees*, Fund. Math. **202** (2009), 251–279. A. Chambert-Loir, *Mesures et équidistribution sur les espaces de Berkovich*, J. Reine Angew. Math. **595** (2006), 215–235. G. Call and J. Silverman, *Canonical heights on varieties with morphisms*, Compositio Math. **89** (1993), 163–205. L. DeMarco, *Dynamics of rational maps: a current on the bifurcation locus*, Math. Res. Lett. **8** (2001), 57–66. L. DeMarco, *Dynamics of rational maps: [L]{}yapunov exponents, bifurcations, and capacity*, Math. Ann. **326** (2003), 43–73. L. DeMarco, *Bifurcations, intersections, and heights*, Preprint, 2014. A. Douady and J.H. Hubbard, *Étude dynamique des polynômes complexes*, Publications Mathématiques d’Orsay, 1984-1985. R. Dujardin and C. Favre, *Distribution of rational maps with a preperiodic critical point*, Amer. J. Math. **130** (2008), 979–1032. D. Eberlein, S. Mukherjee and D. Schleicher, *Rational parameter rays of the Multibrot sets*, Preprint, 2014. C. Favre and J. Rivera-Letelier, *Équidistribution quantitative des points de petite hauteur sur la droite projective*, Math. Ann. **335** (2006), 311–361. D. Ghioca, L.-C. Hsia and T. J. Tucker, *Preperiodic points for families of polynomials*, Algebra $\&$ Number Theory **7** (2012), 701–732. D. Ghioca, L.-C. Hsia and T. J. Tucker, *Preperiodic points for families of rational maps*, Proc. London Math. Soc. **110** (2015), 395–427. D. Ghioca, H. Krieger and K. Nguyen, *A case of the Dynamical André-Oort Conjecture*, to appear in the Intern. Math. Res. Not., 2014. R. Jones, *Galois representations from pre-image trees: an arboreal survey*, Pub. Math. Besançon, 2013, 107–136. V. Kauko, *Trees of visible components in the Mandelbrot set*, Fund.
Math. **164** (2000), 41–60. D. Masser and U. Zannier, *Torsion anomalous points and families of elliptic curves*, Amer. J. Math. **132** (2010), 1677–1691. D. Masser and U. Zannier, *Torsion points on families of squares of elliptic curves*, Math. Ann. **352** (2012), 453–484. C. McMullen, *Complex Dynamics and Renormalization*, Princeton University Press, Princeton, NJ, 1994. R. Mañé, P. Sad and D. Sullivan, *On the dynamics of rational maps*, Ann. Sci. Ec. Norm. Sup. **16** (1983), 193–217. R. Pink, *Profinite iterated monodromy groups arising from quadratic morphisms with infinite postcritical orbits*, preprint, 2013, available on arxiv. R. Pink, *Profinite iterated monodromy groups arising from quadratic polynomials*, preprint, 2013, available on arxiv. R. Pink, *Finiteness and liftability of postcritically finite quadratic morphisms in arbitrary characteristic*, preprint, 2013, available on arxiv. D. Schleicher, *On fibers and local connectivity of Mandelbrot and Multibrot sets*, Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot, Proc. Sympos. Pure Math. **72**, Part 1, Amer. Math. Soc., Providence, RI, 2004, pp. 477–507. X. Yuan, *Big line bundles over arithmetic varieties*, Invent. Math. **173** (2008), 603–649. U. Zannier, *Some problems of unlikely intersections in arithmetic and geometry*, Annals of Mathematics Studies, vol. 181, Princeton University Press, Princeton, NJ, 2012, With appendixes by David Masser. S. Zhang, *Positive line bundles on arithmetic varieties*, J. Amer. Math. Soc. **8** (1995), 187–221. S. Zhang, *Small points and adelic metrics*, J. Alg. Geom. **4** (1995), 281–300. [^1]: The research of H.K. was partially supported by an NSF grant, while the research of D.G., K.N. and H.Y. was partially supported by NSERC grants.
--- abstract: 'We consider two minimal models of active fluid droplets that exhibit complex dynamics including steady motion, deformation, rotation and oscillating motion. First we consider a droplet with a concentration of active contractile matter adsorbed to its boundary. We analytically predict activity driven instabilities in the concentration profile, and compare them to the dynamics we find from simulations. Secondly, we consider a droplet of active polar fluid of constant concentration. In this system we predict motion and deformation of the droplets in certain activity ranges due to instabilities in the polarisation field. Both these systems show spontaneous transitions to motility and deformation which resemble dynamics of the cell cytoskeleton in animal cells.' author: - 'Carl A. Whitfield' - 'Rhoda J. Hawkins' title: 'Instabilities, motion and deformation of active fluid droplets' --- Introduction {#sec:intro} ============ In animal cells, motility and morphology are strongly coupled and are largely due to the activity of the cell cytoskeleton. Research into these areas is broad and has many applications, from studying metastatic cancer cells to wound healing. In order to mimic aspects of these systems we model, both analytically and numerically, examples of active cytoskeletal material confined to droplets. An active material is defined as one driven out of equilibrium by the internal energy of its constituent particles [@Marchetti2013]. We use the hydrodynamic model of an active polar fluid outlined in [@Kruse2004; @Kruse2005; @Furthauer2012] to model the behaviour of such a material at long length and time scales.
Over the past decade there have been a number of calculations of instabilities and non-equilibrium steady states in active liquid crystals: in thin or 2D flat films [@Kruse2004; @Voituriez2005; @Voituriez2006; @Kruse2006; @Bois2011; @Sarkar2015a], in thin cortical layers [@Zumdieck2005; @Hawkins2011; @Joanny2013; @Khoromskaia2015], confined in emulsion droplets or vesicles [@Callan-Jones2008a; @Tjhung2012; @Joanny2012a; @Blanch-Mercader2013; @Giomi2014; @Whitfield2014; @Marth2014; @Tjhung2015], and in simplified models of animal and plant cells [@Hawkins2009; @Woodhouse2012; @Callan-Jones2013; @Kumar2014; @Turlier2014; @Callan-Jones2016]. In this paper we model deforming active droplets immersed in a passive fluid using linear perturbation theory. By making justified assumptions, we are able to predict non-equilibrium phase transitions in both of the systems we consider, and to predict how the droplet deformation couples to these. These analytical calculations are presented for the three-dimensional case and also repeated for the two-dimensional analogue, where we find qualitatively similar results. Numerical simulations use the two-dimensional Immersed Boundary method used in [@Whitfield2016] and are directly compared to the two-dimensional analytical calculation. The models presented here are relevant to active systems *in vitro* (constructed using techniques in [@Bendix2008; @Sanchez2012; @Keber2014]) as well as mimicking aspects of cell dynamics. The two cases we consider correspond to two limits of active cytoskeletal behaviour (see figure \[fig:sketch\]) that represent the minimum degrees of freedom required to observe interesting out-of-equilibrium dynamics. In both cases we consider a 1-component model used originally in [@Kruse2004], which allows us to investigate the coupling with droplet shape dynamics analytically.
The linear stability analyses are restricted by assumptions which enable an analytical understanding of the mechanisms involved in producing the observed behaviour in numerical simulations. ![2D schematic of **(a)** Active fluid interface: active concentration $c$ on the droplet interface coupled to the internal concentration $\rho$. **(b)** Active polar droplet: constant density of active filaments with local average polarisation ${\boldsymbol{p}}$ (red arrows). Blue arrows indicate active contractile force dipoles.[]{data-label="fig:sketch"}](fig1.pdf){width="\columnwidth"} Firstly, we consider an isotropic layer of contractile active material confined to an interface between two fluids, which has physical similarities to the actomyosin cortex in cells. The stresses generated are confined to the plane of the interface giving rise to flows in the surrounding fluid and deformation of the interface itself. Interestingly, diffusion of the active particles through the bulk can result in a change in which perturbation mode has the lowest critical activity, from a single-peak instability driving droplet motion to higher modes which produce symmetric deformation. Furthermore, simulations show that advection through the bulk can stabilise such modes. This suggests that droplets with an active interface could spontaneously deform and possibly divide due to the feedback from the fluid flow. Secondly, we consider a highly ordered active polar liquid crystal confined inside a fluid droplet. In this case the polarisation gradients direct the internal stresses giving rise to fluid flow. A polar anchoring condition at the interface means that the deformation of the droplet and polarisation field are strongly coupled.
We find in this case there is a separation of swimming and stationary deforming modes, such that extensile activity destabilises the defect position and results in a swimming drop, whereas contractile activity stabilises the centred defect position and gives rise to deformations of the interface. Active Fluid Interface {#sec:model} ====================== In this section we consider a fluid droplet coated by active particles on its interface that are isotropically ordered. Such systems have been found to self-organise in *in vitro* experiments using reconstituted active cytoskeletal material contained in vesicles or droplets [@Tsai2011; @Shah2014]. These experimental systems are a useful tool for understanding the more complex dynamics of cells. The model in this section makes predictions of interesting active phenomena including symmetry breaking and droplet deformation that are relevant to the field of cell mechanics. Model {#sec:abmodel} ----- We consider a fluid droplet described by an interfacial surface $\Sigma$ separating the contained fluid domain $\Omega_0$ and external fluid domain $\Omega_1$ with viscosities $\eta_0$ and $\eta_1$ respectively. We define a concentration of active matter $c(\theta,\phi,t)$ on the interface $\Sigma$, which alters the droplet surface tension $\gamma$ such that $\gamma = \gamma_0 - {\zeta_c}\, c - Bc^2/2$. $\gamma_0$ is the bare surface tension, ${\zeta_c}$ is the activity (${\zeta_c}<0$ for contractile) and $B$ sets the strength of a passive repulsion. This higher-order repulsive term represents a passive pressure, similar to that in [@Joanny2013], which parametrises the compressibility of the active fluid on the interface. We denote the effective surface tension $\gamma'_0 = \gamma_0 - {\zeta_c}\, c_0 - Bc_0^2/2$, which is the value of $\gamma$ in the stationary state.
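As a quick numerical illustration of this surface tension law (parameter values here are chosen for illustration only, not taken from the paper), note that for a net contractile interface the tension *increases* with the active concentration near $c_0$, which is the feedback that drives the instabilities analysed below:

```python
# Sketch of gamma(c) = gamma0 - zeta_c*c - B*c^2/2 and the effective activity
# zeta_eff = zeta_c + B*c0. All parameter values are illustrative assumptions.
gamma0, zeta_c, B, c0 = 1.0, -0.6, 0.1, 1.0    # zeta_c < 0: contractile

def gamma(c):
    return gamma0 - zeta_c * c - B * c**2 / 2

zeta_eff = zeta_c + B * c0      # negative => net contractile interface
gamma_eff = gamma(c0)           # effective (stationary state) tension gamma'_0

# d(gamma)/dc at c = c0 equals -zeta_eff: with zeta_eff < 0 the tension rises
# where the active concentration is higher, so tangential tension gradients
# pull interface material toward concentration peaks, amplifying them.
h = 1e-6
dgamma_dc = (gamma(c0 + h) - gamma(c0 - h)) / (2 * h)
print(zeta_eff, gamma_eff, dgamma_dc)
```
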
The force density on the droplet interface is then: ${\boldsymbol{F}} = \kappa \gamma {\boldsymbol{\hat{n}}} + {\left( \nabla_s \gamma \right)}{\boldsymbol{\hat{t}}}_i $, where ${\boldsymbol{\hat{n}}} = {\boldsymbol{\hat{n}}}(\theta,\phi,t)$ is the outward surface normal, ${\boldsymbol{\hat{t}}}_i = {\boldsymbol{\hat{t}}}_i(\theta,\phi,t)$ are the orthogonal surface tangent vectors, $\kappa = {\nabla \cdot}{{\boldsymbol{\hat{n}}}}$ is the local curvature, and $\nabla_s = ({\boldsymbol{\hat{t}}}_i \cdot \nabla)$ is the surface gradient. It is useful to define the effective activity ${\tilde{\zeta}}={\zeta_c}+Bc_0$ which defines the scale of the force ${\boldsymbol{F}}$ for small deviations of the concentration $c$ from $c_0$. Thus, the interface has net contractility for ${\tilde{\zeta}}<0$. The only forces acting on the system originate at the droplet surface $\Sigma$, with position ${\boldsymbol{R}} = R(\theta,\phi,t)\hat{{\boldsymbol{e}}}_r$, assuming this is single-valued with respect to the angular coordinates ($\theta$,$\phi$). Thus, the resulting force density in the fluid is ${\boldsymbol{f}}^{\rm ext}(r,\theta,\phi,t) = {\boldsymbol{F}} \delta{\left[ r - R(\theta,\phi,t) \right]}$. We ignore inertia, taking the zero Reynolds number limit $Re = 0$; the incompressible fluid flow (${\nabla \cdot}{\boldsymbol{v}} = 0$) is then described by the Stokes equation $\eta_n \nabla^2{\boldsymbol{v}} + {\boldsymbol{f}}^{\rm ext} - \nabla P = 0$, where $n=0,1$ denotes the domain $\Omega_0$ or $\Omega_1$, ${\boldsymbol{v}}={\boldsymbol{v}}(r,\theta,\phi,t)$ is the fluid velocity, ${\boldsymbol{f}}^{\rm ext}={\boldsymbol{f}}^{\rm ext}(r,\theta,\phi,t)$ denotes any external force densities and $P=P(r,\theta,\phi,t)$ is the hydrostatic pressure. We take the limit of a zero-thickness interface and assume flow and stress continuity between the two fluids $\Omega_0$ and $\Omega_1$.
This means the active particles act as an active surfactant, rather than a thin viscous layer (as in [@Bois2011; @Hawkins2011; @Joanny2013; @Khoromskaia2015; @Turlier2014; @Callan-Jones2016]), which allows us to study the dynamics of deformation in a 3D viscous environment analytically. The evolution of the surface concentration $c$ with respect to time $t$ is: $$\begin{aligned} \label{dcdt} \dot{c} = -\nabla_s \cdot (c {\boldsymbol{v}}_b) + D\nabla_s^2 c - k_{\rm off}c + k_{\rm on}\rho_b \,,\end{aligned}$$ where $\dot{c}=\partial c/\partial t$, ${\boldsymbol{v}}_b = {\boldsymbol{v}}(r=R,\theta,\phi,t)$ is the interface flow velocity, $D$ is the diffusion constant for the active particles on $\Sigma$, and $k_{\rm on,off}$ are binding and unbinding rates of the particles to the interface. The concentration of unbound particles in the bulk of the drop is denoted $\rho = \rho(r,\theta,\phi,t)$. Binding occurs at the interface where we denote the concentration of unbound particles $\rho_b = \rho(r=R,\theta,\phi,t)$. Note that $k_{\rm on}$ has units of velocity, as it contains the adsorption depth parameter. We assume that the active particles are insoluble in the external fluid, and so the evolution of the bulk concentration $\rho$ is given by: $$\begin{aligned} \label{drhodt} \dot{\rho} = -({\boldsymbol{v}}\cdot\nabla)\rho + D_\rho\nabla^2 \rho \end{aligned}$$ with the boundary condition $D_\rho({\boldsymbol{n}}\cdot\nabla)\rho = k_{\rm off}c - k_{\rm on}\rho_b$ at $r=R$ (with ${\boldsymbol{n}}$ the outward normal), to ensure conservation of mass. The parameter $D_\rho$ is the bulk diffusion constant of the active particles. Here we assume that the active particles only generate stresses at the interface, so the bulk concentration acts as a buffer to recycle the surface concentration. Linear Stability Analysis {#sec:ablsa} ------------------------- In this section we present the results of a linear perturbation to the stationary ground state of the droplet.
The system is in a stationary (velocity ${\boldsymbol{v}} = 0$) steady state when the interface is spherical (fixed radius $R = R_0$) with a homogeneous concentration of active particles ($c = c_0$). Then the bulk concentration is $\rho_0 = k_{\rm off} c_0/k_{\rm on}$ inside the drop, and the hydrostatic pressure inside is $P = P_{\rm ext} + (2\gamma_0 - {\tilde{\zeta}}c_0)/R_0$ where $P_{\rm ext}$ is the stationary state pressure in the external fluid. We perform a linear stability analysis by applying a small perturbation to the variables defined at the interface $R$ and $c$ of the form: $\tilde{g} = g_0 + \sum_{l=1}^{\infty}\sum_{m=-l}^{l}\delta g_{lm}(t)Y_l^m(\theta,\phi)$, where $Y_l^m$ are the spherical harmonic functions and $\delta g_{lm} \ll g_0$. To first order, the resulting flow is given by Lamb’s solutions for flow around a sphere, which can be expressed as vector spherical harmonics [@Lamb1945]. Solving the Stokes equation with flow and stress continuity conditions at the droplet interface gives expressions for $\delta v^{(i)}_{lm}$ (as defined in [@Carrascal1991] and Supplementary Information appendix A) in terms of $\delta c_{lm}$ and $\delta R_{lm}$. The perturbation on the interface is also coupled to a perturbation of the internal concentration $\rho$ such that $$\begin{aligned} \rho = {\left[ \frac{k_{\rm off} c_0}{k_{\rm on}} + \sum_{l=1}^{\infty}\sum_{m=-l}^{l}\delta\rho_{lm}(r,t)Y_l^m \right]} \, .\end{aligned}$$ We obtain analytical solutions for the stability by assuming a quasistatic solution for $\delta \rho$ (taking $\dot{\rho}=0$). This assumption corresponds to a fast relaxation of the bulk concentration $\rho$ compared to the timescale of evolution of the surface concentration $c$.
At linear order, the solution for $\delta \rho$ simply satisfies the diffusion equation with a flux condition at the boundary: $$\begin{aligned} \delta \rho = \frac{k_{\rm off} R_0 \delta c}{D_\rho l + k_{\rm on} R_0} {\left( \frac{r}{R_0} \right)}^l \,.\end{aligned}$$ This solution enables us to predict the effect of the feedback by diffusion through the bulk analytically. The full solutions to the coupled linear equations are solved exactly with Bessel functions as in [@Hawkins2011]; however, these solutions do not permit an analytical calculation of the stability condition, hence we do not consider them here but instead compare our approximate analytical solutions directly with the full dynamical simulations. Finally, we evaluate the coupled system of dynamic equations for the concentration (equation \[dcdt\]) and radius $\dot{R} = {\boldsymbol{v}}_b\cdot\hat{{\boldsymbol{n}}}$ (the normal velocity at the interface) to first order in the perturbations. We find instabilities by looking for positive eigenvalues of the stability matrix that relates $\dot{c}$ and $\dot{R}$ to $\delta c$ and $\delta R$ to first order in the perturbations (see Supplementary Information appendix A for further details of this calculation). From this analysis we find an instability threshold for the effective activity ${\tilde{\zeta}}<\alpha_I$ where $$\begin{aligned} \label{intZ} \alpha_I = -\frac{2\tilde{\eta}}{c_0}{\left( 2l+1 \right)}{\left( \frac{D}{R_0} + \frac{D_\rho R_0 k_{\rm off}}{(l+1){\left( D_\rho l + k_{\rm on} R_0 \right)}} \right)},\end{aligned}$$ where $\tilde{\eta} = (\eta_0 + \eta_1)/2$ is the mean viscosity of the internal and external fluid. We see that $\alpha_I$ is independent of the effective surface tension $\gamma_0'$ which shows that the coupled droplet deformation does not contribute to the symmetry breaking threshold.
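The quasistatic bulk solution underlying this threshold can be sanity-checked numerically. The following minimal sketch (parameter values are illustrative assumptions, not from the paper) verifies that $\delta\rho \propto (r/R_0)^l$ with the quoted amplitude is harmonic and satisfies the linearised flux balance $D_\rho\,\partial_r\delta\rho = k_{\rm off}\delta c - k_{\rm on}\delta\rho$ at $r=R_0$:

```python
# Check that delta_rho(r) = A (r/R0)^l with
#   A = k_off * R0 * delta_c / (D_rho * l + k_on * R0)
# solves the quasistatic diffusion equation and the interfacial flux balance.
# Parameter values are illustrative only.
k_on, k_off, D_rho, R0, delta_c, l = 0.3, 0.2, 0.5, 1.0, 0.01, 2

A = k_off * R0 * delta_c / (D_rho * l + k_on * R0)

def delta_rho(r):
    return A * (r / R0) ** l

def radial_laplacian(f, r, h=1e-5):
    """Radial part of the Laplacian of f(r)*Y_lm: f'' + 2f'/r - l(l+1)f/r^2."""
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return d2 + 2 * d1 / r - l * (l + 1) * f(r) / r**2

lap = radial_laplacian(delta_rho, 0.5)                # should vanish (harmonic)
flux_lhs = D_rho * l * A / R0                         # D_rho * d(delta_rho)/dr at R0
flux_rhs = k_off * delta_c - k_on * delta_rho(R0)     # net unbinding flux
print(lap, flux_lhs, flux_rhs)
```
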
However, the corresponding maximum eigenvalue of the stability matrix does weakly depend on the effective surface tension $\gamma_0'$ for $l>1$. This weak positive relation suggests that the instability should evolve more quickly in large surface tension drops when $l>1$. In this linear limit there is no contribution from the advection term in equation \[dcdt\], and the second term in equation \[intZ\] (proportional to the binding rates) always increases the threshold. This is because the binding terms allow the concentration on the interface to be recycled by unbinding and diffusing into the bulk of the drop. The stability analysis shows how the droplet will initially deform. This deformation is characterised at short times by the maximally unstable mode $l_{\rm max}$, which can be found exactly when binding is not included (see figure \[fig:lmax\] and Supplementary Information appendix A). At linear order the instability is independent of the spherical harmonic parameter $m$. Generically, $l_{\rm max}$ increases with contractile activity, so that more concentration peaks are initially formed on the droplet surface (figure \[fig:lmax\]). The total droplet activity scales with droplet size, and so $l_{\rm max}$ is more sensitive to the activity parameter ${\tilde{\zeta}}$ in larger droplets. Thus it is easier to observe modes with small $l$ in smaller droplets, where the dynamics are less sensitive to small changes in the activity. Note that only the $l=1$ mode ($k=1$ in 2D) produces net propulsion of the droplet (i.e. $\int_\Sigma \dot{R}\hat{{\boldsymbol{n}}}{\rm d}S \neq 0 $), so the first unstable mode corresponds to front-back symmetry breaking of the droplet profile. As shown in Supplementary Information appendix A, one can approximate the maximally unstable mode $l_{\rm max}$ analytically by solving $\dot{R} = 0$ for $\delta R_{lm}$.
This approximation imposes that $R$ always assumes the steady state shape for a given fixed concentration perturbation $\delta c_{lm}$ (plotted in figure \[fig:lmax\]). Physically, this assumes that the shape dynamics are much faster than the concentration dynamics, and so can be taken to be quasistatic. Interestingly, while this assumption does not represent the full coupled dynamics of $\delta c_{lm}$ and $\delta R_{lm}$, it does reproduce the critical activity threshold, and also approximates the mode structure well. When binding is included ($k_{\rm off} \neq 0$) the dispersion relation changes: as we see from equation \[intZ\], the activity threshold is non-linear in $l$, and hence higher (non-swimming) modes can have lower activity thresholds than the $l=1$ (swimming) mode. ![Maximum mode number $l_{\rm max}$ plotted against activity in normalised units for increasing values of the droplet radius. Dashed lines show numerical solution and solid lines show analytical approximation using $\dot{R}=0$. Parameters used: $c_0 = 1$, $\gamma_0=1$, $D=0.05$, $\eta_0=\eta_1=1$ and $k_{\rm off}=0$. Insets show flow (blue arrows) and active concentration $c$ (colour gradient from purple (low) to yellow (high)) to linear order on the perturbed interface for a (i) $l=1$ mode and (ii) $l=2$ mode respectively. Deformation of the interface in (ii) is calculated by solving $\dot{R}=0$ for $\delta R$ given the form of $\delta c$, and is exaggerated for visibility using small $\gamma_0'$.[]{data-label="fig:lmax"}](fig2.pdf){width="\columnwidth"} Within the assumptions made here, the binding and unbinding dynamics always increase the activity threshold. We see that if the binding is fast $k_{\rm on} \gg D_\rho$, the critical activity takes the same form as the 1D model considered in [@Hawkins2011] where the active threshold is always minimal for $l=1$ and is proportional to the effective diffusion parameter $\tilde{D} = (Dk_{\rm on} + D_\rho k_{\rm off})/k_{\rm on}$.
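The mode selection described above can be reproduced by a direct scan of equation \[intZ\]. The sketch below uses $D=0.05$ and $k_{\rm off}=0$ as quoted for figure \[fig:lmax\]; the binding-dominated parameter set is an illustrative assumption chosen to exhibit an $l>1$ minimum:

```python
# Scan the 3D instability threshold
#   alpha_I(l) = -(2*eta_t/c0)*(2l+1)*( D/R0 + D_rho*R0*k_off/((l+1)*(D_rho*l + k_on*R0)) )
# and find which mode l has the smallest critical |activity|.
def alpha_I(l, D, D_rho, k_on, k_off, eta_t=1.0, c0=1.0, R0=1.0):
    bulk = D_rho * R0 * k_off / ((l + 1) * (D_rho * l + k_on * R0))
    return -(2 * eta_t / c0) * (2 * l + 1) * (D / R0 + bulk)

def first_unstable_mode(**params):
    thresholds = {l: abs(alpha_I(l, **params)) for l in range(1, 21)}
    return min(thresholds, key=thresholds.get)

# Without binding, |alpha_I| grows like (2l+1): the l = 1 (swimming) mode is
# always the first to go unstable.
l_no_binding = first_unstable_mode(D=0.05, D_rho=0.5, k_on=0.5, k_off=0.0)

# With binding and significant bulk diffusion the l-dependence is
# non-monotonic, and a deformation mode l > 1 can have the lowest threshold.
l_binding = first_unstable_mode(D=0.02, D_rho=0.5, k_on=0.5, k_off=0.5)
print(l_no_binding, l_binding)
```
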
However, for fast bulk diffusion, geometrical effects become important. A single peak in the interfacial concentration gives rise to a concentration gradient in the bulk driving diffusion away from it. As the number of peaks on the interface increases the concentration gradients are more localised to the surface, and diffusion has a smaller effect. In this regime, the minimum critical activity can correspond to multi-peak modes ($l>1$) when the contribution from bulk diffusion is significant. This is analogous to the findings in [@Bois2011] for a one-dimensional two-component active fluid. The droplet shape instability is enslaved to the concentration (as $\alpha_I$ is independent of $\gamma$), so we can estimate how the shape will deform due to certain concentration distributions on the interface by solving $\dot{R}=0$ for $\delta R$ (for $l>1$). Plotted in figure \[fig:lmax\] is an example of these deformations and the associated flow to linear order. In order to calculate the resulting steady state dynamics we require numerical simulation. Results and Comparison with Simulations {#sec:absims} --------------------------------------- We test these analytical results against the 2D simulations developed in [@Whitfield2016]. These use an Immersed Boundary method [@Peskin2002; @Lai2008] to represent the active interface explicitly as a Lagrangian mesh which is coupled to the Cartesian mesh for the 2D fluid via a numerical Dirac delta function. Repeating the stability analysis in 2D, we now take perturbations of the form $g = g_0 + \sum_{k=1}^{\infty}{\rm e}^{ik\theta}$. The calculation reveals that surface tension gradients do not deform the drop in 2D (as found in [@Yoshinaga2014]), however the concentration dynamics remain very similar. We compare our predictions in 2D to the results of the Immersed Boundary simulations in figure \[fig:abphase\].
We run simulations varying the activity, binding rate (taking $k_{\rm off} = k_{\rm on}$) and diffusion parameters. At zero binding we observe two steady phases, a stationary state and a steady moving state \[fig:abphase\](a) separated by the threshold $\alpha_{\rm I,2D}$ which agrees well with the expected analytical result $$\begin{aligned} \label{2DintZ} \alpha_{\rm I,2D} = -4\frac{\tilde{\eta}}{c_0}{\left( \frac{Dk}{R_0} + \frac{D_\rho R_0 k_{\rm off}}{{\left( D_\rho k + k_{\rm on} R_0 \right)}} \right)} \, .\end{aligned}$$ This moving steady state due to a surface tension gradient is also observed for the self-propelled droplets studied in [@Yoshinaga2014; @Ohta2009a]; the equations of motion we use (see Model section) are similar to those models, and hence some of the same dynamical behaviour is observed. However, our model predicts new stable states and instabilities corresponding to pure deformation and division as discussed below. This arises due to the advection and diffusion of active particles through the bulk of the drop. Unlike in [@Yoshinaga2014; @Ohta2009a] the model here conserves the active particles within the drop making it more relevant to cell cortex dynamics. ![Phase diagram of 2D simulation results for an active isotropic interface, each dot represents a single simulation run. Insets show steady state flow (blue arrows) and concentration fields (colour density, black to yellow) for the different phases. Low values of $k_{\rm off}$ transition from stationary (black squares) to motile (red circles) with a single peak in concentration (shown in (a)). Feedback from the internal concentration produces intermediate oscillatory states (magenta stars) and a stationary 2-peak state (blue triangles). Solid lines of increasing gradient show predicted activity threshold for modes $k=1,2$ (red, blue).
Simulation parameters: $c_0 = 1$, $R_0=1$, $\gamma_0=1$, $D=0.05$, $D_\rho=0.5$, $\eta_0=\eta_1=1$. []{data-label="fig:abphase"}](fig3.pdf){width="\columnwidth"} We next calculate the maximum mode number $k_{\rm max}$ (see Supplementary Information appendix A). In the regime where we predict $k_{\rm max}=2$, our simulations show initial formation of 2 peaks in droplet concentration. Without binding, these peaks are unstable and always coalesce to form one (as predicted for a flat active viscous layer in [@Bois2011]). In this case, the droplet swims persistently and steadily with the concentration peak at its rear. A decomposition of the Fourier modes of this steady state shows that the far field flow is puller-like, i.e. its dipole moment is such that it pulls the surrounding fluid inward and pushes it outward along the axis perpendicular to its motion. The activity threshold predicted compares well to that in the simulations for small values of the binding. At larger binding rate, the interior dynamics are not completely diffusion-dominated, and the critical activity is underestimated due to the approximation of $\dot{\rho} = 0$. As we increase $k_{\rm off}$ and ${\tilde{\zeta}}$ we see that eventually the droplet becomes immobile with 2 stable peaks in the concentration (see figure \[fig:abphase\]). In the intermediate regime the droplet undergoes a ‘wandering’ motion as the concentration profile oscillates between a single peak and two peaks. Equation \[2DintZ\] predicts a non-trivial $k$ dependence of the active threshold as binding terms become important. For the parameters used in figure \[fig:abphase\], this can be seen by the crossing of the lines for the $k=1$ and $k=2$ modes, meaning that the minimum critical activity is not necessarily for the lowest $k$ mode ($k=1$). Note this is very similar to the prediction in 3D in equation \[intZ\].
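This crossing can be checked directly from equation \[2DintZ\]. The sketch below uses the parameter values quoted for figure \[fig:abphase\]; the two binding rates probed are illustrative choices on either side of the crossing:

```python
# 2D threshold alpha_2D(k) = -(4*eta_t/c0)*( D*k/R0 + D_rho*R0*k_off/(D_rho*k + k_on*R0) ).
# The bulk term decreases with k while the surface-diffusion term grows with k,
# so which mode goes unstable first depends on the binding rate.
def alpha_2D(k, k_off, D=0.05, D_rho=0.5, eta_t=1.0, c0=1.0, R0=1.0):
    k_on = k_off                               # simulations take k_on = k_off
    return -(4 * eta_t / c0) * (D * k / R0
                                + D_rho * R0 * k_off / (D_rho * k + k_on * R0))

# Weak binding: the k = 1 (swimming) mode has the lower critical |activity|.
weak = abs(alpha_2D(1, k_off=0.05)) < abs(alpha_2D(2, k_off=0.05))
# Stronger binding: the k = 2 (deforming, 2-peak) mode goes unstable first.
strong = abs(alpha_2D(2, k_off=1.0)) < abs(alpha_2D(1, k_off=1.0))
print(weak, strong)
```
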
The simulation results in figure \[fig:abphase\] demonstrate that as the binding rate increases, advection of the concentration through the droplet bulk becomes more important. The advection can stabilise the two peaks at diametrically opposite points on the circle, resulting in a stationary droplet. However, we see that in 2D the drop does not deform, as the radial forces from the activity gradients are always cancelled by the hydrostatic pressure $P$. This is not the case for the full 3D system where we expect concentration gradients to deform the droplets as shown in figure \[fig:lmax\]. Nonetheless, the 2D simulations show that advection can stabilise the 2 peak configuration, which in 3D would result in symmetric deformation and potentially division of the droplet. Such a 3D simulation is beyond the scope of this work, but would be useful for quantifying the full 3D morphology. Recent work has shown that non-adherent cells exhibit a swimming state similar to the motion described here, and so it would be of interest to test in future work whether the steady state shape in 3D for the model here resembles the ‘pear shape’ observed in [@Ruprecht2015; @Callan-Jones2016]. Active Polar Fluid Droplet ========================== In this section, we consider a droplet filled with an active polar liquid crystal of constant density everywhere. Realising this experimentally in droplets requires high concentrations of active material so that the polar to isotropic phase transition is localised to the droplet centre. This has been achieved *in vitro* for microtubule-based active nematics but only in thin films thus far [@Sanchez2012; @Keber2014]. In these systems the measured order parameter is approximately constant everywhere except in the vicinity of topological defects.
Thus we consider the limit where the active fluid is strongly polarised and restrict the analysis to the orientational degrees of freedom of the active liquid crystal, neglecting variations in the density and polarisation magnitude. Model {#sec:apfmodel} ----- We utilise the model of an active polar fluid developed by Kruse et al. in [@Kruse2004; @Kruse2005; @Furthauer2012] which has similarities to other continuum models of the cytoskeleton on surfaces (such as [@Lober2015; @Ziebert2014]). We consider the case where the active fluid has strong local ordering and is far from the isotropic phase so that ${\left| {\boldsymbol{p}} \right|}=1$ everywhere (except at defects where ${\boldsymbol{p}}$ is undefined). This approximation is commonly used to model active and passive liquid crystal systems analytically. In the $Re=0$ limit the total stress in the active polar fluid, $\sigma_{i j}^{\mathrm{tot}} = \sigma_{ij}^{\mathrm{visc}} + \sigma_{ij}^{\mathrm{dist}} + \sigma_{ij}^{\mathrm{act}} $, has viscous, distortion and active contributions respectively where: $$\begin{aligned} \sigma_{ij}^{\mathrm{visc}} &= 2\eta_{n} u_{ij} = \eta_{n}{\left( \partial_i v_j + \partial_j v_i \right)} \, , \\ \sigma_{ij}^{\mathrm{dist}} &= \frac{\nu}{2} {\left( p_i h_j + p_j h_i \right)} + \frac{1}{2}{\left( p_i h_j - p_j h_i \right)} + \sigma_{ij}^{\mathrm{e}} \, , \\ \sigma_{ij}^{\mathrm{act}} &= - {\zeta}p_i p_j \, .\end{aligned}$$ The viscous stress is the response to flow assuming a Newtonian fluid. The distortion stress is that of a passive polar liquid crystal due to deviations in filament alignment, where the perpendicular part of the molecular field $h_i = -(\delta F/\delta p_j)(\delta_{ij}-p_ip_j)$ acts to minimise the free energy functional $F = \int_{\Omega+\Sigma} {\rm d}^3r f$ with respect to ${\boldsymbol{p}}$, given $\left| {\boldsymbol{p}} \right| = 1$.
The Ericksen stress, $\sigma_{ij}^{\mathrm{e}} = f\delta_{ij} - (\partial f/(\partial(\partial_jp_n)))(\delta_{nk} - p_n p_k) \partial_i p_k $, is a generalisation of the hydrostatic pressure for complex fluids. Finally, the active stress represents the active dipolar force and thus is second order in ${\boldsymbol{p}}$. The free energy functional $F$ gives the equilibrium properties of the system. Here for simplicity we use the one constant approximation of the Frank free energy: $$\begin{aligned} \label{FreeE} F = \int_\Omega {\rm d}^3r \frac{K}{2}(\partial_i p_j)^2 + \int_\Sigma {\rm d}S f_s \; ,\end{aligned}$$ where $K$ is the elastic constant and ${\left| {\boldsymbol{p}} \right|}=1$. Since we are modelling a finite droplet, the surface terms are important. We consider normal anchoring of the filaments to the interface, with surface distortion free energy density $f_s = W ({\boldsymbol{p}}\cdot\hat{{\boldsymbol{n}}} - 1)^2 $. [This form of the surface free energy includes the ‘spontaneous splay’ term which is allowed in polar liquid crystals [@Pleiner1989].]{} The polarisation dynamics are given by $$\begin{aligned} \label{dpdt} \dot{{\boldsymbol{p}}} = -{\left( {\boldsymbol{v}}\cdot\nabla \right)}{\boldsymbol{p}} - {\underline{\underline{\omega}}}\cdot{\boldsymbol{p}} - \nu{\underline{\underline{u}}}\cdot{\boldsymbol{p}} + \frac{{\boldsymbol{h}}}{\Gamma}\end{aligned}$$ where $\omega_{ij} = (\partial_iv_j - \partial_jv_i)/2$ and $\Gamma$ is the rotational viscosity. Linear Stability Analysis {#sec:apflsa} ------------------------- We contrast the model of an active interface to that of a droplet of active polar fluid of constant density. In this case, rather than the concentration of active particles, the important degree of freedom is the polarisation vector ${\boldsymbol{p}}$ denoting the average direction of the contractile filaments in the fluid.
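As a preliminary check of the stress definitions above, consider a radially polarised configuration ${\boldsymbol{p}} = \hat{{\boldsymbol{r}}}$ (the stationary "hedgehog" state analysed below). A minimal numerical sketch (parameter values are illustrative assumptions) confirms that the active stress divergence $\partial_j\sigma^{\mathrm{act}}_{ij} = -2\zeta x_i/r^2$ is a purely radial body force, which can be balanced by pressure, consistent with a stationary state with ${\boldsymbol{v}}=0$:

```python
import math

zeta = -0.8   # contractile activity (illustrative value)

def p(x, y, z):
    """Radial hedgehog polarisation field p = r_hat."""
    r = math.sqrt(x*x + y*y + z*z)
    return (x / r, y / r, z / r)

def sigma_act(x, y, z):
    """Active stress sigma_ij = -zeta * p_i * p_j."""
    pv = p(x, y, z)
    return [[-zeta * pv[i] * pv[j] for j in range(3)] for i in range(3)]

def div_sigma(x, y, z, h=1e-5):
    """Body force f_i = d_j sigma_ij by central differences."""
    pt = [x, y, z]
    f = [0.0, 0.0, 0.0]
    for j in range(3):
        plus, minus = pt[:], pt[:]
        plus[j] += h
        minus[j] -= h
        sp, sm = sigma_act(*plus), sigma_act(*minus)
        for i in range(3):
            f[i] += (sp[i][j] - sm[i][j]) / (2 * h)
    return f

x, y, z = 0.3, -0.2, 0.5
r2 = x*x + y*y + z*z
expected = [-2 * zeta * x / r2, -2 * zeta * y / r2, -2 * zeta * z / r2]
f = div_sigma(x, y, z)
print(f, expected)
```
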
We calculate the linear stability of the droplet in the limit of strong anchoring $W \rightarrow \infty$ in order to study the coupling between droplet morphology and polarisation. This equates to the boundary condition ${\boldsymbol{p}} = \hat{{\boldsymbol{n}}}$ at ${\boldsymbol{r}}={\boldsymbol{R}}$. In the case of weak or no anchoring, instabilities can occur for both extensile (${\zeta}>0$) and contractile (${\zeta}<0$) active polar drops as shown analytically in [@Whitfield2015] and in simulations [@Tjhung2012]. The condition of fixed polarisation at the interface inhibits certain deformations of the polarisation field at low activities and so the preferred deformation modes are those which can couple to the droplet deformation. This was demonstrated in 2D simulations of active nematic drops in [@Giomi2014]. Here we explain this mechanism analytically in a 3D fluid drop by linear stability analysis. The polar nature of the anchoring produces a “radial hedgehog” topological defect at the droplet centre (or a radial defect with $+1$ winding number in 2D), giving a simple analytical description of the stationary state. Thus we are able to make analytical predictions about spontaneous symmetry breaking in these systems even in the general 3D case. Unlike the case of an active interface, the active fluid here fills the drop, and hence active and passive stresses are generated in the bulk. The stationary steady state is given by the polarisation ${\boldsymbol{p}} = \hat{{\boldsymbol{r}}}$, ${\boldsymbol{R}} = R_0 \hat{{\boldsymbol{r}}}$, and ${\boldsymbol{v}} = 0$. [To perform a general linear stability analysis, one would need to consider generic perturbations to both the polarisation field and interface and study the coupled equations for their evolution; this is not analytically tractable in this case. However, we can perform restricted perturbations that we expect to be representative of the dynamics in a particular limit.
We consider the case where the polarisation field is enslaved everywhere to the shape of the boundary by the anchoring condition. This corresponds to the limit where bulk instabilities in the droplet are suppressed by its size (*i.e.* small droplets). In larger droplets (or equivalently for smaller $K$), the dynamics of the polarisation field becomes more independent of the anchoring condition, and we expect this approximation to break down. ]{} Due to the symmetry of the stationary state, we first need to consider the special case of the translational mode of perturbation, corresponding to the $l=1$ spherical harmonic mode. Without loss of generality we consider a perturbation along the $z$-direction ($m=0$). This mode implies a translation of the hedgehog defect away from the droplet centre. If we assume that the defect has some fixed finite core radius $R_{c}$ then we can treat the liquid crystal as contained between two boundary conditions, one at the defect $r=R_{c}$ and one at the droplet interface $r=R_0-\delta z \cos(\theta)$, where $\delta z$ is a small displacement of the defect position from the droplet centre along the $z$-direction. The calculation is done in the reference frame of the defect so that it coincides with the origin of our coordinate system. In the equilibrium case (${\zeta}=0$), we can write a polarisation field to first order that minimises the bulk free energy in equation \[FreeE\] by solving ${\boldsymbol{h}} = 0$ for these boundary conditions: $$\begin{aligned} \label{pl1} {\boldsymbol{p}}_{l=1} = {\boldsymbol{e}}_r - \delta z \frac{r-R_c}{r(R_0-R_{c})}\sin(\theta) {\boldsymbol{e}}_\theta \, .\end{aligned}$$ This method equates the defect to a small colloid with (polar) homeotropic anchoring, and in the strong anchoring case we expect the free energy minimum to correspond to the defect being positioned at the droplet centre, as we observe in simulations and as reported in [@Lubensky1998; @Poulin1998].
Using the polarisation in equation \[pl1\] we can estimate the bulk free energy increase for such a deformation (details in Supplementary Information Appendix B) $$\begin{aligned} \notag \Delta F_{\rm bulk} &= \frac{4 K \pi \delta z^2}{3 R_{0} (1-\epsilon)^2} {\left[ 4- 3\epsilon - \epsilon^2 + 4\epsilon \log {\left( \epsilon \right)} \right]} +\, O(\delta z^3) \\ \label{deltaF} &\approx \frac{16 K \pi \delta z^2}{3R_{0}}\end{aligned}$$ where $\epsilon=R_{c0}/R_{0}$ is assumed small in the final approximation of the equation. This $\Delta F$ is positive for all $\epsilon$, suggesting that the free energy minimum corresponds to the defect being positioned at the droplet centre. Note that this polarisation field is only valid to first order in $\delta z$ and so higher order terms could affect the form of the quadratic term here. We now introduce a small activity ${\zeta}$, small enough that equation \[pl1\] remains a valid approximation for the form of the polarisation field; this gives rise to active forces in the drop. We solve the force balance equations (omitting passive contributions, see Supplementary Information Appendix B) to find the active contribution to the flow. We then integrate to find the active contribution to the velocity of the defect core ${\boldsymbol{v}}_{c}$ and droplet ${\boldsymbol{v}}_{\rm drop}$. The relative velocity of the defect is then: $$\Delta {\boldsymbol{v}} \equiv {\boldsymbol{v}}_c - {\boldsymbol{v}}_{\rm drop} \approx {\zeta}\delta z\frac{(2\eta_0+\eta_1)-\epsilon(\eta_0+\eta_1)}{2 \eta_0 (3 \eta_0 + 2 \eta_1)} \hat{{\boldsymbol{e}}}_z \; .$$ We see that extensile activity (${\zeta}>0$) always results in a relative defect velocity that is in the same direction as the initial defect displacement (along $\hat{{\boldsymbol{e}}}_z$), as shown by figure \[fig:lsavel\].
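Both the small-$\epsilon$ limit of equation \[deltaF\] and the sign structure of $\Delta{\boldsymbol{v}}$ are easy to verify numerically; a short sketch with assumed (illustrative) values of $K$, $R_0$, $\delta z$ and the viscosities:

```python
import math

# (1) Small-epsilon limit of the restoring free energy: the bracketed factor in
# Delta_F = (4*K*pi*dz^2/(3*R0*(1-eps)^2)) * (4 - 3*eps - eps^2 + 4*eps*log(eps))
# tends to 4 as eps -> 0, recovering Delta_F ~ 16*K*pi*dz^2/(3*R0).
K, R0, dz = 1.0, 1.0, 0.01   # illustrative values

def delta_F(eps):
    bracket = 4 - 3 * eps - eps**2 + 4 * eps * math.log(eps)
    return 4 * K * math.pi * dz**2 * bracket / (3 * R0 * (1 - eps)**2)

ratio = delta_F(1e-8) / (16 * K * math.pi * dz**2 / (3 * R0))

# (2) Sign of the relative defect velocity
# dv = zeta*dz*((2*eta0+eta1) - eps*(eta0+eta1)) / (2*eta0*(3*eta0+2*eta1)):
# extensile activity (zeta > 0) pushes the defect further from the centre,
# contractile activity (zeta < 0) restores it.
def dv(zeta, dz, eps=0.1, eta0=1.0, eta1=1.0):
    return zeta * dz * ((2*eta0 + eta1) - eps*(eta0 + eta1)) / (2*eta0*(3*eta0 + 2*eta1))

print(ratio, dv(+1.0, 0.01), dv(-1.0, 0.01))
```
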
This implies that extensile activity will destabilise the defect from the centre and give rise to motion of the droplet as a whole (which to linear order is also along $\hat{{\boldsymbol{e}}}_z$). Conversely, we expect contractile activity to stabilise the defect at the droplet centre, as the flows resulting from contractile activity (${\zeta}<0$) act to restore the defect back to its stationary position at the droplet centre. Thus, within the assumptions made above, one can predict that the active polar droplet will break translational symmetry spontaneously above some finite activity. This mode of symmetry breaking is independent of surface deformations at linear order, and so its critical activity threshold should not depend on the droplet surface tension. Hence the critical activity threshold will only depend on the increase in the passive free energy (equation \[deltaF\]), which goes to a finite value in the limit of a point defect and scales as the inverse of the droplet size. In general, the parameter $\epsilon$ is difficult to define, which is a consequence of the assumption of ${\left| {\boldsymbol{p}} \right|}=1$, which breaks down around the defect. This can be avoided by using a Landau-de Gennes type free energy description for the passive part of the dynamics such that there is a polar-to-isotropic phase transition at the centre of the droplet. However, such an approach is not analytically tractable, as it requires solving non-linear partial differential equations for the radial dependence of ${\boldsymbol{p}}$. Qualitatively though, the predictions here are consistent with what is observed in the simulations. ![Active part of the flow field (blue arrows) to linear order in the perturbations for: (a) defect position (inner sphere) displaced in the vertical direction with ${\zeta}>0$ (extensile activity); (b) $l=2$ mode perturbation of the interface assuming strong anchoring of the polarisation field with ${\zeta}<0$ (contractile activity).
The perturbations are made artificially large for visibility here.[]{data-label="fig:lsavel"}](fig4.pdf){width="\columnwidth"} For perturbation modes $l>1$ the flow at the origin will always be zero, and so one can assume that in the strong anchoring limit the defect will remain centred at the origin. We again require an assumption for the $r$-dependence of the polarisation perturbation. Taking $R_{c0} \rightarrow 0$, we can write a general form as $\delta {\boldsymbol{p}} \propto r^n$ for arbitrary $n\ge0$. Importantly, for all $n$, the active flows always give rise to an instability for ${\zeta}<0$ (contractile). Considering only active flows, the maximally unstable perturbation is for $n=0$. Thus, below we consider only this mode, which corresponds to the limit where the filament polarisation at the interface and in the droplet are strongly coupled. However, this comes at the cost of reducing the quantitative power of our predictions, and is an important restriction on the dynamics considered. Note that in two dimensions the assumption $n=0$ gives rise to an infinite passive contribution to the dynamics (proportional to $K$), and so we use $n=1$, which appears consistent with what is observed in simulations. In the strong anchoring limit, the polarisation has to match the perturbed interface normal at $r=R$ to first order, such that $$\begin{aligned} \label{pl2} {\boldsymbol{p}} = \hat{{\boldsymbol{r}}} - \sum_{l=2}^\infty \sum_{m=-l}^{l} &{\left[ \frac{\delta R_{lm}(t)}{R_0} r(\nabla Y_l^m(\theta,\phi)) \right]} \, .\end{aligned}$$ We calculate the resulting flows to first order in $\delta R$. Since ${\boldsymbol{p}}$ is enslaved to the deformation, we then only need to consider the radius dynamics given by $\dot{R}$ (for details see Supplementary Information appendix B). In this strong anchoring limit we find that the droplet is unstable if ${\zeta}<\alpha_P<0$, i.e. the activity threshold, $\alpha_P$, is always contractile.
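The angular structure of the strong-anchoring ansatz in equation \[pl2\] can be checked symbolically. Each perturbation term is built from $r\nabla Y_l^m$, whose divergence is $-l(l+1)Y_l^m/r$, so the splay perturbation of ${\boldsymbol{p}}$ is $+(\delta R_{lm}/R_0)\,l(l+1)Y_l^m/r$, i.e. in phase with the boundary displacement: outward bumps of the interface carry extra splay. A sympy sketch (our illustration) for the axisymmetric $l=2$ mode:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
l = 2
Y = sp.Ynm(l, 0, th, ph).expand(func=True)  # axisymmetric l = 2 harmonic

# components of v = r*grad(Y): purely angular and independent of r
v_th = sp.diff(Y, th)
v_ph = sp.diff(Y, ph) / sp.sin(th)          # vanishes for m = 0

# divergence in spherical coordinates for a field with zero radial component
div_v = (sp.diff(sp.sin(th)*v_th, th) + sp.diff(v_ph, ph)) / (r*sp.sin(th))

# div(r grad Y_l^m) = -l(l+1) Y_l^m / r, so outward bumps (Y > 0) of the
# interface carry increased splay, consistent with the instability mechanism
assert sp.simplify(div_v + l*(l + 1)*Y/r) == 0
```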
The threshold $\alpha_P$ increases linearly with $\gamma$ and $K$. Repeating the linear stability analysis calculation in 2D gives the same qualitative prediction, where this time we take $\delta p \propto r$ as this is the leading order contribution allowed. The analytical expressions for the activity threshold are given in Supplementary Information appendix B, and a full discussion of the eigenvalues of the general stability matrix (for weak anchoring) can be found in [@Whitfield2015]. The result of this analysis is somewhat surprising: in this strong anchoring limit we expect the $l=1$ mode to be unstable to extensile activity, whereas the higher modes of deformation are unstable for contractile activity. This suggests that, when our assumptions hold, we should see translational symmetry breaking with the defect moving to the droplet front for an extensile drop, and symmetric modes of deformation for a contractile drop (see figure \[fig:lsavel\]). This activity threshold scales linearly with $K$ and $\gamma_0$, demonstrating the importance of the coupling of the morphology to the polarisation field. Contrast this with the case of the active interface, where the shape does not affect the threshold for a phase transition. This contractile instability can be understood physically by considering the splay in the drop due to perturbations in the interface curvature. High curvature couples to increased splay, which couples to outward flow, further increasing the curvature of the interface and hence the splay. A sketch of this is given in figure \[fig:instsketch\]. ![Spatial change in splay induced by boundary perturbation. Dotted line indicates $R_0$ and solid line the perturbed interface $R$. Increased splay in regions of higher curvature drives outward flows, coupling to further increase in boundary curvature.
The black arrows indicate polarisation direction while the colour gradient indicates the splay magnitude ${\left| {\nabla \cdot}{{\boldsymbol{p}}} \right|}$ relative to its value in the stationary state.[]{data-label="fig:instsketch"}](fig5.pdf){width="\columnwidth"} Results and Comparison with Simulations {#sec:apfsims} --------------------------------------- In the 2D simulations (see figure \[fig:bulkphase\]) we see symmetry breaking corresponding to the $k=1$ mode for extensile activity, resulting in a steady motile state, as predicted by the stability analysis. This is characterised by the defect centre moving to the front of the drop and is independent of the boundary deformation (and hence $\gamma_0$). Due to the extensile nature of the activity, this droplet is a pusher, pushing fluid out along its axis of motion and thus elongating parallel to its motion. Conversely, contractile activity stabilises the defect at the droplet centre, and we observe a $k=2$ mode instability characterised by deformation of the droplet into a ‘dumbbell’ shape. It is also observed that this phase behaviour breaks down as the value of $K/R_0^2$ is reduced. In this limit the distortions in the droplet bulk are not strongly coupled to those at the interface, and so more complex distortions can occur without significant droplet deformation. Our analytical calculations do not predict this, as we assume a form for the $r$-dependence of the polarisation such that it is strongly coupled to the curvature. This behaviour goes beyond the scope of the analytical work here, as it corresponds to a transition to an ‘active turbulence’ state, as numerically simulated in [@Whitfield2016]. ![Active polar drop stability diagram. Stationary state (white, square dots), spontaneous symmetric deformation (blue, triangular dots) and spontaneous motility (red, round dots) are observed. Dashed line shows analytical prediction from linear stability analysis.
Insets show the polarisation field ${\boldsymbol{p}}$ (black arrows) inside the droplet following symmetry breaking, with defects labelled by blue dots. Note that due to the simulation method, the polarisation field in the simulations changes continuously from ${\left| p \right|}=1$ inside the drop to ${\left| p \right|}=0$ outside, hence the polarisation is defined everywhere in (i) and (ii). Parameters used: $K = 0.1$, $R_0=1$, $\eta_0=\eta_1=\Gamma=1$, $W = 50$ and $\nu=1.1$.[]{data-label="fig:bulkphase"}](fig6.pdf){width="\columnwidth"} Finally, we also observe rotational steady states in the simulations (for extensile activity when using $\nu=-1.1$), which can be characterised exactly by rotationally invariant distortions of the polarisation field [@Kruse2004; @Kruse2005], but these are not predicted for the parameter range used in figure \[fig:bulkphase\]. Discussion {#sec:discussion} ========== We have used analytical linear stability analysis and numerical simulation to characterise instabilities in active droplets and their resulting non-equilibrium steady states. Recent advances in experimental techniques mean that active gels of cytoskeletal material can be produced *in vitro*. The predictions of our active interface model could be tested by adsorbing an isotropic actin gel onto the interface of an emulsion drop containing myosin and ATP [@Tsai2011; @Shah2014]. We predict an activity threshold for spontaneous motion, and a further continuous transition to a stable symmetric state mediated by advection of motors through the droplet bulk. We predict that in 3D this symmetric configuration will be coupled to deformation of the drop; however, this cannot be observed in the 2D model. The active polar drop model we use only predicts some of the dynamics of a real active polar drop system, as it ignores the density and ordering magnitude degrees of freedom.
However, this model system gives us an insight into the intrinsic instabilities when droplet deformation and filament polarisation direction are strongly coupled. In particular, there is a contractile activity threshold that is linearly dependent on surface tension, above which the droplet spontaneously deforms into a characteristic dumbbell shape. We also see persistent motility in the case of extensile activity such that the droplet acts as a *pusher*, compared to the *puller* type motion exhibited in the active isotropic interface model. This is consistent with previous active droplet models that show contractile activity resulting in droplets which are *pullers* and extensile activity resulting in *pushers* [@Tjhung2012; @Giomi2014; @Marth2014; @Khoromskaia2015; @Tjhung2015]. An interesting future extension of this work would be to consider coupling between both of the active phases studied here within a single drop. The finite active systems we study improve our understanding of how confinement and deformation affect steady state dynamics. Additionally, we see the importance of feedback, driven by advection through the droplet or the internal orientational order, resulting in more complex dynamics. These results should prove useful in characterising future experiments on *in vitro* cytoskeletal networks and in developing more complex models of multicomponent active systems in nature. Acknowledgements {#acknowledgements .unnumbered} ================ We acknowledge the EPSRC for funding this work, grant reference EP-K503149-1. References {#references .unnumbered} ========== M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao and A. R. Simha, [[Rev. Mod. Phys.]{} [**85**]{} 1143–1189 (2013)](http://dx.doi.org/10.1103/RevModPhys.85.1143) K. Kruse, J. F. Joanny, F. J[ü]{}licher, J. Prost and K. Sekimoto, [ Phys. Rev. Lett. [**92**]{} 078101 (2004)](http://dx.doi.org/10.1103/PhysRevLett.92.078101) K. Kruse, J. F.
Joanny, F. Jülicher, J. Prost and K. Sekimoto, [ Eur. Phys. J. E [**16**]{} 5–16 ](http://dx.doi.org/10.1140/epje/e2005-00002-5) (2005) S. F[ü]{}rthauer, M. Neef, S. W. Grill, K. Kruse and F. J[ü]{}licher, [New J. Phys. [**14**]{} 023001 (2012)](http://dx.doi.org/10.1088/1367-2630/14/2/023001) R. Voituriez, J. F. Joanny and J. Prost, [Europhys. Lett. [**70**]{} 404–410 (2005)](http://dx.doi.org/10.1209/epl/i2004-10501-2) R. Voituriez, J. F. Joanny and J. Prost, [ Phys. Rev. Lett. [**96**]{} 028102 (2006)](http://dx.doi.org/10.1103/PhysRevLett.96.028102) K. Kruse, J. F. Joanny, F. J[ü]{}licher and J. Prost, [Phys. Biol. [**3**]{} 130–137 (2006)](http://dx.doi.org/10.1088/1478-3975/3/2/005) J. S. Bois, F. J[ü]{}licher and S. W. Grill, [Phys. Rev. Lett. [**106**]{} 028103 (2011)](http://dx.doi.org/10.1103/PhysRevLett.106.028103) N. Sarkar and A. Basu, [Phys. Rev. E [**92**]{} 052306 (2015)](http://dx.doi.org/10.1103/PhysRevE.92.052306) A. Zumdieck, M. C. Lagomarsino, C. Tanase, K. Kruse, B. Mulder, M. Dogterom and F. J[ü]{}licher, [Phys. Rev. Lett. [**95**]{} 258103 (2005)](http://dx.doi.org/10.1103/PhysRevLett.95.258103) R. J. Hawkins, R. Poincloux, O. B[é]{}nichou, M. Piel, P. Chavrier and R. Voituriez, [Biophys. J. [**101**]{} 1041–1045 (2011)](http://dx.doi.org/10.1016/j.bpj.2011.07.038) J. F. Joanny, K. Kruse, J. Prost and S. Ramaswamy, [Eur. Phys. J. E [**36**]{} 52 (2013)](http://dx.doi.org/10.1140/epje/i2013-13052-9) D. Khoromskaia and G. P. Alexander, [Phys. Rev. E [**92**]{} 062311 (2015)](http://dx.doi.org/10.1103/PhysRevE.92.062311) A. C. Callan-Jones, J. F. Joanny and J. Prost, [Phys. Rev. Lett. [**100**]{} 258106 (2008)](http://dx.doi.org/10.1103/PhysRevLett.100.258106) E. Tjhung, D. Marenduzzo and M. E. Cates, [PNAS [**109**]{} 12381–12386 (2012)](http://dx.doi.org/10.1073/pnas.1200843109) J. F. Joanny and S. Ramaswamy, [J. Fluid Mech. [**705**]{} 46–57 (2012)](http://dx.doi.org/10.1017/jfm.2012.131) C. Blanch-Mercader and J. Casademunt, [Phys. Rev. 
Lett. [**110**]{}(7) 078102 (2013)](http://dx.doi.org/10.1103/PhysRevLett.110.078102) L. Giomi and A. DeSimone, [Phys. Rev. Lett. [**112**]{} 147802 (2014)](http://dx.doi.org/10.1103/PhysRevLett.112.147802) , D. Marenduzzo, R. Voituriez and R. J. [Hawkins]{}, [ Eur. Phys. J. E [**37**]{} 8 (2014)](http://dx.doi.org/10.1140/epje/i2014-14008-3) W. Marth, S. Praetorius and A. Voigt, [J. R. Soc. Interface [**12**]{}(107) (2015)](http://dx.doi.org/10.1098/rsif.2015.0161) E. Tjhung, A. Tiribocchi, D. Marenduzzo and M. E.Cates, [Nature Comms. [**6**]{} 5420 (2015)](http://dx.doi.org/10.1038/ncomms6420) R. J. Hawkins, M. Piel, G. Faure-Andre, A. M. Lennon-Dumenil, J. F. Joanny, J. Prost and R. Voituriez, [Phys. Rev. Lett. [**102**]{} 058103 (2009)](http://dx.doi.org/10.1103/PhysRevLett.102.058103) A. C. Callan-Jones, V. Ruprecht, S. Wieser, C. P. Heisenberg and R. Voituriez, [Phys. Rev. Lett. [**116**]{}(2) 028102 (2016)](http://dx.doi.org/10.1103/PhysRevLett.116.028102) F. G. Woodhouse and R. E. Goldstein, [Phys. Rev. Lett. [**109**]{} 168105 (2012)](http://dx.doi.org/10.1103/PhysRevLett.109.168105) A. C. Callan-Jones and R. Voituriez, [ New J. Phys. [**15**]{} 025022 (2013)](http://dx.doi.org/10.1088/1367-2630/15/2/025022) A. Kumar, A. Maitra, M. Sumit, S. Ramaswamy and G. V. Shivashankar, [Sci. Rep. [**4**]{} 3781 (2014)](http://dx.doi.org/10.1038/srep03781) H. Turlier, B. Audoly, J. Prost and J. F. Joanny, [Biophys. J. [**106**]{}(1) 114-123 (2014)](http://dx.doi.org/10.1016/j.bpj.2013.11.014) A. C. Callan-Jones, V. Ruprecht, S. Wieser, C. P. Heisenberg and R. Voituriez, [Phys. Rev. Lett. [**116**]{}(2) 028102 (2016)](http://dx.doi.org/10.1103/PhysRevLett.116.028102) C. A. Whitfield and R. J. Hawkins, [ PLOS ONE [**11**]{}(9) e0162474 (2016)](http://dx.doi.org/10.1371/journal.pone.0162474) P. M. Bendix, G. H. Koenderink, D. Cuvelier, Z. Dogic, B. N. Koeleman, W. M. Brieher, C. M. Field, L. Mahadevan and D. A. Weitz, [ Biophys. J. 
[**94**]{} 3126–3136 (2008)](http://dx.doi.org/10.1529/biophysj.107.117960) T. Sanchez, D. T. N. Chen, S. J. DeCamp, M. Heymann and Z. Dogic, [Nature [**491**]{} 431–434 (2012)](http://dx.doi.org/10.1038/nature11591) F. C. Keber, E. Loiseau, T. Sanchez, S. J. DeCamp, L. Giomi, M. J. Bowick, M. C. Marchetti, Z. Dogic and A. R. Bausch, [ Science [**345**]{} 1135–1139 (2014)](http://dx.doi.org/10.1126/science.1254784) F. C. Tsai, B. Stuhrmann, G. H. Koenderink, [[Langmuir]{} [**27**]{}(16):10061-10071 (2011)](http://dx.doi.org/10.1021/la201604z) E. A. Shah and K. Keren, [E-Life [**3**]{} e01433 (2014)](http://dx.doi.org/10.7554/eLife.01433) Sir H. Lamb, [*[Hydrodynamics]{}*]{} (New York, Dover Publications) ISBN 0486602567 (1945) B. Carrascal, G. A. Estevez, P. Lee and V. Lorenzo, [Eur. J. Phys. [**12**]{} 184–191 (1991)](http://dx.doi.org/10.1088/0143-0807/12/4/007) C. S. Peskin, [Acta Numerica [**11**]{} 479–517 (2002)](http://dx.doi.org/10.1017/S0962492902000077) M. C. Lai, Y. H. Tseng and H. Huang [J. Comp. Phys. [**227**]{} 7279–7293 (2008)](http://dx.doi.org/10.1016/j.jcp.2008.04.014) V. Ruprecht, S. Wieser, A. C. Callan-Jones, M. Smutny, H. Morita, K. Sako, V. Barone, M. Ritsch-Marte, M. Sixt, R. Voituriez and C. P. Heisenberg, [Cell [**160**]{}(4) 673-685 (2015)](http://dx.doi.org/10.1016/j.cell.2015.01.008) N. Yoshinaga, [Phys. Rev. E [**89**]{} 012913 (2014)](http://dx.doi.org/10.1103/PhysRevE.89.012913) T. Ohta and T. Ohkuma, [Phys. Rev. Lett. [**102**]{} 154101 (2009)](http://dx.doi.org/10.1103/PhysRevLett.102.154101) J. L[ö]{}ber, F. Ziebert and I. S. Aranson [Sci. Rep. [**5**]{} 9172 (2015)](http://dx.doi.org/10.1038/srep09172) F.  Ziebert and I. S. Aranson [Eur. Phys. J. Spec. Top. [**223**]{}(7) 1265–1277 (2014)](http://dx.doi.org/10.1140/epjst/e2014-02190-2) H. Pleiner and H. R. Brand [Europhys. Lett. [**9**]{}(3) 243-249 (1989)](http://dx.doi.org/10.1209/0295-5075/9/3/010) C. A. 
Whitfield, [[*[Modelling Spontaneous Motion and Deformation of Active Droplets]{}*]{} Ph.D. thesis University of Sheffield (2015)](http://etheses.whiterose.ac.uk/id/eprint/11704) T. C. Lubensky, D. Pettey and N. Currier [Phys. Rev. E [**57**]{}(1) 610–625 (1998)](http://dx.doi.org/10.1103/PhysRevE.57.610) P. Poulin and D. A. Weitz [Phys. Rev. E [**57**]{}(1) 626–636 (1998)](http://dx.doi.org/10.1103/PhysRevE.57.626)
--- abstract: 'In this report we present a study of the strength of rocks which are partially fractured beforehand. We have considered a two-dimensional case of a rock in the form of a lattice structure. The fiber bundle model is used for modelling the $2-D$ rock. Each lattice site is considered to be a fiber which has a breaking threshold. Fractures in this system take the form of clusters of sites, and the length of a fracture is defined as the number of sites belonging to a single cluster. We introduce fractures in the system initially and apply load until the rock breaks. The breaking of a rock is characterized by a horizontal fracture which connects the left side of the lattice to the right side. The length distribution and the strength of such systems have been measured.' author: - Chandreyee Roy - Srutarshi Pradhan - Anna Stroisz - Erling Fjær title: Strength of Fractured Rocks --- 1. Introduction =============== This report is a study of the strength of fractured rocks. Some breakdown properties of such rocks have also been studied. For the theoretical analysis, the simple Fiber Bundle Model and the Discrete Fracture Network have been considered. A Fiber Bundle Model (FBM) is used in material science to study the breakdown properties of materials. The study of such properties was pioneered by Leonardo Da Vinci about five hundred years ago. In one of his notebooks he describes an experiment to measure the strength of wires as a function of their lengths. He attached a bucket at one end of a wire and clamped the other end (see Fig. \[davinci\]). Sand was allowed to pour into the bucket until the wire broke. A small pit was created just below the bucket so that when the wire broke, it fell into the pit. The weight of the sand inside the bucket was used to measure the tensile strength of the wire. He found that the longer the wires are, the weaker they are.
![ Tensile strength experiment by Leonardo Da Vinci [@davinci][]{data-label="davinci"}](figure1.eps) ![Lateral view of the Castle Gate Rock sample.[]{data-label="cgrock"}](figure2.eps) Material stability has always been of general interest. For example, houses are built in such a way that they can withstand normal weather conditions. The Fiber Bundle Model was first introduced by Pierce [@pierce] in 1926 to test the strength of cotton yarns. Since then, work on its various aspects has given this simple model a vast literature. These models are well suited to the theoretical study of failure phenomena. Several successful experiments have also been carried out. The Discrete Fracture Network (DFN) model is used extensively in studying fractures. It was first introduced by Darcel et al [@darcel-davy] in 2003. It essentially captures the properties of a real fracture network. ![Experimental setup[]{data-label="exptsetup"}](figure3.eps) ![ Stress - Time curve[]{data-label="stresstime"}](figure4.eps) 2. Experimental Data ==================== The modified unconfined compressive strength (UCS) test was conducted on the Castlegate sandstone. The sample was cored perpendicular to bedding and cut to a size of approximately 77.62 mm in diameter and 38.2 mm in length. The lateral view of the core is shown in Fig. \[cgrock\]. The experiment was performed using a servo-controlled loading frame (Fig. \[exptsetup\]). The sample was placed between two pistons, a movable upper piston and an immovable base, marked with $C$ on the figure. Stress was applied in the vertical direction only, and the axial deformation was measured with three LVDTs fixed around the sample. The failure, unlike in the standard UCS test, was achieved by stepwise loading in $5$ MPa increments. Every new stress level was preceded by unloading to $5$ MPa (see Fig. \[stresstime\] for the stress path).
The procedure was repeated until rock failure (Fig. \[failure\]). The stress versus strain curve for this experiment is shown in Fig. \[stressstrain\](a). This figure has been re-plotted in Fig. \[stressstrain\](b) after eliminating the unloading parts of the experiment. ![ Sample after failure[]{data-label="failure"}](figure5.eps) \ 3. Fiber Bundle Models ====================== A Fiber Bundle Model is basically a set of elastic fibers which are placed parallel to one another as shown in Fig. \[fbm\](a). The fibers are clamped at both ends. The top end is fixed to a rigid support, and at the lower end an external force is applied to elongate the bundle. This external force is distributed equally among all intact fibers. Each fiber has a unique breaking threshold ${b_i}$. All the fibers follow Hooke’s Law until the load acting on each fiber reaches its respective breaking threshold (see Fig. \[fbm\](b)). The elastic constant is assumed to be unity, so the stress applied to each fiber is equal to the elongation caused in it. Each fiber is also assumed to be brittle, which means that when the load on a fiber reaches its threshold value, the fiber immediately breaks off. On the application of an external load to a fiber bundle, the fibers having threshold values lower than the acting load per fiber break. When a fiber breaks, it releases the stress carried by it. This is described as stress relaxation. The released stress will now be distributed among the remaining intact fibers. There exist many ways in which the released stress can be shared. Depending on how the released stress is redistributed, various models of the Fiber Bundle exist in the literature. If the released stress is distributed to all the remaining intact fibers, then the model is called the Equal Load Sharing Model (ELS).
On the other hand, if the released stress is distributed only to the neighbouring intact fibers of the broken fiber, then the model is called the Local Load Sharing Model (LLS). The LLS model is more realistic than ELS. However, to model the strength of fractured rocks, we have considered the simple ELS rule of load sharing. When fibers break on the application of an external load, the stress acting on the remaining intact fibers gets enhanced. This new enhanced stress may now exceed the threshold values of some of the intact fibers. This results in the breakage of more fibers. This process continues until a stable state is reached. A stable state is one in which the threshold values of all the remaining intact fibers are greater than the stress per fiber acting at that particular time. It can also be the state in which all the fibers have broken, which occurs only if the external load per fiber is greater than the critical stress. A review of the model can be found in [@pradhan-hansen]. 4. Modelling the fractures ========================== We have considered a two-dimensional case of a fractured rock. We have used the following models for this. 4.1. Equal Load Sharing Model (ELS) ----------------------------------- The Equal Load Sharing Model is also known as the Global Load Sharing (GLS) Model. It is the simplest and the oldest of all the Fiber Bundle Models. When an external load is applied to the bundle, all the fibers having their breaking thresholds less than the external load per fiber will break. In this model the stress released by these broken fibers is distributed globally and equally among all the remaining intact fibers. This means that if a fiber breaks, the load released by it may affect any other fiber, which can be arbitrarily far from the breaking fiber. Thus each fiber has an infinite range of interaction. ELS is a mean-field type model and is easier to treat analytically than the other models.
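For a plain ELS bundle with thresholds drawn uniformly from $[0,1]$, the critical load per fiber is $\sigma_c = \max_x x(1-x) = 1/4$ in the mean-field limit. The sketch below (our illustration; the report itself applies the ELS rule on a lattice with pre-placed fractures) recovers this numerically: with thresholds sorted ascending, the external force at which the $k$-th weakest fiber starts to fail is $b_k(N-k+1)$, and the bundle strength is the maximum of this force over $k$.

```python
import numpy as np

def els_bundle_strength(n, rng):
    """Critical load per fiber of an ELS bundle with uniform [0,1] thresholds."""
    b = np.sort(rng.random(n))   # sorted breaking thresholds b_1 <= ... <= b_n
    k = np.arange(1, n + 1)
    # when the common extension reaches b_k, the n - k + 1 fibers with
    # thresholds >= b_k are still intact and share the load equally
    return np.max(b * (n - k + 1)) / n

rng = np.random.default_rng(42)
mean_strength = np.mean([els_bundle_strength(100_000, rng) for _ in range(20)])
assert abs(mean_strength - 0.25) < 0.01   # sigma_c = 1/4 for uniform thresholds
```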
Daniels [@daniels] gave some exact analytical results for this model in 1945. We have used this model for our analysis of the strength of fractured rocks. 4.2. Discrete Fracture Network (DFN) ------------------------------------ A Discrete Fracture Network (DFN) model was introduced by Darcel [*et al*]{} [@darcel-bour; @darcel-davy; @darcel-odling] in 2003. The model captures the essential features of fracture outcrops which occur in nature. It can be constructed in two and three dimensions. The model incorporates two properties of natural fracture networks: the lengths of fractures broadly follow a power law, and the positions of fracture centres are strongly fractal. 4.3. Our Model -------------- We have considered a two-dimensional square lattice of size $L \times L$, where $L$ is the number of sites in each column or row. We have considered only one property of the Discrete Fracture Network Model for now, namely the power-law distribution of fracture lengths. (In a future work we plan to include the fractal nature of the fracture centres). Each lattice site is considered to be a fiber having a particular strength. There is no periodic boundary condition in the horizontal and vertical directions. Each fiber (i.e. each site) is assigned a threshold value of strength that it can endure. These values are taken from a uniform distribution between $[0:1]$. The fractures are dropped in the lattice in the following manner. Two points are chosen randomly on the lattice and a straight line is drawn which connects these two points (see Fig. \[fracprocess\](a)). The length of the straight line $d$ is calculated. Since the fracture lengths follow a power law, the probability of finding a fracture of length $d$ is proportional to $d^{-a}$, where $a$ is the slope of the power law. This implies that $$P(d) = \mathrm{const} \cdot d^{-a}$$ \ A random number $r$ is chosen from a uniform distribution between $[0:1]$. If $r$ is less than or equal to $P(d)$, we keep the straight line; otherwise we discard it and choose another set of two points. This ensures that the lengths of the straight lines follow a power law. To place the fractures on the lattice, we connect the chosen set of two points by passing through the sites in either a horizontal or vertical manner. This process is depicted in Fig. \[fracprocess\](b). The length of a fracture in this case is defined by the number of sites included in a particular fracture while moving from point $(i_1,j_1)$ to point $(i_2,j_2)$, given by $l=l_1+l_2$. The length distributions of these lattice fractures for both $d$ and $l$ are plotted in Fig. \[lengthdistributuion\](a). From the figure one can see that there is not much difference between the two power laws. \ However, it may happen that while placing the fractures, two fractures merge into one. In that case the power law changes, as shown in Fig. \[lengthdistributuion\](b). Here, an upper bound on the length of fractures has been maintained such that no long fractures are formed connecting one side of the lattice to the other. ![ Length distribution of fractures before loading (black) and after the $2-D$ rock has broken (red)[]{data-label="lengthdistributioninitialandfinal"}](figure10.eps) \ 5. Model Dynamics and Length Distribution ========================================= The definition of an avalanche as described by Hemmer [*et al*]{} [@hemmer-hansen; @hansen-hemmer; @kloster-hansen; @Sornette] is as follows. Let the number of sites or fibers broken due to an avalanche be given by $k$. The remaining number of intact sites is $L^2-k$. We first manually break the weakest site among the intact sites. The load released by this weakest site is then redistributed among all the other intact sites.
If no other site breaks due to the enhanced stress per fiber, then it is called an avalanche of size $1$. On the other hand, if more fibers break, this triggers an avalanche of size greater than $1$. After the system has reached a stable point, we find the weakest site among the remaining intact sites and increase the external stress just enough that only this weakest one breaks. Another avalanche may or may not follow. For each increase in load we record the avalanche size, which is the number of sites that break (or fail) as a result of the increased load. We carry out this process until we can find a horizontal path of broken sites which divides the system into at least two distinct parts. We define the external load at this stage to be the strength of the rock. Fig. \[lengthdistributioninitialandfinal\] shows the length distribution of the fractures before starting the avalanche (black) and after the last avalanche (red). The slope of the initial length distribution is $3.062$ and for the final length distribution it is $1.778$. Fig. \[avalanchedynamics\] shows the total external load versus the strain that is applied to the bundle. In Fig. \[avalanchedynamics\](a) the upper bound is $5$. Thus more fractures have to be placed initially to get a horizontal fracture connecting the left side of the lattice to the right side. In Fig. \[avalanchedynamics\](b) the upper bound is $10$, and so the number of initial fractures required is smaller. ![ Stress vs number of initial fractures for lattice size $L=32,64,128$.[]{data-label="stressfrac"}](figure12.eps) \ \ \ ![ Fluctuation or standard deviation with respect to number of initial fractures []{data-label="fluctuation"}](figure15.eps) 6. Strength of fractured rocks ============================== The strength of a fractured rock is defined as the external load at which a fracture is created that connects the left side of the lattice to the right side.
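This stopping criterion, a cluster of broken sites connecting the left edge of the lattice to the right edge, can be implemented as a breadth-first search over the broken sites. The sketch below is our own illustration and assumes nearest-neighbour (4-site) connectivity, which the report does not state explicitly:

```python
from collections import deque

def spans_left_right(broken, L):
    """True if the broken sites (a set of (row, col) pairs) contain a
    nearest-neighbour path from column 0 to column L - 1."""
    seeds = [s for s in broken if s[1] == 0]
    seen, queue = set(seeds), deque(seeds)
    while queue:
        i, j = queue.popleft()
        if j == L - 1:
            return True
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in broken and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

L = 8
assert spans_left_right({(3, j) for j in range(L)}, L)       # horizontal crack spans
assert not spans_left_right({(i, 4) for i in range(L)}, L)   # vertical crack does not
```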
The stress per fiber is calculated and plotted with respect to the number of initial fractures in Fig. \[stressfrac\] for different $L$ values. The sudden drops in the plot for $L=32$ and $L=64$ indicate that when the number of initial fractures is sufficiently increased, there already exists a large cluster of broken fibers which on further loading percolates from the left of the lattice to the right of the lattice. When the strength is close to $0$, it means that the initial number of fractures is so high that the rock is already broken. For $L=128$ the sudden drop is expected to appear if the number of fractures is increased beyond $200$. Fig. \[initialfracpic\](a) shows an initial configuration of the placement of initial fractures without including the lattice sites. The upper bound on the length of the fractures was taken to be $5$ units. Fig. \[initialfracpic\](b) shows the initial configuration of fractures considering the lattice sites. Fig. \[percpointpic\](a) shows the state of the lattice at the breaking point. Red dots indicate broken lattice sites. Green dots in Fig. \[percpointpic\](b) represent the largest cluster at breakdown. In Fig. \[percpointpic\](c) we have shown how the largest cluster evolved. Purple dots represent the largest cluster just before breakdown and (purple $+$ blue) dots represent the largest cluster at breakdown. Fig. \[fluctuation\] shows the fluctuation of the strengths of the fractured rocks as the number of initial fractures is increased. 7. Conclusion ============= To model a fractured rock in $2-D$, a $2-D$ lattice of length $L$ is considered where each site is assumed to be a fiber. Each site has its own breaking threshold value drawn from a uniform distribution between $[0:1]$. Cracks are applied in the form of broken sites, and equal load redistribution is carried out until the sample breaks.
A sample is assumed to be broken when a fracture is created which connects the left side of the lattice to the right side. The strength of the rock is defined to be the load per fiber at which the sample breaks. The strength was plotted with respect to the number of initial fractures. We notice sudden drops in the strength, which are due to the large clusters of sites being formed in the lattice. These large clusters weaken the lattice. The length distributions of the fractures were also plotted. \ 8. Thoughts for Future Plans ============================ The model presented in this report is a very simplified one. Some modifications are needed in this model to make it more realistic. Initially we were breaking the diagonal fractures into horizontal and vertical lines as shown in Fig. \[fracprocess\]. Instead, the diagonal fractures can be constructed so that they fall on the lattice sites while remaining diagonal, as shown in Fig. \[nobreak\](a). Fig. \[nobreak\](b) shows the final state of the broken lattice when load is applied only around the fracture ends. We will carry out the same analysis for such a system as done in this report. Then we will take into account the fractal dimension of the fracture centres while creating the DFN model and carry out the same analysis. We are also planning to include local load sharing dynamics in the model such that the load distribution is more localized. In this case the sites which are neighbours of the broken sites will have a greater probability of breaking, which is more realistic in nature. Acknowledgement =============== The research work in this paper is a result of the scientific collaboration in the INDNOR project 217413/E20 funded by the Research Council of Norway. F. T. Pierce, J. Text. Inst. [**17**]{}, T355 (1926). C. Darcel, O. Bour, P. Davy, J.-R.
de Dreuzy, [*Connectivity properties of two-dimensional fracture networks with stochastic fractal correlation*]{}, Water Resources Research, 39(10) (2003). J. R. Lund, J. P. Byrne, Civil Eng. and Env. Syst. [**00**]{}, 1-8 (2000). S. Pradhan, A. Hansen and B. K. Chakrabarti, [*Failure processes in elastic fiber bundles*]{}, Rev. Mod. Phys. [**82**]{}, 499 (2010). H. E. Daniels, Proc. R. Soc. London, Ser. A [**183**]{}, 405 (1945). C. Darcel, O. Bour, P. Davy, [*Stereological analysis of fractal fracture networks*]{}, Journal of Geophysical Research, 108(B9) (2003). O. Bour, P. Davy, C. Darcel, N. Odling, [*A statistical scaling model for fracture network geometry, with validation on a multiscale mapping of a joint network (Hornelen Basin, Norway)*]{}, Journal of Geophysical Research, 107(B6) (2002). P. C. Hemmer, A. Hansen, J. Appl. Mech. [**59**]{}, 909 (1992). A. Hansen, P. C. Hemmer, Phys. Lett. A [**184**]{}, 394 (1994). M. Kloster, A. Hansen, P. C. Hemmer, Phys. Rev. E [**56**]{}, 2615 (1997). D. Sornette, J. Phys. I France [**2**]{}, 2089 (1992).
--- abstract: 'We present a protocol for measuring the quadrature of a harmonic oscillator (HO). The HO is coupled to a qubit, with an interaction modulated by the qubit control and effectively proportional to the HO quadrature $I$. Repeated measurement of the qubit leads to gradually increasing information on the quadrature $I$, leading to squeezing. We derive an analytical formula for the quadrature variance, $(\Delta I)^2 = 1/(1+4\phi^2 s)$, with $\phi$ the product of interaction strength and interaction time and $s$ the number of repetitions of the measurement. We discuss the robustness of this scheme against decoherence. We find that this protocol could lead to significant squeezing in a realistic setup formed of a superconducting flux qubit used to measure an electrical or mechanical resonator.' author: - 'M. Canturk' - 'A. Lupascu' date: title: 'Quadrature readout and generation of squeezed states of a harmonic oscillator using a qubit-based indirect measurement' --- *Introduction.—* The quadratures of a quantum harmonic oscillator (HO) are operators defined as $I= a + a^\dag $ and $Q=-i(a - a^\dag)$, with $a$ ($a^\dag$) the HO annihilation (creation) operator. The variances for these operators are constrained by the uncertainty principle, which imposes $(\Delta I)^2 \, (\Delta Q)^2 \geq 1$. Squeezed states are characterized by a variance in one quadrature reduced below 1 at the expense of increased uncertainty in the other quadrature. Quadratures are constants of motion for a HO, which allows, in principle, their high precision measurement using a quantum non-demolition readout [@braginsky_quantum_1992]. Therefore, by monitoring one of the two quadratures, a signal acting on the HO can be detected with a precision only limited by the ability to prepare the chosen quadrature in a low uncertainty state, making quadratures useful for sensitive detection [@caves_measurement_1980]. 
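These conventions can be checked numerically on a truncated Fock space: with $I = a + a^\dag$ and $Q = -i(a - a^\dag)$, the commutator is $[I,Q] = 2i$ and the vacuum saturates the uncertainty relation with $(\Delta I)^2 = (\Delta Q)^2 = 1$. A minimal sketch (the truncation dimension is an illustrative choice):

```python
import numpy as np

dim = 30                                        # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator
I = a + a.conj().T                              # quadrature I
Q = -1j * (a - a.conj().T)                      # quadrature Q

vac = np.zeros(dim)
vac[0] = 1.0                                    # vacuum state |0>
var_I = np.real(vac @ (I @ I) @ vac) - np.real(vac @ I @ vac) ** 2
var_Q = np.real(vac @ (Q @ Q) @ vac) - np.real(vac @ Q @ vac) ** 2

# [I, Q] = 2i * identity, exact away from the truncation edge
comm = I @ Q - Q @ I
print(var_I, var_Q, comm[0, 0])                 # -> 1.0 1.0 2j
```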
Squeezed states have applications also in quantum measurements [@didier_fast_2015] and quantum information based on continuous variables [@lloyd_quantum_1999]. Recently, developments in the field of control of mechanical resonators have led to the experimental demonstration of preparation and detection of squeezed states [@wollman_quantum_2015; @pirkkalainen_squeezing_2015; @lecocq_quantum_2015; @lei_quantum_2016]. In the field of superconducting circuits, squeezed states of superconducting electromagnetic resonators have become an essential ingredient in quantum limited amplifiers (see *e.g.* Ref. [@castellanos-beltran_amplification_2008]). Various methods have been proposed to implement squeezing in mechanical systems, including back-action evading schemes based on two-tone driving [@clerk_back-action_2008], engineered dissipation [@kronwald_arbitrarily_2013], parametric driving [@szorkovszky_mechanical_2011; @vinante_feedback-enhanced_2013], stroboscopic measurements [@ruskov_squeezing_2005], pulsed optomechanics [@vanner_pulsed_2011], and squeezed light injection [@jahne_cavity-assisted_2009]. In superconducting electromagnetic resonators, squeezing relies on non-linearities due to Josephson junctions and parametric amplification [@everitt_superconducting_2004; @zagoskin_controlled_2008]. Nevertheless, finding versatile and efficient methods to generate squeezed states remains a topic of growing importance. In this paper, we present a method to perform high fidelity quadrature measurements and generate squeezed states of a HO. The HO interacts with the qubit via a $(a+a^\dag)\sigma_z$ interaction, where $\sigma_z$ is a Pauli operator in the qubit energy eigenbasis. The qubit is controlled with resonant pulses, used to induce transitions between its energy eigenstates, separated by half the period of the HO.
A superposition of qubit energy eigenstates acquires a phase, dependent on the quadrature $I$, which is detected in a Ramsey-type experiment. We show how repetition of this sequence leads to increasing information on the quadrature $I$ and a corresponding reduction in the uncertainty $\Delta I $ corresponding to squeezing. We discuss the application of this protocol to measurement of superconducting electromagnetic resonators and nano-mechanical resonators, taking into account non-idealities including decoherence and qubit detection errors. We note that our proposed scheme involves an effective modulation of the interaction between the HO and the qubit detector, bearing a connection with the generic modulation scheme of Thorne *et al.* [@thorne_quantum_1978]. The periodic interaction has similarities with stroboscopic measurements [@ruskov_squeezing_2005], with one important difference being that the interaction is continuous, leading to increased coupling strength. The same qubit control pulse scheme was proposed for ac-magnetic field coherent [@taylor_high-sensitivity_2008] and incoherent [@kolkowitz_coherent_2012; @bennett_measuring_2012] detection and shown to be amenable to classical quadrature measurements [@Bal_2012_QubitDetector]. In Ref. [@rao_heralded_2016], a similarly modulated interaction is used for heralded cooling and squeezing. In marked contrast with Ref. [@rao_heralded_2016], the choice we take for qubit detection implements quadrature measurement and leads to generation of low variance states for any measurement result. ![\[fig1\](color online). (a) Pulse sequence (top) used to control the qubit and modulation function for the qubit-HO interaction (bottom). (b) The quadrature average $\langle I \rangle$ as a function of the measurement step $n$, for a set of simulated trajectories. (c) The distribution of $\langle I \rangle$ and $(\Delta I)^2$ after $s=500$ measurement steps, extracted from 500 trajectories. 
(d) The average of qubit readout results over the last 50 points in each measurement trajectory with $s=500$, versus $\langle I \rangle$. In all the simulations $\phi=0.159$.](fig1.eps){width="86mm"} *Measurement protocol.—* We consider a system formed of a HO coupled to a qubit, with the Hamiltonian $H = H_{\t{HO}} + H_{\t{qb}} + H_{\t{qb,c}} + H_{\t{int}}$. We have $H_{\t{HO}} = \omega_\t{r} a^\dag a$, $H_{\t{qb}} = -\frac{\omega_\t{ge}}{2}\sigma_z$, $ H_{\t{qb,c}} = f(t)\sigma_x$, and $H_{\t{int}} = g (a + a^\dag)\sigma_z$, with $\omega_\t{r}$ the HO resonance frequency, $\sigma_z$ ($\sigma_x$) Pauli z(x) operators in the qubit energy eigenbasis, $\omega_\t{ge}$ the qubit transition frequency, $f(t)$ a qubit control term, and $g$ the HO-qubit coupling strength. The qubit is controlled with resonant pulses, *i.e.* by setting $f(t) = A(t) \cos (\omega_\t{ge} t + \varphi(t))$, with the amplitude $A(t)$ and the phase $\varphi(t)$ changing slowly as a function of time. We make a transformation to a rotating frame, described by the unitary operator $U_\t{rf} = e^{i(H_\t{HO} + H_{\t{qb}})t}$. In this frame the Hamiltonian is $H_\t{rf} = g(a e^{-i\omega_r t} + a^\dag e^{i\omega_r t})\sigma_z + \frac{A(t)\cos \varphi (t)}{2} \sigma_x - \frac{A(t)\sin \varphi (t)}{2} \sigma_y$, where we used the rotating wave approximation. ![\[fig2\] Top three panels: the probability of a measurement sequence with $n$ results $r=1$, the quadrature average, and the quadrature variance, respectively, versus measurement step $n$ obtained from Eqs. \[eq:probsofn\], \[eq:Iavn\], and \[eq:I2avn\]. Bottom panel: fidelity against a squeezed state versus $n$. We take $\phi=0.159$ and $s=64$.](fig2.eps){width="86mm"} The measurement protocol consists of repeating the procedure shown schematically in Fig. \[fig1\](a). 
The qubit and the HO start in a separable state ${|{g}\rangle} \otimes {|{\alpha}\rangle}$, where ${|{g}\rangle}$ (${|{e}\rangle}$) is the qubit ground (excited) state and ${|{\alpha}\rangle}$ is an arbitrary HO state. Next, the qubit is controlled using a Carr-Purcell-Meiboom-Gill (CPMG) type control sequence [@meiboom_modified_1958], consisting of the pulses $\left(\frac{\pi}{2}\right)_x - \left[ \left(\pi\right)_x \right]^{N_\t{p}} -\left(\frac{\pi}{2}\right)_y$, as shown schematically in Fig. \[fig1\](a). Each rotation $\theta_\beta$ is a rotation of angle $\theta$ around axis $\beta = x$ or $y$. The first control pulse changes the qubit state to $\frac{1}{{\sqrt{2}}}({|{g}\rangle} - i {|{e}\rangle})$. The evolution of the combined system during the time interval between the initial and final pulses is given by the unitary operator $U_e = U_c \mathcal{T}\exp{\left(-i \int_{0}^{T_\t{e}}dt\, H_\t{eff}(t)\right)}$, where the effective Hamiltonian $H_\t{eff} (t) = g \chi (t) (a e^{-i\omega_r t} + a^\dag e^{i\omega_r t})\sigma_z $ and $U_c = I_\t{qb}$, the identity operator for the qubit, for $N_\t{p}$ even, and $U_c = e^{-i\pi/2 \sigma_x}$ for $N_\t{p}$ odd. After the final pulse, the qubit is measured projectively, and the measurement result $r=1$ ($-1$), corresponding to projection in the excited (ground) state, is recorded. Following measurement, the qubit is reset to its ground state, in preparation for the next repetition. The evolution of the coupled qubit-HO system between the two $\pi/2$ pulses in Fig. \[fig1\](a) is exactly described by the Hamiltonian $H_\t{avg} = \frac{2}{\pi} g \sigma_z I$, with the quadrature $I = (a+a^\dag)$, obtained by averaging the Hamiltonian $H_\t{eff} (t)$ over the complete duration of the interaction. Qualitatively speaking, the qubit superposition $\frac{1}{{\sqrt{2}}}({|{g}\rangle} - i {|{e}\rangle})$ prepared by the first $\pi/2$ pulse acquires a phase that depends on the quadrature $I$.
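The $2/\pi$ prefactor in $H_\t{avg}$ is the overlap of the square-wave modulation $\chi(t)$, which flips sign at each $\pi$ pulse, with the rotating quadrature. A quick numerical check, assuming $\chi(t)$ is a unit square wave in phase with $\cos\omega_r t$:

```python
import numpy as np

# One HO period, uniformly sampled; chi flips sign every half period,
# in phase with cos(w_r t) (assumption about the pulse timing).
theta = np.linspace(0.0, 2.0 * np.pi, 400000, endpoint=False)
chi = np.sign(np.cos(theta))

avg_cos = float(np.mean(chi * np.cos(theta)))   # period average of chi(t) cos(w_r t)
avg_sin = float(np.mean(chi * np.sin(theta)))   # the sine component averages to zero
print(avg_cos, 2.0 / np.pi)
```

The first average reproduces $2/\pi \simeq 0.6366$, so only the $I$ quadrature survives the time average in $H_\t{eff}(t)$.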
The combination of the $(\pi/2)_y$ pulse and measurement in the energy eigenbasis constitutes a measurement in the $\sigma_x$ eigenbasis, which provides information on the quadrature $I$. *Analysis of the measurement process.—* Next, we present an analysis of the measurement process. We consider a series of $s$ repetitions of the protocol illustrated in Fig. \[fig1\](a). For repetition $i$ ($i=\overline{1,s}$), the starting state of the combined system is ${|{g}\rangle}\otimes {|{\alpha_{i-1}}\rangle}$. After interaction and immediately prior to measurement, the state becomes ${|{g}\rangle} \otimes D_\t{g}{|{\alpha_{i-1}}\rangle} + {|{e}\rangle} \otimes D_\t{e}{|{\alpha_{i-1}}\rangle}$, where $D_\t{g}=-\frac{1}{2}(D-iD^\dag)$ and $D_\t{e}=-\frac{1}{2}(D+iD^\dag)$. Here $D=\mathcal{D}(i\phi)$, with $\mathcal{D}(\beta) = e^{\beta a^\dag - \beta^* a}$ the displacement operator of amplitude $\beta$ [@walls_1995_1], and $\phi = \frac{2}{\pi}g T_e$. The measurement result $r=-1$ ($1$) occurs with probability $P_g = || D_\t{g}{|{\alpha_{i-1}}\rangle} ||^2$ ($P_e = || D_\t{e}{|{\alpha_{i-1}}\rangle} ||^2$) and induces a post-measurement state ${|{\alpha_i}\rangle} = D_\t{g}{|{\alpha_{i-1}}\rangle} / {\sqrt{P_g}}$ ($D_\t{e}{|{\alpha_{i-1}}\rangle} / {\sqrt{P_e}}$). By iteration, the probability to obtain a set of measurements such that $n$ of the $s$ results are $+1$, is given by $P_{(s-n,n)} = || D_e^n D_g^{s-n} {|{\alpha_0}\rangle} ||^2$ and the resulting state is $D_e^n D_g^{s-n} {|{\alpha_0}\rangle}/{\sqrt{P_{(s-n,n)}}}$. We note that the probability and the conditioned state are independent of the order in which the $n$ results of value $1$ are obtained, due to $\left[ D_g,D_e \right] = 0$. We first analyze the measurement action by stochastic numerical simulations. The HO is prepared in its vacuum state. We simulate a set of measurement sequences, each consisting of $s$ measurements. 
Within each sequence, we assign at each step a measurement result $r$, by drawing the random number $r=1$ ($-1$) with probability $P_e$ ($P_g$), and we also assign the corresponding conditioned state. In Fig. \[fig1\](b) we show, within each sequence, the evolution of the average quadrature $\langle I \rangle$ versus the measurement step. We observe that after undergoing fluctuations, $\langle I \rangle$ settles to a nearly constant value. In Fig. \[fig1\](c) we show the histogram of the average $\langle I \rangle$ and of the variance $(\Delta I)^2$. The distribution of $\langle I \rangle$ is consistent with the initial state probability, whereas $\Delta I$ is significantly reduced compared to the initial distribution. These features are a consequence of the quantum non-demolition type of interaction. Remarkably, the values taken by $\langle I\rangle$ are discrete, a feature that reflects the discrete nature of the information acquired from binary qubit readout results. In Fig. \[fig1\](d) we show the average of the last few measurement results versus the final $\langle I \rangle$ for each sequence. The strong correlation demonstrates that the qubit readout is a suitable meter for the quadrature $I$. The results in Fig. \[fig1\] correspond to $\phi=0.159$. We observe similar results for other values of $\phi$, with a general tendency for $\langle I \rangle$ to converge faster and for $(\Delta I)^2$ to decrease as $\phi$ increases. We also observe similar results when the HO is prepared in coherent or thermal states. We discuss next the properties of the measurement conditioned states. We consider the case in which the initial state of the HO is a coherent state of amplitude $\alpha_0$.
The probability to detect the result $r=1$ for $n$ times out of $s$ repetitions, the corresponding average, and the corresponding average of the square of the quadrature are given respectively by $$\label{eq:probsofn} P_{(s-n,n)} = \frac{\big(-1\big)^{\frac{s}{2}-n}e^{-2\Re\{\alpha_0\}^2}}{2^{2s}} \sum_{k=0}^{2(s-n)}\sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose\ell} i^{(\ell-k)} e^{+2\big(\Re\{\alpha_0\}+i\phi(s-k-\ell)\big)^2},$$ $$\label{eq:Iavn} \left\langle I\right\rangle_{(s-n,n)} = \frac{\big(-1\big)^{\frac{s}{2}-n}e^{-2\Re\{\alpha_0\}^2}}{2^{2s}P_{(s-n,n)}} \sum_{k=0}^{2(s-n)} \sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose \ell} i^{\ell-k} 2\bigg(\Re\{\alpha_0\}+i\phi(s-k-\ell)\bigg) e^{+2\big(\Re\{\alpha_0\}+i\phi(s-k-\ell)\big)^2},$$ and $$\label{eq:I2avn} \left\langle I^2\right\rangle_{(s-n,n)} = \frac{\big(-1\big)^{\frac{s}{2}-n}e^{-2\Re\{\alpha_0\}^2}} {2^{2s}P_{(s-n,n)}} \sum_{k=0}^{2(s-n)} \sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose \ell} i^{\ell-k} \bigg(1+4\big(\Re\{\alpha_0\}+i\phi(s-k-\ell)\big)^2\bigg) e^{+2\big(\Re\{\alpha_0\}+i\phi(s-k-\ell)\big)^2}$$ (see [@si]). Using these expressions, we calculate and show in Fig. \[fig2\] the probability for each result, which is given by $P_{(s-n,n)}$ multiplied by the combinatorial factor ${s\choose n}$, the average, and the variance versus $n$. These results show that measurement conditioned states have reduced variance in the quadrature $I$. It is interesting to consider whether the resulting states are squeezed states, as generated by a squeezing operator $\mathcal{S}(\epsilon) = \exp{ \left( \frac{\epsilon ^*}{2}a^2 - \frac{\epsilon}{2}{a^\dag}^2 \right) }$ [@walls_1995_1]. In Fig. \[fig2\] we also show the fidelity of the measurement conditioned state with respect to the state $\mathcal{D}(\langle I\rangle)_{s-n,n}\mathcal{S}(-\log (\Delta I)_{s-n,n}){|{0}\rangle}$. 
We find that, besides having reduced variance, the states prepared by measurement have a very high fidelity with respect to states generated by the squeezing operator. We next consider the dependence of the variance on the number of measurement steps. For an initial vacuum state, the variance of the most likely state ($n=s/2$) as well as its average weighted over the probabilities of resulting states is shown in Fig. \[fig3\] for two values of $\phi$. Based on equations \[eq:probsofn\],\[eq:Iavn\], and \[eq:I2avn\], we derived an analytical approximation for the variance [@si], $$\label{eq:approxvariance} (\Delta I)^2_{s-n,n} = 1/(1 + 4 \phi^2 s),$$ which is in excellent agreement with the exact calculations, as shown in Fig. \[fig3\]. *The role of qubit dephasing.—* Given the fact that quadrature measurement relies on the detection of the phase of a qubit superposition, dephasing of a qubit induced by its environment should be considered. In the presence of dephasing, the projection operators $D_{\t{g}(\t{e})}$ become $D_{\t{g}(\t{e})}=-\frac{1}{2}(D-(+)e^{i\tilde{\phi}}iD^\dag)$, where $\tilde{\phi}$ is a random phase acquired by the qubit due to noise. The state conditioned by a given series of measurement results $r_1$, $r_2$,...,$r_s$ becomes a density matrix when averaged over noise realizations, and is given by $$\rho_\mathbf{r}=\frac{1}{2^{2s}}\sum_{\mathbf{q_1},\mathbf{q_2}}i^{\mathbf{r}(\mathbf{q_1}-\mathbf{q_2})}C_{\mathbf{q_1},\mathbf{q_2}}D^{s-t_1}{|{\alpha}\rangle}{\langle{\alpha}|}{D^\dag}^{s-t_2},$$ where $\mathbf{q_1}$ and $\mathbf{q_2}$ are vectors of length $s$ with components $0$ or $1$, $t_{1(2)}$ is the sum of the components of $\mathbf{q_{1(2)}}$, and $C_{\mathbf{q_1},\mathbf{q_2}}=\exp{\left( -\frac{1}{2} (\mathbf{q_1}-\mathbf{q_2}) \mathbf{\widetilde{W}}(\mathbf{q_1}-\mathbf{q_2}) \right)}$, with $\mathbf{\widetilde{W}}$ the correlation matrix for the noise $\tilde{\phi}$. 
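The stochastic simulations are straightforward to reproduce in a truncated Fock space. The sketch below (truncation dimension, seed, repetition number, and trajectory count are illustrative choices) propagates the two Kraus operators $D_\t{g}$ and $D_\t{e}$ from the vacuum and compares the mean conditioned variance with Eq. (\[eq:approxvariance\]):

```python
import numpy as np

rng = np.random.default_rng(1)

dim = 120     # Fock-space truncation (illustrative; ample for the amplitudes reached)
phi = 0.159   # interaction phase, as in Fig. 1
s = 50        # measurement repetitions per trajectory (illustrative)
ntraj = 200   # number of simulated trajectories (illustrative)

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)        # annihilation operator
I = a + a.conj().T                                  # quadrature I

# D = D(i*phi) = exp(i*phi*(a + a^dag)), built from the eigendecomposition
# of the Hermitian generator so only numpy is needed.
lam, V = np.linalg.eigh(phi * I)
D = V @ np.diag(np.exp(1j * lam)) @ V.conj().T
Dg = -0.5 * (D - 1j * D.conj().T)   # Kraus operator for result r = -1
De = -0.5 * (D + 1j * D.conj().T)   # Kraus operator for result r = +1

variances = []
for _ in range(ntraj):
    psi = np.zeros(dim, complex)
    psi[0] = 1.0                                    # HO prepared in the vacuum state
    for _ in range(s):
        branch_e = De @ psi
        pe = float(np.linalg.norm(branch_e) ** 2)   # P_e; P_g = 1 - P_e
        psi = branch_e if rng.random() < pe else Dg @ psi
        psi /= np.linalg.norm(psi)                  # conditioned (collapsed) state
    avg_I = float(np.real(psi.conj() @ (I @ psi)))
    avg_I2 = float(np.real(psi.conj() @ (I @ (I @ psi))))
    variances.append(avg_I2 - avg_I ** 2)

print(np.mean(variances), 1.0 / (1.0 + 4.0 * phi**2 * s))
```

Since $D$ is unitary on the truncated space, $D_\t{g}^\dagger D_\t{g} + D_\t{e}^\dagger D_\t{e} = \mathbb{1}$ holds exactly, so the sampled probabilities are well defined at any truncation.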
We considered quadrature measurement with $g=1$ MHz, $\omega_\t{r}=2\pi\times200$ MHz, $N_\t{p}=50$, and noise in the qubit frequency $\omega_{ge}$ with a spectral density $A_\omega/{\lvert\omega\rvert}$. This type of noise spectral density is typical in superconducting qubits [@bylander_2011_noisePCQ]; we take a typical value $A_\omega=(1.2\times 10^7\,\t{rad/s})^2$. A comparison of the variance without and with dephasing is shown in Fig. \[fig4\]. This level of noise produces a negligible effect on quadrature squeezing up to $s=18$. Even with significantly larger noise, $A_\omega=(2.4\times 10^7\,\t{rad/s})^2$, squeezing is degraded by less than 5%. ![\[fig3\](color online). Average variance (triangles) and variance for the symmetric measurement ($n=s/2$) (dots) versus the number of measurement steps $s$. The solid line is the approximation in Eq. \[eq:approxvariance\]. The left and right panels correspond to $\phi=0.08$ and $\phi=0.159$ respectively.](fig3.eps){width="86mm"} *Experimental implementation.—* We briefly discuss the prospects for experimental implementation. We consider a superconducting flux qubit used to measure either an electrical or a mechanical resonator. The flux qubit has $\omega_{ge}=2\pi\times 10.8$ GHz, an energy level splitting at the symmetry point $\Delta= 2 \pi \times 4\,\t{GHz}$, a persistent current $I_\t{p}=300$ nA, an energy relaxation time $T_{\t{1,qb}}=10\,\mu$s, an effective temperature $T_\t{qb}=50$ mK, and is subjected to intrinsic flux noise with a spectral density $A_\Phi/{\lvert\omega\rvert}$ with $A_\Phi = 1\,(\upmu \Phi_0)^2$ [@orgiazzi_2016_FluxQubitsPlanar; @Stern_2014_3DcavityFluxQubits; @Yan_2016_FluxQubitRevisited]. The HO has $\omega_\t{r}=2\pi\times 200$ MHz, a quality factor $Q=10,000$, and a temperature $T_\t{HO}=15$ mK.
A coupling strength $g=2\pi\times 2$ MHz is achievable by inductive coupling of an electrical superconducting resonator or by embedding a moving beam into the qubit arm, similarly to the superconducting interferometer setup in Ref. [@etaki_motion_2008]. With the numbers given above, we find that a HO initially in its thermal state can be brought into a squeezed state with a variance reaching $(\Delta I)^2 = 0.4$. We note that the assumed value of $g$ is conservative. Larger coupling of the qubit to a mechanical HO is envisioned with optimized setups, and coupling to an electrical HO can straightforwardly be made over an order of magnitude larger than considered, leading to larger squeezing. We expect that further optimization of other parameters of the measurement protocol will also result in increased squeezing. ![\[fig4\](color online). Variance without noise (crosses) and with noise, with a spectral density $A_\omega=(1.2\times 10^7\,\t{rad/s})^2$ (empty dots) and $A_\omega=(2.4\times 10^7\,\t{rad/s})^2$ (empty squares) versus the number of measurements.](fig4.eps){width="75mm"} *Conclusions and outlook.—* Quadrature measurements and generation of squeezed states are very important in various fields, including quantum sensing, quantum optics, quantum information, and nanomechanics. The protocol for generation of squeezed states that we presented in this paper makes use of a very basic resource: a two-level system with control and measurement. This aspect makes it attractive from a fundamental point of view and at the same time amenable to experimental implementations. Future work will address theoretical aspects of optimization of this protocol for optimal squeezing and tests of the experimental implementation. *Acknowledgements.—* We thank Martin Otto and Ali Yurtalan for preliminary studies of coupling of a mechanical resonator to a flux qubit, and Aashish Clerk and Eyal Buks for useful discussions.
We acknowledge support from the Gerald Schwartz and Heather Reisman Foundation, NSERC, the Ontario Ministry of Research and Innovation, Industry Canada, and the Canadian Microelectronics Corporation. During part of this work, M.C. was supported for one year by the Scientific and Technological Research Council of Turkey and A.L. was supported by an Ontario Early Research Award.

[32]{} doi:10.1103/RevModPhys.52.341; doi:10.1103/PhysRevLett.115.203601; doi:10.1103/PhysRevLett.82.1784; doi:10.1126/science.aac5138; doi:10.1103/PhysRevLett.115.243601; doi:10.1103/PhysRevLett.117.100801; doi:10.1103/PhysRevX.5.041037; doi:10.1038/nphys1090; http://stacks.iop.org/1367-2630/10/i=9/a=095010; doi:10.1103/PhysRevA.88.063833; doi:10.1103/PhysRevLett.107.213603; doi:10.1103/PhysRevLett.111.207203; doi:10.1103/PhysRevB.71.235407; doi:10.1073/pnas.1105098108; doi:10.1103/PhysRevA.79.063819; doi:10.1103/PhysRevA.69.043804; doi:10.1103/PhysRevLett.101.253602; doi:10.1103/PhysRevLett.40.667; doi:10.1038/nphys1075; doi:10.1126/science.1216821; doi:10.1088/1367-2630/14/12/125004; doi:10.1038/ncomms2332; doi:10.1103/PhysRevLett.117.077203; doi:10.1063/1.1716296; doi:10.1103/PhysRevB.93.104518; doi:10.1038/ncomms12964
doi:10.1038/nphys1057

**Supplementary material: Quadrature readout and generation of squeezed states of a harmonic oscillator using a qubit-based indirect measurement**

Derivation of the expressions for probability, quadrature average and variance \[sec:si:analyt:pro:exp:var\]
============================================================================================================

In this section we present a derivation of the expressions for $P_{(s-n,n)}$, $\langle I \rangle$, and $\langle I^2 \rangle$ in Eqs. (1)–(3) of the main text. The probability to obtain $n$ results $r=1$ in a series of $s$ measurements is $P_{(s-n,n)} = {\langle{\alpha_0}|}D^{\dagger (s-n)}_{g}D^{\dagger n}_{e}D^{n}_{e} D^{(s-n)}_{g}{|{\alpha_0}\rangle}$, with ${|{\alpha_0}\rangle}$ the initial harmonic oscillator (HO) state, taken to be a coherent state of complex amplitude $\alpha_0$. We have $P_{(s-n,n)} = i^{s-2n} {\langle{\alpha_0}|}D^{2n}_{e} D^{2(s-n)}_{g}{|{\alpha_0}\rangle}$, where we used $\left[ D_e, D_g \right]=0$, $D_g^\dagger = i D_g$, and $D_e^\dagger = -i D_e$. Using the binomial theorem, this expression can be expanded as $$P_{(s-n,n)} = \frac{i^{(s-2n)}}{2^{2s}} \sum_{k=0}^{2(s-n)} \sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose \ell} i^{\ell-k} {\langle{\alpha_0}|}D^{2(s-k-\ell)}{|{\alpha_0}\rangle} \label{equ:P(s-n,n):SI}.$$ When the last factor in (\[equ:P(s-n,n):SI\]) is expanded further by employing the formulas $D^n \left|\alpha\right\rangle = e^{in\phi\Re\{\alpha\}}\left|(\alpha + n i\phi)\right\rangle$ and $\left\langle\alpha_{i} |\alpha_{j} \right\rangle = \exp\left( \alpha^*_{i}\alpha_{j} -\frac{\left|\alpha_{i}\right|^2}{2} -\frac{\left|\alpha_{j}\right|^2}{2}\right)$ [@walls_1995_1SI], the expression for $P_{(s-n,n)}$ given in Eq. (1) of the main text is obtained.
The quadrature average is $\left\langle I\right\rangle_{(s-n,n)} = {\langle{\alpha_0}|}D^{\dagger (s-n)}_{g}D^{\dagger n}_{e}(a+a^\dagger)D^{n}_{e} D^{(s-n)}_{g}{|{\alpha_0}\rangle}/P_{(s-n,n)} $. Using the relations above and $\left[a+a^\dagger , D_g \right] = \left[a+a^\dagger , D_e \right] = 0$, we obtain $\left\langle I\right\rangle_{(s-n,n)} = i^{s-2n}{\langle{\alpha_0}|} (a+a^\dagger)D^{2n}_{e} D^{2(s-n)}_{g}{|{\alpha_0}\rangle}/P_{(s-n,n)}$. This is expanded as $$\left\langle I\right\rangle_{(s-n,n)} P_{(s-n,n)} =\frac{\big(-1\big)^{\frac{s}{2}-n}}{2^{2s}} \sum_{k=0}^{2(s-n)} \sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose \ell} i^{\ell-k}{\langle{\alpha_0}|} \big(a+a^\dagger\big)D^{2(s-k-\ell)}{|{\alpha_0}\rangle}. \label{equ:<I>(s-n,n):SI}$$ After using $\left\langle \alpha_{i}\right|I\left|\alpha_{j}\right\rangle = \left(\alpha_{j} + \alpha^*_{i} \right) \left\langle \alpha_{i}\right.\left|\alpha_{j}\right\rangle$, equation (\[equ:<I>(s-n,n):SI\]) yields the final form of $\left\langle I\right\rangle_{(s-n,n)}$ in the main text. Similarly, we can expand $\left\langle I^2\right\rangle_{(s-n,n)} = {\langle{\alpha_0}|}D^{\dagger (s-n)}_{g}D^{\dagger n}_{e}(a+a^\dagger)^2D^{n}_{e} D^{(s-n)}_{g}{|{\alpha_0}\rangle}/P_{(s-n,n)}$ as $$\left\langle I^2\right\rangle_{(s-n,n)} P_{(s-n,n)} = \frac{\big(-1\big)^{\frac{s}{2}-n}}{2^{2s}} \sum_{k=0}^{2(s-n)} \sum_{\ell=0}^{2n} {2(s-n)\choose k}{2n\choose \ell} i^{\ell-k} {\langle{\alpha_0}|}\big(a+a^\dagger\big)^2D^{2(s-k-\ell)}{|{\alpha_0}\rangle} \label{equ:<I2>(s-n,n):SI}.$$ Using $\left\langle \alpha_{i} \right| I^2 \left|\alpha_{j}\right\rangle = \left(1 + \left(\alpha_{j} + \alpha_{i}^*\right)^2 \right) \left\langle \alpha_{i} \right| \left.\alpha_{j}\right\rangle$, we can obtain the final form of $\left\langle I^2\right\rangle_{(s-n,n)}$ in Eq. (3).
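The binomial expansion can be checked numerically. The sketch below (truncated Fock space, illustrative parameters) compares the double-sum expression for $P_{(s-n,n)}$, specialized to $\Re\{\alpha_0\}=0$ where $\langle 0|D^{2(s-k-\ell)}|0\rangle = e^{-2\phi^2(s-k-\ell)^2}$, against a direct evaluation of $|| D_\t{e}^{n} D_\t{g}^{s-n}{|{0}\rangle} ||^2$; the prefactor is written as $i^{s-2n}$, which equals $(-1)^{s/2-n}$ for even $s$.

```python
import numpy as np
from math import comb

phi, s, n = 0.159, 8, 3      # illustrative parameters (s even)
dim = 80                     # Fock truncation, ample for displacements of order s*phi

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
lam, V = np.linalg.eigh(phi * (a + a.conj().T))
D = V @ np.diag(np.exp(1j * lam)) @ V.conj().T      # D = D(i*phi) = exp(i*phi*(a+a^dag))
Dg = -0.5 * (D - 1j * D.conj().T)
De = -0.5 * (D + 1j * D.conj().T)

vac = np.zeros(dim, complex)
vac[0] = 1.0
P_direct = np.linalg.norm(np.linalg.matrix_power(De, n)
                          @ np.linalg.matrix_power(Dg, s - n) @ vac) ** 2

# Double sum of Eq. (1) of the main text with Re{alpha_0} = 0
P_sum = sum(comb(2 * (s - n), k) * comb(2 * n, l) * 1j ** (l - k)
            * np.exp(-2.0 * phi**2 * (s - k - l) ** 2)
            for k in range(2 * (s - n) + 1) for l in range(2 * n + 1))
P_sum *= 1j ** (s - 2 * n) / 2 ** (2 * s)
print(P_direct, P_sum.real)   # the two evaluations agree
```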
We can establish the following relation between the probability, quadrature average, and quadrature square average: $$\frac{1}{P_{(s-n,n)}}\frac{d}{d\phi}\bigg(\phi P_{(s-n,n)}\bigg) = \langle I^2\rangle_{(s-n,n)} - 2\Re\{\alpha\} \langle I\rangle_{(s-n,n)} \label{equ:recurrence:SI}.$$ This relation will be used in the following section.

Derivation of an approximate formula for the variance \[sec:si:approx:varI\]
============================================================================

In this section we derive an expression for the quadrature variance. We assume a starting vacuum state, ${|{\alpha_0}\rangle}={|{0}\rangle}$, and we focus on $n=s/2$ ($s$ is taken even), which is the most likely result. We have $$P_{(\frac{s}{2},\frac{s}{2})} = \sum_{q=0}^{2s} e^{- 2\phi^{2}\big(s-q\big)^{2}} R\big(q\big), \label{equ:(1):SI}$$ where $$R\big(q\big)=\frac{i^{q} }{2^{2s}}\sum_{k=\max\{0,q-s\}}^{\min\{q,s\}}(-1)^{k}{s\choose k}{s\choose q-k}\label{equ:R(q):SI}.$$ The function $R(q)$ is maximal at $q=s$ and symmetric around $q=s$ (i.e. $R (s-a) = R(s+a)$ with $a\leq s$). We have: $$R (s-a) =\frac{(-1)^{-\frac{s-a}{2}}}{2^{2s}} {}_{2}\mathrm{F}_{1}\bigg(-s,-(s-a);(a+1);-1\bigg) = \frac{1}{2^{2s}} {s\choose \frac{s+a}{2}} \label{equ:R(s-a):SI:pre}.$$ The combinatorial factor ${s\choose \frac{s+a}{2}}$ in (\[equ:R(s-a):SI:pre\]) is well described by the normal approximation [@boo:Kenneth2001], $$R(s-a) \approx \frac{1}{2^{s}} \sqrt{\frac{2}{s\pi}} e^{-\frac{a^2}{2s}} \label{equ:R(s-a):SI}$$ if ${\lverta\rvert}$ is not much larger than ${\sqrt{s}}$. With (\[equ:R(s-a):SI\]), equation (\[equ:(1):SI\]) becomes $$P_{\frac{s}{2},\frac{s}{2}} = \sqrt{\frac{2}{s\pi}}\frac{1}{2^{s}} \sum_{b=-\frac{s}{2}}^{\frac{s}{2}} e^{-\beta b^2} \label{equ:(1):SI:2}$$ where $\beta = \big(\frac{2}{s} + 8\phi^2\big)$.
The sum in (\[equ:(1):SI:2\]) is well approximated in the limit $s\rightarrow\infty$ by the Jacobi theta function: $\vartheta_{3}\bigg(0,e^{-\beta}\bigg) = \sum_{b=-\infty}^{\infty} e^{-\beta b^2}$ which is approximated as $$\vartheta_{3}\bigg(0,e^{-\beta}\bigg) \backsimeq \sqrt{\frac{s\pi}{2}} \frac{1}{\sqrt{1 + 4\phi^2 s}}$$for $e^{-\beta}\rightarrow 1$. The final form of the asymptotic probability becomes $$P_{\frac{s}{2},\frac{s}{2}} = \frac{1}{2^{s}\sqrt{1 + 4\phi^2 s}} \label{equ:(1):SI:3}.$$ Using Eq. (\[equ:recurrence:SI\]) for $n=\frac{s}{2}$ and noting that $\langle I\rangle_{(s/2,s/2)} = 0$, the variance becomes $$\big(\Delta I\big)^2_{\frac{s}{2},\frac{s}{2}} = \left\langle I^2\right\rangle_{\frac{s}{2},\frac{s}{2}} = \frac{1}{P_{\frac{s}{2},\frac{s}{2}}}\frac{d}{d\phi}\bigg(\phi P_{\frac{s}{2},\frac{s}{2}}\bigg)=\frac{1}{1+4\phi^2 s} \label{equ:DeltaI(s/2,s/2):pre1},$$ which is result (4) in the main text. Models for decoherence ====================== Pure dephasing due to flux noise\[sec:si:decohere:dephase\] ----------------------------------------------------------- For a superconducting flux qubit, flux noise is the dominant source of dephasing. In the energy eigenbasis, the Hamiltonian of the qubit is $$H^\prime_\t{qb} = -\frac{\omega_{ge}+\delta\omega_{ge}(t)}{2}\sigma_z, \label{equ:Hq:noise}$$ where $\omega_{ge}=\sqrt{\Delta^2+\varepsilon^2}$, with $\Delta$ the so-called qubit gap and $\varepsilon=(2I_p\Phi_0/\hbar)(f-1/2)$, with $f=\Phi/\Phi_0$, where $\Phi$ is the externally applied magnetic flux and $\Phi_0=h/2e$ is the magnetic flux quantum, and $I_p$ the qubit persistent current [@orlando_1999_1]. The term $\delta\omega_{ge}(t)$ is a random component induced by intrinsic fluctuations of magnetic flux. 
The random component $\delta\omega_{ge}(t)$ in (\[equ:Hq:noise\]) is a stochastic process, which can be written as $$\delta\omega_{ge}(t) = \sum_{n=-\infty}^{\infty} a_n e^{i\omega_{n}t} \label{equ:deltaf:SI}$$ where $\omega_{n}=n\times\omega_\mathrm{min}$, with $\omega_\mathrm{min} = \frac{2\pi}{\overline{T}}$, where $\overline{T}$ is a time taken much longer than the duration of the simulated experiment $T$ (see Fig. \[fig:QuadDetect:SI\]). The coefficients $a_n$ are taken as random Gaussian variables. ![Quadrature detection using the qubit. Each $\mathrm{CPMG}_m$ in this schema is represented in Fig. 1.a in the main text. \[fig:QuadDetect:SI\]](SIfig1.eps){width="0.95\linewidth"} The low frequency noise process $\delta \omega_{ge}(t)$ is characterized by a power spectral density (PSD) $S\left(\omega\right) =A_{\omega}/\left|\omega\right|$, where $\sqrt{A_{\omega}} = 2\pi (\varepsilon/\omega_{ge})(I_p/|e|)\sqrt{A_f}$, and the PSD of the flux noise is assumed to be $A_f/{\lvert\omega\rvert}$. We assume $\sqrt{A_f}=10^{-6}\,\Phi_0$ and $I_{p}=300$ nA, typical for a flux qubit. For simulations of noise trajectories, we restrict the noise frequency to the range $\omega_\mathrm{min}\leq \omega \leq\omega_\mathrm{max}$, where $\omega_\mathrm{min}= 2\pi/ (v\overline{T})$ and $\omega_\mathrm{max}=u N_{p} 2\pi/T_{e}$, with $u$ and $v$ taken sufficiently large to reflect the relevant time scales and $N_{p}$ the number of pulses. We construct noise trajectories by using randomly generated complex Fourier coefficients $a_n$, related to the PSD by $S(\omega) = \frac{\overline{T}}{2\pi} \left\langle\left|a_n\right|^2\right\rangle$.
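A real-valued noise trajectory with the stated $1/f$ PSD can be synthesized directly from this Fourier construction; the window length, mode count, and time grid below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

A_omega = (1.2e7) ** 2        # 1/f noise strength in (rad/s)^2, value from the text
T_bar = 1e-3                  # window much longer than one experiment (illustrative)
n_modes = 2000                # number of retained Fourier modes (illustrative)

w_min = 2.0 * np.pi / T_bar
w = w_min * np.arange(1, n_modes + 1)          # omega_n = n * omega_min
S = A_omega / w                                # PSD S(omega) = A_omega / |omega|

# Gaussian coefficients with <|a_n|^2> = (2*pi/T_bar) * S(omega_n); pairing each
# a_n with its conjugate at -omega_n makes delta_omega_ge(t) real.
sigma = np.sqrt(2.0 * np.pi * S / T_bar)
a_n = sigma * (rng.standard_normal(n_modes)
               + 1j * rng.standard_normal(n_modes)) / np.sqrt(2.0)

t = np.linspace(0.0, 20e-6, 512)               # one experimental run (illustrative)
delta_w = 2.0 * np.real(np.exp(1j * np.outer(t, w)) @ a_n)
```

Each trajectory `delta_w` is then sampled at the CPMG pulse times and fed into the qubit phase accumulation.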
Dissipative effects in the coupled system \[sec:si:decohere:lindblad\] ---------------------------------------------------------------------- We model decoherence of the coupled qubit-harmonic oscillator system using the master equation in Lindblad form: $$\begin{aligned} \dot{\rho} = - i\left[\mathcal{H}, \rho\right] + \kappa_{\downarrow} \mathcal{D}[a]\rho + \kappa_{\uparrow} \mathcal{D}[a^\dagger]\rho + \Gamma_{e\rightarrow g} \mathcal{D}[\sigma^{-}]\rho + \Gamma_{g\rightarrow e} \mathcal{D}[\sigma^{+}]\rho + \Gamma_{\varphi} \mathcal{D}[\sigma_{z}]\rho \label{equ:lindblad:SI}\end{aligned}$$ where $\rho(t)$ is the density matrix, $\mathcal{H}$ is the Hamiltonian, and $\mathcal{D}[\mathcal{O}]$ is the dissipator defined by the mapping $$\mathcal{D}[\mathcal{O}]\rho = \mathcal{O} \rho(t) \mathcal{O} ^\dagger -\frac{1}{2}\left( \mathcal{O}^\dagger \mathcal{O} \rho + \rho \mathcal{O}^\dagger \mathcal{O} \right).$$ In equation (\[equ:lindblad:SI\]), $\kappa_{\downarrow} = \kappa (1+n_\mathrm{HO})$ and $\kappa_{\uparrow} = \kappa n_\mathrm{HO}$ are the resonator decay and excitation rates, with average photon number $n_\mathrm{HO}(\omega_{r}) = 1/\big(\exp(\hbar \omega_{r}/k_{B}T_\mathrm{HO})-1\big)$ at frequency $\omega_{r}$ and a finite temperature $T_\mathrm{HO}$ ($k_B$ is the Boltzmann constant), and $\kappa = \omega_r/ Q$ is the decay rate of the resonator with a quality factor $Q$ [@gardiner_2000_1]. In addition, $\Gamma_{e \rightarrow g}$ and $\Gamma_{g\rightarrow e} = \Gamma_{e\rightarrow g} \exp\left(-\hbar \omega_{ge}/k_B T_\mathrm{qb}\right)$ are decay rates for the qubit at a finite temperature $T_\mathrm{qb}$, with $\Gamma_{e \rightarrow g}+\Gamma_{g \rightarrow e}=1/T_{1,\mathrm{qb}}$, where $T_{1,\mathrm{qb}}$ is the relaxation time; $\Gamma_{\varphi}$ is the pure dephasing rate.
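The right-hand side of (\[equ:lindblad:SI\]) can be sketched numerically with the dissipator above. The operators and rates below are illustrative toy values for a single qubit, not the simulation parameters of the text; two properties worth checking are that the generator preserves the trace and the Hermiticity of $\rho$:

```python
import numpy as np

def dissipator(O, rho):
    """D[O] rho = O rho O^dag - (1/2){O^dag O, rho}."""
    OdO = O.conj().T @ O
    return O @ rho @ O.conj().T - 0.5 * (OdO @ rho + rho @ OdO)

def lindblad_rhs(H, rho, jumps):
    """d(rho)/dt of a Lindblad master equation; `jumps` is a list of (rate, operator)."""
    drho = -1j * (H @ rho - rho @ H)
    for rate, O in jumps:
        drho = drho + rate * dissipator(O, rho)
    return drho

# toy single-qubit example (hbar = 1): decay, thermal excitation, pure dephasing
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)
H = -0.5 * 5.0 * sz                                      # illustrative omega_ge = 5
rho = np.array([[0.3, 0.2 - 0.1j], [0.2 + 0.1j, 0.7]])
drho = lindblad_rhs(H, rho, [(0.1, sm), (0.01, sm.conj().T), (0.05, sz)])
print(np.trace(drho))  # ~0: the evolution is trace-preserving
```

The commutator and each dissipator are traceless and map Hermitian $\rho$ to Hermitian $\dot\rho$, so the two checks hold to machine precision.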
The Hamiltonian $\mathcal{H}$ that governs the master equation (\[equ:lindblad:SI\]) is given by $$\mathcal{H} = \sum_{i=\{x,y,z\}} \bigg\{ a f_i^*(t) + a^\dagger f_i(t) + f_{c_i}(t) -\frac{1}{2}f_{\varepsilon_i}(t)\bigg\} \sigma_i + \frac{A(t)}{2}\frac{\Delta}{\omega_{ge}}\bigg\{ \cos\varphi(t) \sigma_x -\sin\varphi(t) \sigma_y \bigg\}, \label{equ:H:fluxnoise}$$ where $$\renewcommand{\arraystretch}{1.6} \begin{array}{lll} f_z = g\frac{\varepsilon}{\omega_{ge}} e^{i\omega_r t}, & f_y = - g\frac{\Delta}{\omega_{ge}} e^{i\omega_r t} \sin\left(\omega_{ge}t\right), & f_x = -g\frac{\Delta}{\omega_{ge}} e^{i\omega_r t} \cos\left(\omega_{ge}t\right),\\ f_{c_z} =-A \frac{\varepsilon}{\omega_{ge}} \cos\left(\omega_{ge}t+\varphi\right), & f_{c_y} = \frac{A}{2}\frac{\Delta}{\omega_{ge}}\sin\left(2\omega_{ge}t+\varphi\right), & f_{c_x} = \frac{A}{2}\frac{\Delta}{\omega_{ge}}\cos\left(2\omega_{ge}t+\varphi\right),\\ f_{\varepsilon_z}=\delta\omega_{ge}(t), &f_{\varepsilon_y}=\delta\omega_{ge}(t)\frac{\Delta}{\varepsilon}\sin(\omega_{ge}t),& f_{\varepsilon_x}=\delta\omega_{ge}(t)\frac{\Delta}{\varepsilon}\cos(\omega_{ge}t) \end{array}$$ where $\delta\omega_{ge}(t)$ is given in (\[equ:deltaf:SI\]). In this study, we omit the influence of the last term in (\[equ:lindblad:SI\]) by setting $\Gamma_{\varphi}=0$, since the flux noise is the dominant source of dephasing and its contribution is embedded into the Hamiltonian (\[equ:H:fluxnoise\]). Derivation of conditional evolution with pure dephasing \[sec:si:dephase\] ========================================================================== With pure dephasing, a random phase $\tilde{\phi}_m$, taking a different value for each repetition $m=1,\ldots,s$, is added to the qubit superposition. The noise is drawn from the appropriate distribution corresponding to the noise spectrum, taking into account the noise modulation due to the CPMG sequence (Fig. \[fig:QuadDetect:SI\]).
Using the projection operator $D_{g(e)}=-\frac{1}{2}(D-(+)e^{i\tilde{\phi}}iD^\dag)$ and following the procedure in the main text, one can obtain the pure state conditioned on the measurement result $\mathbf{r}$: $${|{\alpha_\mathbf{r}}\rangle} = \frac{(-1)^s}{2^s}\prod_{m=s}^{1}\sum_{q_m=0}^{1} i^{r_m q_m} e^{i\tilde{\phi}_m q_m} D^{1-2q_m}{|{\alpha_0}\rangle}= \frac{(-1)^s}{2^s} \sum_{\mathbf{q}=0}^{1} i^{\mathbf{r}\cdot \mathbf{q}} e^{i\mathbf{\tilde{\phi}}\cdot\mathbf{q}} D^{s-2t}{|{\alpha_0}\rangle} \label{equ:purestate:SI}$$where ${|{\alpha_0}\rangle}$ is the initial state, $\mathbf{q}$ is a vector of length $s$ with components $q_i=0,1$ ($i=\overline{1,s}$), $\mathbf{\tilde{\phi}}$ is the vector of length $s$ formed of the values of the random phase $\tilde{\phi}_i$ ($i=\overline{1,s}$), and $t=\sum_{j=1}^{s} q_{j}$. Consequently, the density matrix in the main text is obtained from (\[equ:purestate:SI\]) by averaging over the noise process: $$\rho_\mathbf{r}=\frac{1}{2^{2s}}\sum_{\mathbf{q_1},\mathbf{q_2}}i^{\mathbf{r}\cdot(\mathbf{q_1}-\mathbf{q_2})} C_{\mathbf{q_1},\mathbf{q_2}} D^{s-t_1}{|{\alpha}\rangle}{\langle{\alpha}|}{D^\dag}^{s-t_2},$$ where $C_{\mathbf{q_1},\mathbf{q_2}} = \left\langle e^{i\mathbf{\tilde{\phi}}\cdot(\mathbf{q_1}-\mathbf{q_2})} \right\rangle=\exp{\left( -\frac{1}{2} (\mathbf{q_1}-\mathbf{q_2}) \mathbf{\widetilde{W}} (\mathbf{q_1}-\mathbf{q_2})\right)}$ with $\mathbf{\widetilde{W}}$ the correlation matrix for the noise $\tilde{\phi}_1,\tilde{\phi}_2,\ldots,\tilde{\phi}_s$. The correlation matrix can be written as $$\widetilde{W}_{ij} = \int_{t_i}^{t_{i} + T_e}dt \chi(t)\int_{t_j}^{t_{j} + T_e}dt'\chi(t')\overline{\big(\delta\omega_{ge}(t)\delta\omega_{ge}(t')\big)} = \int_{-\infty}^{\infty} d\omega S(\omega) e^{i\omega(t_j-t_i)} \big|\widetilde{\chi}(\omega)\big|^2 \label{equ:correlation:SI}$$ where $\widetilde{\chi}$ is the Fourier transform of the function $\chi(t)$ depicted in Fig. 1.(a) in the main text.
We find that to a good approximation, the correlation matrix (\[equ:correlation:SI\]) is diagonal, with elements $$\widetilde{W}_{ii} \approx 0.424 \times A_{\omega} \bigg(\frac{\pi}{\omega_r}\bigg)^2 N_{p},\;\;\;\;\forall\;i=1,\ldots,s \label{equ:correlation:Wii}.$$
--- abstract: 'Gravitational radiation in plane-symmetric space-times can be encoded in a complex potential, satisfying a non-linear wave equation. An effective energy tensor for the radiation is given, taking a scalar-field form in terms of the potential, entering the field equations in the same way as the matter energy tensor. It reduces to the Isaacson energy tensor in the linearized, high-frequency approximation. An energy conservation equation is derived for a quasi-local energy, essentially the Hawking energy. A transverse pressure exerted by interacting low-frequency gravitational radiation is predicted.' author: - 'Sean A. Hayward' date: 19th May 2008 title: 'Energy of gravitational radiation in plane-symmetric space-times' --- Introduction ============ Gravitational radiation, as predicted by Einstein gravity, is indirectly observed in such examples as the Hulse-Taylor pulsar, and widely expected to be directly observed in the coming years, offering a new window to understand various astrophysical processes, such as binary inspiral and merger of black holes or neutron stars. However, the textbook theory of gravitational radiation mostly concerns weak radiation, either in the linearized approximation or at infinity in an asymptotically flat space-time [@MTW]. Comparatively little is known about strong-field radiation. One exception is plane gravitational radiation, where exact solutions describe radiation propagating in one direction. The simplest scenario to study interaction effects is the head-on collision of two such beams, as pioneered by Szekeres [@Sz1; @Sz2] and reviewed by Griffiths [@Gri]. More generally, one may study plane symmetric space-times, which in vacuum generally consist of gravitational radiation propagating in opposite directions and interacting [@apw]. 
Much is known about such space-times, including that the interaction is non-linear, that the key dynamical equations can be cast as a complex Ernst equation [@Ern], and that the cross-focusing of the radiation produces a caustic which is generically a curvature singularity, though there are non-generic exceptions [@CH; @cwbh; @snc]. This article introduces an effective energy tensor $\Theta$ for the gravitational radiation, taking a scalar-field form in terms of a complex potential $\Phi$. Then $\Theta$ enters the field equations in the same way as the matter energy tensor, in particular entering an energy conservation law. The Ernst equation is manifestly a wave equation for $\Phi$, generally with a non-linear source, which vanishes for collinear polarization. The method involves a conserved time vector $k^a$, a conserved energy-momentum density $j^a$, a corresponding energy $E$ and a first law for $E$ involving energy-supply and work terms. Surface gravity $\kappa$ is also defined and takes a quasi-Newtonian form. This is intended to complete the same programme of identifying physical quantities and equations which has previously been performed in spherical symmetry [@sph; @1st], cylindrical symmetry [@cyl] and a quasi-spherical approximation [@qs; @SH; @gwbh; @gwe]. These references will be assumed for comparison throughout the text without repeated citation, though the treatment here is self-contained. Metric variables and field equations ==================================== Cartesian coordinates $(z,y)$ on the planes of symmetry will be used, to allow easy comparisons with standard coordinates $(z,\varphi)$ in cylindrical symmetry and $(\vartheta,\varphi)$ in spherical symmetry and the quasi-spherical approximation. It is convenient to use null coordinates $x^\pm$ in the normal space, as they are adapted to gravitational radiation. 
Then the metric can be written locally as $$ds^2=-2e^{2\gamma}dx^+dx^-+A\left(e^{2\phi}\sec2\chi dy^2+2\tan2\chi dydz+e^{-2\phi}\sec2\chi dz^2\right) \label{met}$$ where $(A,\phi,\chi,\gamma)$ are functions of $(x^+,x^-)$. Here $A$ is the specific area, meaning that it is the area of a square coordinate patch $(0,1)\times(0,1)$ in the $(y,z)$ plane. It is invariant up to constant linear transformations of $y$ and $z$, under which it scales by a constant factor. The remaining freedom in $(y,z)$ is by rotations, under which $A$ is invariant. The functions $(\phi,\chi)$ encode the gravitational radiation, as will be seen below. They are invariant up to the above-mentioned transformations of $(y,z)$, which will be treated as fixed henceforth. The remaining function $\gamma$ is invariant up to functional rescalings $x^\pm\mapsto\tilde x^\pm(x^\pm)$, under which it transforms by additive functions of $x^+$ and $x^-$. The variables have been chosen so that the induced metric on the planes of symmetry takes a similar form to that used in the quasi-spherical approximation, with $(dz,dy)$ replaced by $(d\vartheta,\sin\vartheta d\varphi)$, and takes a similar form to that used in cylindrical symmetry. The Szekeres variables $(P,M,Q,W)$ are related by $$P=-\log A,\quad M=-2\gamma,\quad Q=-2\phi,\quad\sinh W=\tan2\chi \label{sz}$$ or $\cosh W=\sec2\chi$. 
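The equivalence of the two stated forms follows from the identity $\cosh^2 W-\sinh^2 W=1=\sec^2 2\chi-\tan^2 2\chi$; a one-line numerical check (the value of $\chi$ is arbitrary, with $|2\chi|<\pi/2$):

```python
import math

chi = 0.3                                        # arbitrary, |2*chi| < pi/2
W = math.asinh(math.tan(2.0 * chi))              # sinh W = tan 2chi
print(math.cosh(W), 1.0 / math.cos(2.0 * chi))   # equal: cosh W = sec 2chi
```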
The six independent components of the Einstein equation may be found directly, or by comparison with the Szekeres form, as $$\begin{aligned} &&2A\partial_\pm\partial_\pm A-(\partial_\pm A)^2 -4A\partial_\pm A\partial_\pm\gamma+4A^2\sec^22\chi\left((\partial_\pm\phi)^2+(\partial_\pm\chi)^2\right) =-16\pi A^2T_{\pm\pm}\label{ei}\\ &&\partial_+\partial_-A=8\pi AT_{+-}\label{ea}\\ &&2A\partial_+\partial_-\phi+\partial_+A\partial_-\phi+\partial_-A\partial_+\phi +4A\tan2\chi(\partial_+\phi\partial_-\chi+\partial_-\phi\partial_+\chi) =4\pi Ae^{2\gamma}\cos^22\chi(T^y_y-T^z_z)\label{ep}\\ &&2A\partial_+\partial_-\chi+\partial_+A\partial_-\chi+\partial_-A\partial_+\chi +4A\tan2\chi(\partial_+\chi\partial_-\chi-\partial_+\phi\partial_-\phi) =4\pi Ae^{2\gamma}\cos^22\chi(e^{2\phi}T^y_z+e^{-2\phi}T^z_y)\label{ec}\\ &&4A^2\partial_+\partial_-\gamma-\partial_+A\partial_-A +4A^2\sec^22\chi(\partial_+\phi\partial_-\phi+\partial_+\chi\partial_-\chi) =-8\pi A^2\left(2T_{+-}+e^{2\gamma}(T^y_y+T^z_z)\right)\label{eg}\end{aligned}$$ where $\partial_\pm=\partial/\partial x^\pm$, $T$ denotes the energy tensor of the matter with $T_{\pm\pm}=T(\partial_\pm,\partial_\pm)$, $T_{+-}=T(\partial_+,\partial_-)$, and the units are such that Newton’s gravitational constant is unity. The equations (\[ei\]) can be regarded as constraint equations on initial null hypersurfaces $\Sigma_\pm$ of constant $x^\mp$, as they are preserved in the $\partial_\mp$ directions due to the Bianchi identities or energy-momentum conservation. The other equations (\[ea\])–(\[eg\]) are then the evolution equations. In vacuum, $T=0$, it is well known that these equations describe the propagation and interaction of gravitational radiation in the opposite $\partial_\pm$ directions, and that the radiation may be encoded in $(\phi,\chi)$. The solution to (\[ea\]) is trivial and can be used to fix the rescaling freedom in $x^\pm$. 
One may give initial data for $(\phi,\chi)$ on $\Sigma_\pm$, corresponding to initial radiation profiles, with (\[ei\]) determining $\gamma$ on $\Sigma_\pm$. Then the main task is to solve (\[ep\])–(\[ec\]) simultaneously for $(\phi,\chi)$, after which the full solution follows from (\[eg\]) by quadrature for $\gamma$. The main equations (\[ep\])–(\[ec\]) can be written as a complex Ernst equation, corresponding physically to a non-linear wave equation, as will be verified below. Effective energy tensor for gravitational radiation =================================================== The next aim is to find an effective energy tensor $\Theta$ for the gravitational radiation, analogous to those found in cylindrical symmetry and the quasi-spherical approximation, and consistent with the Isaacson effective energy tensor in the high-frequency linearized approximation [@MTW]. In all cases, the components of the energy tensor are quadratic in first derivatives of the metric, in this case the $\partial_\pm$ derivatives of $(\phi,\chi)$, and such terms can be seen in the last term in parentheses on the left-hand side of each of (\[ei\]), (\[ep\])–(\[eg\]). The idea is to identify these terms as components of the desired $\Theta$, corresponding to the components of $T$ on the right-hand sides. The result is that one may introduce a complex potential $$\Phi=\phi+i\chi \label{Ph}$$ and define the effective energy tensor as $$\Theta_{ab}=\frac{2\nabla_{(a}\Phi\nabla_{b)}\bar\Phi -g_{ab}g^{cd}\nabla_c\Phi\nabla_d\bar\Phi} {8\pi\cosh^2(\Phi-\bar\Phi)}.\label{Th}$$ where $g$ is the space-time metric and $\nabla$ its covariant derivative operator. It is manifestly a tensor, taking a scalar-field form in terms of $\Phi$, with the same form, including the same denominator, as in the quasi-spherical approximation. Apart from this denominator, it is the energy tensor of a massless complex scalar field $\Phi$. 
Explicitly in terms of $(\phi,\chi)$, $$\Theta_{ab}=\frac {2\nabla_a\phi\nabla_b\phi+2\nabla_a\chi\nabla_b\chi -g_{ab}g^{cd}(\nabla_c\phi\nabla_d\phi+\nabla_c\chi\nabla_d\chi)} {8\pi\cos^22\chi}.$$ If $\chi=0$, it reduces to the energy tensor of a massless scalar field $\phi$, as in cylindrical symmetry, where the corresponding $\phi$ reduces to the Newtonian gravitational potential in the Newtonian limit. Here there are generally two polarizations of the radiation, as is familiar from the linearized approximation. Inspection of the metric (\[met\]) for small $\Phi$ identifies $\phi$ as encoding the “plus” polarization and $\chi$ as encoding the “cross” polarization. These properties justify the numerical factors chosen in the definitions of $(\phi,\chi)$ and partly motivated the chosen symbols. The non-trivial components of $\Theta$ follow explicitly as $$\begin{aligned} &&4\pi\Theta_{\pm\pm}=\sec^22\chi \left((\partial_\pm\phi)^2+(\partial_\pm\chi)^2\right)\label{Th1}\\ &&\Theta_{+-}=0\label{Th0}\\ &&4\pi\bot\Theta=e^{-2\gamma}\sec^22\chi (\partial_+\phi\partial_-\phi+\partial_+\chi\partial_-\chi)\bot g\label{Th2}\end{aligned}$$ where $\bot$ denotes projection onto the planes of symmetry and the transverse metric is given in $(y,z)$ coordinates by $$\bot g=A\pmatrix{e^{2\phi}\sec2\chi&\tan2\chi\cr\tan2\chi&e^{-2\phi}\sec2\chi}. \label{h}$$ It is then straightforward to verify that adding $\Theta$ to $T$ on the right-hand sides of the Einstein equations (\[ei\])–(\[eg\]) cancels the quadratic terms in $(\phi,\chi)$ on the left-hand sides. In abstract terms, the Einstein equation $G=8\pi T$ may be rewritten as $C=8\pi(T+\Theta)$ in terms of a truncated Einstein tensor $C$, whose components have a simpler form than those of the Einstein tensor $G$. The physical interpretation of $\Theta_{\pm\pm}/2$ is the energy density of gravitational radiation propagating in the $\partial_\mp$ direction.
Apart from the non-linear modification due to the $\sec^22\chi$ factor in (\[Th1\]), it is the energy density of a complex scalar field $\Phi$. The numerical factor also corresponds to the energy density of electromagnetic radiation in Gaussian units, with $\phi$ corresponding to the electric potential and $\chi$ vanishing. The vanishing of $\Theta_{+-}$ (\[Th0\]) is familiar from cylindrical symmetry and the quasi-spherical approximation, and indicates that the gravitational radiation is workless. Note that this is generally not so for a similar effective energy tensor found in the context of black holes [@bhd2; @bhd3] and uniformly expanding flows [@gr; @BHMS]. The non-negativity of $\Theta_{\pm\pm}$ indicates that, as an energy tensor, $\Theta$ satisfies the dominant energy condition, meaning physically that gravitational radiation carries positive energy. The other non-zero terms (\[Th2\]) indicate that interacting gravitational radiation generally exerts transverse pressure and shear, proportional to the transverse metric. These terms vanish for radiation propagating in one direction only, where $\Phi$ is a function of $x^+$ (or $x^-$) only. They are commonly known as plane waves, but since this would appear to imply periodicity in some sense, this article uses the more general terminology of radiation. Conservation of energy ====================== To see how $\Theta$ further qualifies as an effective energy tensor, one may proceed by analogy with spherical symmetry, cylindrical symmetry and the quasi-spherical approximation. Here the definitions and equations will be stated first in a manifestly invariant way, then verified in coordinates. 
First introduce the specific area radius $$r=\sqrt{A/4\pi}.\label{r}$$ This is defined in order to compare with spherically symmetric space-times or the quasi-spherical approximation, so that one may easily treat astrophysical gravitational radiation as observed on or near Earth, since distant sources can be treated as points, producing roughly spherical wavefronts which can be treated as planes when observed. The Hodge operator $*$ defines the Hodge dual ${*}\alpha$ of a normal one-form, up to sign, by $$g^{-1}({*}\alpha,\alpha)=0, \quad g^{-1}({*}\alpha,{*}\alpha)=-g^{-1}(\alpha,\alpha).\label{dual}$$ Then a preferred time vector is defined by $$k=g^{-1}({*}dr) \label{k}$$ where the qualification “specific” is omitted here and henceforth. This vector is conserved: $$\nabla\cdot k=0. \label{nk}$$ The corresponding energy-momentum density is $$j=-g^{-1}((T+\Theta)\cdot k). \label{j}$$ Then $j$ is also conserved: $$\nabla\cdot j=0. \label{nj}$$ Here the standard physical interpretation is conservation of energy, and the role of $\Theta$ as an effective energy tensor is clear in that it appears additively with $T$ in $j$. Put another way, both $k$ and $j$ are Noether currents, and the corresponding Noether charges are area volume $$V=\textstyle{\frac43}\pi r^3\label{V}$$ and energy $E$, defining the latter. Specifically: $$Ag(k)={*}dV,\qquad Ag(j)={*}dE. \label{noe}$$ Integrating for $E$ and requiring it to vanish for flat space-time, $$E=-\textstyle{\frac12}rg^{-1}(dr,dr) \label{E}$$ which has a similar form to the Misner-Sharp energy in spherical symmetry and the modified Thorne energy in cylindrical symmetry. In fact, if the planes of symmetry are toroidally compacted by periodic identifications in $(y,z)$ at 0 and 1, so that $A$ is the area, then $E$ coincides with the Hawking energy [@Haw]. Note that $E>0$ for trapped surfaces, $E=0$ for marginal surfaces and $E<0$ for untrapped surfaces. 
In particular, $E$ vanishes for radiation propagating in one direction only. Thus it should not be interpreted as the energy of a wave in any sense. Taking the example of two colliding beams, where the surfaces in the interaction region are trapped if the null energy condition holds, one may interpret $E$ as measuring energy due to cross-focusing of radiation. In particular, it diverges at the caustic formed by such cross-focusing. Introduce the work density $$w=-\hbox{tr}\,T/2 \label{w}$$ and the energy flux $$\psi=(T+\Theta)\cdot g^{-1}(dr)+wdr \label{psi}$$ where the trace is in the normal space. Then conservation of energy (\[nj\]) can be written in the form of a first law: $$dE=A\psi+wdV \label{dE}$$ which has the same form as in spherical symmetry and the quasi-spherical approximation. Here the two terms can be interpreted as energy supply and work respectively, as in the first law of thermodynamics. Note again that $\Theta$ appears additively with $T$ in $\psi$ and (in a null sense) $w$, playing the role of an effective energy tensor. The corresponding definition of surface gravity is $$\kappa={*}d{*}dr/2 \label{kappa}$$ where $d$ is the exterior derivative of the normal space. Then the Einstein equations yield $$\kappa=\frac{E}{r^2}-4\pi rw \label{sg}$$ which again has the same form as that in spherical symmetry and the quasi-spherical approximation. Apart from the matter term, this has the form of Newtonian gravitational acceleration. 
In dual-null coordinates (\[met\]), the corresponding expressions are $$\begin{aligned} &&{*}\alpha=-\alpha_+dx^++\alpha_-dx^-\quad\hbox{where $\alpha=\alpha_+dx^++\alpha_-dx^-$}\\ &&k=e^{-2\gamma}(\partial_+r\partial_--\partial_-r\partial_+)\\ &&j=e^{-4\gamma}\big[\big((T_{--}+\Theta_{--})\partial_+r-T_{+-}\partial_-r\big)\partial_+ -\big((T_{++}+\Theta_{++})\partial_-r-T_{+-}\partial_+r\big)\partial_-\big]\\ &&E=e^{-2\gamma}r\partial_+r\partial_-r\\ &&w=e^{-2\gamma}T_{+-}\\ &&\psi_\pm=-e^{-2\gamma}(T_{\pm\pm}+\Theta_{\pm\pm})\partial_\mp r\\ &&\kappa=-e^{-2\gamma}\partial_+\partial_-r.\end{aligned}$$ Writing $4(4\pi)^{3/2}E=e^{-2\gamma}A^{-1/2}\partial_+A\partial_-A$ and using the Einstein equations (\[ei\])–(\[ea\]), a calculation yields $$\partial_\pm E=Ae^{-2\gamma}\big(\partial_\pm rT_{+-} -\partial_\mp r(T_{\pm\pm}+\Theta_{\pm\pm})\big).$$ Comparison with $$\begin{aligned} Ag(j)&=&Ae^{-2\gamma}\big[\big((T_{++}+\Theta_{++})\partial_-r-T_{+-}\partial_+r\big)dx^+ -\big((T_{--}+\Theta_{--})\partial_+r-T_{+-}\partial_-r\big)dx^-\big] \nonumber\\ &=&[-\partial_+Edx^++\partial_-Edx^-]={*}dE\end{aligned}$$ verifies (\[noe\]). Similarly, the calculation $$A(\psi_\pm+w\partial_\pm r) =Ae^{-2\gamma}\big(-\partial_\mp r(T_{\pm\pm}+\Theta_{\pm\pm})+\partial_\pm rT_{+-}\big) =\partial_\pm E$$ verifies (\[dE\]). The easiest way to verify the conservation equations (\[nk\]), (\[nj\]) is to use (\[noe\]) and exterior calculus: $$\begin{aligned} \nabla\cdot k&=&A^{-1}{*}d{*}(Ag(k))=A^{-1}{*}d{*}{*}dV=0\\ \nabla\cdot j&=&A^{-1}{*}d{*}(Ag(j))=A^{-1}{*}d{*}{*}dE=0\end{aligned}$$ since ${*}{*}=\pm1$ and $dd=0$. Finally, a calculation using the Einstein equation (\[ea\]) verifies (\[sg\]). Gravitational wave equation =========================== As is well known, the propagation equations (\[ep\])–(\[ec\]) for $(\phi,\chi)$ can be written as a single complex Ernst equation, usually given in terms of an Ernst potential $Z=e^{2\Phi}$ or $E=\tanh\Phi$ [@Gri]. 
The corresponding form for $\Phi$ is $$\nabla^2\Phi=2\tanh(\Phi-\bar\Phi)g^{-1}(\nabla\Phi,\nabla\Phi)\label{ernst}$$ where $\bot T=0$ for simplicity. This has the same form as that in the quasi-spherical approximation. The calculation is straightforward: $$\nabla^2\Phi=-e^{-2\gamma}\left(2\partial_+\partial_-\Phi +A^{-1}(\partial_+A\partial_-\Phi+\partial_-A\partial_+\Phi)\right)$$ and $$\begin{aligned} 2\tanh(\Phi-\bar\Phi)g^{-1}(\nabla\Phi,\nabla\Phi) &=&-4e^{-2\gamma}\tanh2i\chi(\partial_+\phi+i\partial_+\chi)(\partial_-\phi+i\partial_-\chi) \nonumber\\ &=&4e^{-2\gamma}\tan2\chi\left((\partial_+\phi\partial_-\chi+\partial_-\phi\partial_+\chi) +i(\partial_+\chi\partial_-\chi-\partial_+\phi\partial_-\phi)\right)\end{aligned}$$ then the result follows by comparing with (\[ep\])–(\[ec\]). Note that (\[ernst\]) is manifestly a wave equation for $\Phi$, equating $\nabla^2\Phi$ to a non-linear term in $\Phi$. This source term is highly non-linear, being quadratic in $\nabla\Phi$ and also involving $\tanh(\Phi-\bar\Phi)$. In the special case of collinear polarization $\chi=0$, the source term vanishes and the equation reduces to the wave equation for $\phi$, $\nabla^2\phi=0$. This can be written as an Euler-Poisson-Darboux equation, for which general solutions are available. The full Ernst equation has been studied by various methods both in plane symmetry and in the original context of stationary axisymmetric space-times; see e.g. the review of Griffiths [@Gri] and references therein. Linearized gravitational radiation ================================== To compare with the usual description of linearized gravitational radiation [@MTW], it is convenient to switch temporarily to Minkowski coordinates $(t,x,y,z)$ defined by $\sqrt2 x^\pm=t\pm x$. 
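Returning briefly to the collinear special case $\chi=0$ before the linearized treatment: the vacuum equation $\nabla^2\phi=0$, i.e. $2\partial_+\partial_-\phi+A^{-1}(\partial_+A\,\partial_-\phi+\partial_-A\,\partial_+\phi)=0$, can be integrated directly on a dual-null grid. The following is a minimal characteristic sketch, not a production scheme: it assumes the illustrative vacuum background $A=A_0+x^++x^-$ (so $\partial_+\partial_-A=0$ and $\partial_\pm A=1$), Gaussian data on one initial null slice, and arbitrary grid parameters:

```python
import numpy as np

h, N, A0 = 0.01, 200, 1.0
xp = h * np.arange(N)                # x^+ grid
xm = h * np.arange(N)                # x^- grid
phi = np.zeros((N, N))
phi[:, 0] = np.exp(-((xp - 0.5) / 0.1) ** 2)   # radiation profile on x^- = 0
phi[0, :] = phi[0, 0]                          # trivial data on x^+ = 0

for i in range(N - 1):
    for j in range(N - 1):
        A = A0 + xp[i] + xm[j] + h             # cell-centred specific area
        dp = phi[i + 1, j] - phi[i, j]         # ~ h * d+phi
        dm = phi[i, j + 1] - phi[i, j]         # ~ h * d-phi
        # discretized 2 d+d-phi = -(d+A d-phi + d-A d+phi)/A with d+A = d-A = 1
        phi[i + 1, j + 1] = (phi[i + 1, j] + phi[i, j + 1] - phi[i, j]
                             - (dp + dm) * h / (2.0 * A))
```

Dropping the $1/A$ friction term recovers the free solution $\phi=f(x^+)+g(x^-)$; with it, the pulse amplitude is slowly damped as $A$ grows, as expected for this Euler-Poisson-Darboux form.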
Expanding about the Minkowski metric $\eta=\hbox{diag}\{-1,1,1,1\}$ by $g=\eta+h$ consists of expanding about $(A,\phi,\chi,\gamma)=(1,0,0,0)$, so one can write $A=1+\alpha$ and use $(\alpha,\phi,\chi,\gamma)$ as perturbative fields, each assumed ${}\ll1$. Linearizing, the metric perturbation $h$ is given by $$-2\gamma(dt^2-dx^2)+(\alpha+2\phi)dy^2+2(\alpha+2\chi)dydz+(\alpha-2\phi)dz^2.$$ Then the trace of $h$ is $2\alpha+4\gamma$ and the trace-reversed metric perturbation $\bar h$ is given by $$\alpha(dt^2-dx^2)+(2\phi-2\gamma)dy^2+2(\alpha+2\chi)dydz+(-2\phi-2\gamma)dz^2.$$ Applying the transverse traceless gauge conditions, $\partial^a\bar h_{ab}=0$ yields constant $\alpha$, $\bar h_{0b}=0$ yields $\alpha=0$ and $\bar h^a_a=0$ yields $\gamma=0$. Then $h=\bar h$ is indeed transverse: in $(y,z)$ coordinates, $$h=\pmatrix{2\phi&2\chi\cr2\chi&-2\phi}.$$ This verifies the appropriateness of the transverse traceless gauge conditions in plane symmetry. Noting that the space-time strain is $h/2$, this also confirms that $\phi$ and $\chi$ encode the “plus” and “cross” polarizations respectively. In the high-frequency approximation, the Isaacson effective energy tensor $\bar\Theta$ for gravitational waves is defined by $$32\pi\bar\Theta_{ab}= \langle\partial_a h_{cd}\partial_bh^{cd}\rangle$$ where the angle brackets denote averaging over several wavelengths [@MTW]. Returning to dual-null coordinates, the explicit expressions are $$\begin{aligned} &&4\pi\bar\Theta_{\pm\pm}=\langle(\partial_\pm\phi)^2+(\partial_\pm\chi)^2\rangle\\ &&4\pi\bar\Theta_{+-}=\langle\partial_+\phi\partial_-\phi+\partial_+\chi\partial_-\chi\rangle\\ &&\bot\bar\Theta=0.\end{aligned}$$ Comparing with (\[Th1\]–\[Th2\]), one sees that the radiative components $\bar\Theta_{\pm\pm}$ agree with $\Theta_{\pm\pm}$, but the other components apparently do not. However, this is due to the averaging, as follows. 
First note that the gravitational wave equation (\[ernst\]) linearizes to the flat-space form $$\partial_+\partial_-\Phi=0$$ with general solution $$\Phi=\Phi_+(x^+)+\Phi_-(x^-)$$ as expected. Considering linear superpositions of Fourier modes in the high-frequency approximation, it suffices to consider solutions of the form $$\Phi_\pm=\phi_\pm\sin\sqrt2\omega_\pm x^\pm +i\chi_\pm\sin\sqrt2\nu_\pm x^\pm$$ for constant amplitudes $(\phi_\pm,\chi_\pm)$ and angular frequencies $(\omega_\pm,\nu_\pm)$. Then $$\begin{aligned} &&\partial_\pm\phi=\sqrt2\phi_\pm\omega_\pm\cos\sqrt2\omega_\pm x^\pm\\ &&\partial_\pm\chi=\sqrt2\chi_\pm\nu_\pm\cos\sqrt2\nu_\pm x^\pm\end{aligned}$$ and $$\begin{aligned} &&4\pi\Theta_{\pm\pm}=2\phi_\pm^2\omega_\pm^2\cos^2\sqrt2\omega_\pm x^\pm +2\chi_\pm^2\nu_\pm^2\cos^2\sqrt2\nu_\pm x^\pm\\ &&4\pi\bot\Theta =2\left(\phi_+\phi_-\omega_+\omega_-\cos\sqrt2\omega_+x^+\cos\sqrt2\omega_-x^- +\chi_+\chi_-\nu_+\nu_-\cos\sqrt2\nu_+x^+\cos\sqrt2\nu_-x^-\right)\delta\end{aligned}$$ where $\delta=\hbox{diag}\{1,1\}$. Since $\langle\cos^2\rangle=\frac12$ but $\langle\cos\rangle=0$, $\langle\bot\Theta\rangle=0$ and similarly $\bar\Theta_{+-}=0$. Then $$\begin{aligned} &&4\pi\langle\Theta_{\pm\pm}\rangle=4\pi\bar\Theta_{\pm\pm} =\phi_\pm^2\omega_\pm^2+\chi_\pm^2\nu_\pm^2\\ &&\langle\Theta_{+-}\rangle=\bar\Theta_{+-}=0\\ &&\langle\bot\Theta\rangle=\bot\bar\Theta=0\end{aligned}$$ or $$\langle\Theta\rangle=\bar\Theta$$ as expected. Note that the energy densities $\bar\Theta_{\pm\pm}/2$ have the expected form of squares of amplitudes times angular frequencies, with the same numerical factor $1/8\pi$ as for electromagnetic radiation in Gaussian units. 
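The averaging step can be checked numerically; the amplitudes and angular frequencies below are arbitrary illustrative values. Sampling a mode uniformly over an integer number of periods reproduces $\langle\cos^2\rangle=\frac12$ and $\langle\cos\rangle=0$, and hence $4\pi\langle\Theta_{\pm\pm}\rangle=\phi_\pm^2\omega_\pm^2+\chi_\pm^2\nu_\pm^2$:

```python
import numpy as np

x = np.linspace(0.0, 100.0 * np.pi, 60000, endpoint=False)  # integer periods of both modes
print(np.mean(np.cos(3.0 * x) ** 2), np.mean(np.cos(3.0 * x)))  # -> 0.5, 0.0

phi_a, chi_a, w, nu = 0.01, 0.005, 3.0, 5.0     # amplitudes and angular frequencies
four_pi_theta = (2 * phi_a**2 * w**2 * np.cos(w * x) ** 2
                 + 2 * chi_a**2 * nu**2 * np.cos(nu * x) ** 2)
print(np.mean(four_pi_theta), phi_a**2 * w**2 + chi_a**2 * nu**2)  # averages agree
```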
On the other hand, for low-frequency waves, transverse pressure is generally present in $\bot\Theta$ even in the linearized approximation, for which $\Theta$ reduces to the energy tensor of a massless complex scalar field in flat space-time: $$8\pi\Theta_{ab}=2\partial_{(a}\Phi\partial_{b)}\bar\Phi -\eta_{ab}\eta^{cd}\partial_c\Phi\partial_d\bar\Phi.$$ The non-zero components (\[Th1\])–(\[Th2\]) reduce to $$\begin{aligned} &&4\pi\Theta_{\pm\pm}=(\partial_\pm\phi)^2+(\partial_\pm\chi)^2\\ &&4\pi\bot\Theta= (\partial_+\phi\partial_-\phi+\partial_+\chi\partial_-\chi)\delta\end{aligned}$$ and in particular the transverse shear vanishes, but transverse pressure generally remains. Recall that this is an effect for interacting radiation, vanishing for radiation propagating in one direction only. However, if two beams with similar amplitude and frequency are passing through one another, the transverse pressure is generally of the same order as the energy densities $\Theta_{\pm\pm}/2$. Although this has been derived here only for plane-symmetric radiation propagating in opposite directions, one may expect it to generalize to gravitational radiation from any two sources in different directions. Research supported by the National Natural Science Foundation of China under grants 10375081, 10473007 and 10771140, by Shanghai Municipal Education Commission under grant 06DZ111, and by Shanghai Normal University under grant PL609. [99]{} C W Misner, K S Thorne & J A Wheeler, Gravitation (Freeman 1973). P Szekeres, Nature [**228**]{}, 1183. P Szekeres, [J. Math. Phys.]{} [**13**]{}, 286 (1972). J B Griffiths, Colliding Waves in General Relativity (Oxford University Press 1991). S A Hayward, [Class. Quantum Grav.]{} [**7**]{}, 1117 (1990). F J Ernst, [Phys. Rev.]{} [**167**]{}, 1175 (1968). C J S Clarke & S A Hayward, [Class. Quantum Grav.]{} [**6**]{}, 615 (1989). S A Hayward, [Class. Quantum Grav.]{} [**6**]{}, 1021 (1989). S A Hayward, [Class. 
Quantum Grav.]{} [**6**]{}, L179 (1989). S A Hayward, [Phys. Rev.]{} [**D53**]{}, 1938 (1996). S A Hayward, [Class. Quantum Grav.]{} [**15**]{}, 3147 (1998). S A Hayward, [Class. Quantum Grav.]{} [**17**]{}, 1749 (2000). Corrigendum ibid 4159. S A Hayward, [Phys. Rev.]{} [**D61**]{}, 101503 (2000). H Shinkai & S A Hayward, [Phys. Rev.]{} [**D64**]{}, 044002 (2001). S A Hayward, [Class. Quantum Grav.]{} [**18**]{}, 5561 (2001). S A Hayward, [Phys. Lett.]{} [**A294**]{}, 179 (2002). S A Hayward, [Phys. Rev. Lett.]{} [**93**]{}, 251101 (2004). S A Hayward, [Phys. Rev.]{} [**D70**]{}, 104027 (2004). S A Hayward, [Class. Quantum Grav.]{} [**23**]{}, L15 (2006). H Bray, S A Hayward, M Mars & W Simon, [Comm. Math. Phys.]{} [**272**]{}, 119 (2007). S W Hawking, [J. Math. Phys.]{} [**9**]{}, 598 (1968).
--- abstract: 'Gravitational microlensing, when finite source size effects are relevant, provides a unique tool for the study of stellar atmospheres through the enhancement of a characteristic polarization signal. Here, we consider a set of highly magnified events and show that, for different types of source stars (hot, late-type main sequence, and cool giants), the polarization strength may be $\simeq 0.04$ percent for late-type stars and up to a few percent for cool giants.' address: - | Department of Mathematics and Physics [*Ennio De Giorgi*]{}, University of Salento, and INFN, via per Arnesano, 73100 Lecce, Italy\ $^*$E-mail: nucita@le.infn.it - | Dipartimento di Fisica “E. R. Caianiello”, Università di Salerno, Via Ponte don Melillo, 84084 Fisciano (SA), Italy\ Istituto Internazionale per gli Alti Studi Scientifici (IIASS), Vietri Sul Mare (SA), Italy - 'Institute for Theoretical Physics, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland\' - | Institute of Theoretical and Experimental Physics, B. Cheremushkinskaya 25, 117259 Moscow, Russia\ Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia author: - 'A.A. NUCITA$^*$, G. INGROSSO, F. De PAOLIS, F. STRAFELLA' - 'S. CALCHI-NOVATI' - 'Ph. JETZER' - 'A. F. ZAKHAROV' title: POLARIZATION PROFILES FOR SELECTED MICROLENSING EVENTS TOWARDS THE GALACTIC BULGE --- Introduction {#aba:sec1} ============ Gravitational microlensing, initially developed to search for MACHOs in [the]{} Galactic halo and near the Galactic disc [@Pacz86][@Macho93][@Eros93][@Ogle93], has nowadays become a powerful method to test stellar atmosphere models and to study the limb-darkening profile of stars, i.e. the variation of the intensity from the disc center to the limb. Furthermore, this technique allowed the discovery of exoplanetary systems by observing deviations in the light-curves expected for single-lens events [@Dominik10][@Gaudi10].
Microlenses can also spatially resolve a source star thanks to caustic structures created by a lens system [@SEF]. Caustics are formed by a set of closed curves, along which the point source magnification is formally infinite, with a steep increase in magnification in their vicinity. The aim of the present work is to consider the polarization variability of the source star light for real events, taking into account different polarization mechanisms according to the source star type. Indeed, variations in the polarization curves are analogous to finite source effects in microlensing, where color effects may appear due to limb darkening and the color distribution across the disc. However, the light received from stars is usually unpolarized, [since the flux from each stellar disc element is the same.]{} A net polarization of the stellar light may be introduced by some suitable asymmetry in the stellar disc, such as eclipses, tidal distortions, stellar spots, fast rotation and magnetic fields, or by propagation through the interstellar medium. In the case of microlensing events, polarization in the stellar light may be induced by the proper motion of the lens star across the source star disc. In this case, different parts are magnified by different amounts, giving rise not only to a time-dependent gravitational magnification of the source star light but also to a time-dependent polarization. Good candidates for polarization observations would be microlensing events involving young, hot giant star sources. Indeed, these objects have the electron-scattering atmospheres needed for producing limb polarization through Thomson scattering [@Simmons95b]. However, the bulge of the Galaxy does not contain a large number of hot giant stars. Nevertheless, polarization may also be induced by the scattering of star light off atoms, molecules and dust grains in the absorptive atmospheres of evolved, cool stars as shown by Refs. and .
These objects, which do not have levels of polarization as high as those predicted by the Chandrasekhar model, may display an intrinsic polarization of up to several [ percent]{}, due to the presence of stellar winds that give rise to extended absorptive envelopes. This is the case for many cool giant stars, in particular for the red giants. Such evolved stars constitute a significant fraction of the lensed sources towards the Galactic bulge, the LMC [@Alcock97] and the M31 galaxy [@Sebastiano10], making them valuable candidates for observing variable polarization during microlensing events. In Ref.   (but see also the references therein for more details) we calculated the polarization profile as a function of time for a selected sample of both single events (11 highly magnified single-lens cases with identified source star type) and 6 exoplanetary microlensing events observed towards the Galactic bulge. In predicting the polarization light curves of each event we considered the nature of the source star, i.e. late-type main-sequence and/or cool giant stars. Indeed, different polarization mechanisms take place in the stellar atmospheres, depending on the source star type: photon (Thomson) scattering on free electrons, coherent (Rayleigh) scattering off atoms and molecules, and photon scattering on dust grains, [ for hot, late type and cool giant stars]{} (with extended atmospheres), respectively. We note that for high-magnification (up to 12 mag in the I band) events, the expected polarization signal can reach values as high as 0.04 percent at the peak in the case of [late type source stars]{} and up to a few [percent]{} in the case of cool giant source stars (red giants) with extended envelopes.
For these events the primary lens crosses the source star disc ([*transit*]{} events) and relatively large values of $P$ are thereby produced, due to large finite source effects and the large magnification gradient throughout the source star disc, with the time duration of the polarization peak varying from 1 h to 1 day (depending on the source star radius and the lens impact parameter). Similar values of polarization may also be obtained in exoplanetary events when the source star crosses the primary or the planetary caustics. While in the former case (as for single-lens events) the peak of the polarization signal always occurs at symmetrical points with respect to the instant $t_0$ of maximum magnification, in the latter case the polarization signal may occur at any (and generally unpredictable) time during the event. As a last remark, we note that the available instrumentation may already detect a polarization signal down to a degree of a few percent: the FORS2 camera on the ESO VLT telescope is capable of measuring the polarization signal of a 12 mag source star with a precision of 0.1 [percent]{} in a 10 min integration time, and of a 14 mag star in 1 h. Hence, polarization measurements in highly magnified microlensing events may offer the unique opportunity to probe stellar atmospheres of Galactic bulge stars and, given sufficient observational precision, may in principle provide independent constraints on the lensing parameters also for exoplanetary events. [99]{} Alcock, C., Allen, W. H., Allsman, R. A., et al., [*ApJ*]{}, [**491**]{}, 436, (1997). Calchi Novati, S., [*Gen. Rel. Grav.*]{}, [**42**]{}, 2101, (2010). Paczyński, B., [*ApJ*]{}, [**304**]{}, 1, (1986). Alcock, C., Akerloff, C. W., Allsman, R. A., et al., [*Nature*]{}, [**365**]{}, 621, (1993). Aubourg, E., Bareyre, P., Brehin, S., et al., [*Nature*]{}, [**365**]{}, 623, (1993). Dominik, M., [*Gen. Rel. Grav.*]{}, [**42**]{}, 2075, (2010). Gaudi, B.
S., Refereed chapter in EXOPLANETS, edited by S. Seager, Tucson, AZ: University of Arizona Press, p. 79, (2010). Schneider, P., Ehlers, J. & Falco, E. E., Gravitational Lensing, Springer, Berlin, (1992). Witt, H. J. & Mao, S., [*ApJ*]{}, [**430**]{}, 505, (1994). Simmons, J. F. L., Willis, J. P. & Newsam, A. M., [*A&A*]{}, [**293**]{}, L46, (1995). Simmons, J. F. L., Bjorkman, J. E., Ignace, R. & Coleman, I. J., [*MNRAS*]{}, [**336**]{}, 501, (2002). Ignace, R., Bjorkman, E. & Bryce, H. M., [*MNRAS*]{}, [**366**]{}, 92, (2006). Ingrosso, G., Calchi Novati, S., De Paolis, F., Jetzer, Ph., et al., [*MNRAS*]{}, [**426**]{}, 1496, (2012).
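The mechanism described in this paper, in which a lens transiting a limb-polarized source disc magnifies different parts of the disc by different amounts and so produces a net polarization, can be sketched numerically. This is a toy model: the limb-polarization law `p(mu)` and the value of `p_max` are illustrative placeholders, not quantities taken from the paper.

```python
import numpy as np

def point_lens_magnification(u):
    """Standard point-lens magnification A(u), u in Einstein radii."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def net_polarization(rho, u0, p_max=0.01, n_r=200, n_phi=400):
    """Net polarization degree of a uniform-brightness source disc of
    angular radius rho (Einstein radii), with tangential limb polarization
    p(mu) = p_max*(1 - mu), lensed by a point lens at distance u0 from
    the disc centre.  p(mu) and p_max are assumed toy values."""
    r = (np.arange(n_r) + 0.5) / n_r * rho              # radial grid
    phi = (np.arange(n_phi) + 0.5) / n_phi * 2 * np.pi  # azimuthal grid
    R, PHI = np.meshgrid(r, phi)
    mu = np.sqrt(1.0 - (R / rho) ** 2)                  # emergent-angle cosine
    # distance of each disc element from the lens position (u0, 0)
    u = np.sqrt((R * np.cos(PHI) - u0) ** 2 + (R * np.sin(PHI)) ** 2)
    A = point_lens_magnification(np.clip(u, 1e-6, None))
    I = R * A                                           # area element * magnification
    p = p_max * (1.0 - mu)                              # toy limb-polarization law
    # tangentially oriented polarization: Stokes Q, U about the disc centre
    Q = -(p * I * np.cos(2 * PHI)).sum()
    U = -(p * I * np.sin(2 * PHI)).sum()
    return np.hypot(Q, U) / I.sum()

P = net_polarization(rho=0.1, u0=0.05)  # lens transiting the disc: P > 0
```

For a centred lens (`u0 = 0`) the azimuthal symmetry makes the Stokes sums cancel and the net polarization vanishes, which is the paper's point that the signal is produced by the asymmetry of the lens position on the disc.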
[  We present a compelling case for a *systematic* and *comprehensive* study of the resolved and unresolved stellar populations, ISM, and immediate environments of galaxies throughout the local volume, defined here as $D\!<\!20$Mpc. This volume is our cosmic backyard and the smallest volume that encompasses environments as different as the Virgo, Ursa Major, Fornax and (perhaps) Eridanus clusters of galaxies, a large number and variety of galaxy groups (e.g., Sculptor, M81, M83, CVnI and II clouds, M51, M101, M74, NGC5866, M104, and M77 groups), and several cosmic void regions. In each galaxy, through a pan-chromatic ($\sim$160–1100nm) set of broad-band and diagnostic narrow-band filters, ISM structures and individual luminous stars to $\gtrsim$1mag below the TRGB should be resolved on scales of $<$5 pc (at $D\lesssim 20$Mpc, $\lambda$$\sim$800nm, for $\mu_I\gtrsim 24$magarcsec$^{-2}$ and $m_{I}^{\hbox{\tiny TRGB}}\lesssim 27.5$mag). Resolved and unresolved stellar populations would be analyzed through color-magnitude and color-color diagram fitting and population synthesis modeling of multi-band colors and would yield physical properties such as spatially resolved star formation histories. The ISM within and around each galaxy would be analyzed using key narrow-band filters that distinguish photospheric from shock heating and provide information on the metallicity of the gas.
Such a study would finally allow unraveling the global and spatially resolved star formation histories of galaxies, their assembly, satellite systems, and the dependences thereof on local and global environment within a truly representative cosmic volume. The proposed study is not feasible with current instrumentation, but argues for a wide-field ($\gtrsim$250 arcmin$^2$), high-resolution ($\lesssim$0.02–0.065 arcsec \[300–1000nm\]), ultraviolet–near-infrared imaging facility on a 4m-class space-based observatory.\ [*Keywords*: galaxies: nearby — galaxies: stellar populations — galaxies: ISM — galaxies: satellites — galaxies: origins/assembly — ISM: star formation — ISM: feedback — near-field cosmology — stellar archeology — star formation]{} ]{} [**A**]{}t the present epoch, most of the baryonic matter that condensed into galaxies is locked into stars. The stellar populations of galaxies not only record the history of baryonic matter, through chemical abundances and stellar spatial distributions, but also its rate of evolution via the star formation process. The visible forms of galaxies are shaped by a series of complex processes which convert dissipative interstellar matter into nearly collisionless stars. Despite the success of current theoretical models in following the growth of dark matter structures, significant problems remain in understanding how the baryonic components of galaxies develop. These include, for example, the low numbers of visible dwarfs and satellite galaxies relative to the predicted swarms of low-mass dark matter halos around giant systems, and the comparatively high angular momenta and old ages of galactic disks.
Whether these difficulties represent fundamental issues with the hierarchical dark matter model, a lack of understanding of star formation processes and their feedback on galactic scales, or a lack of representative data spanning the full range in cosmic environment is as yet unclear. To advance our understanding of the star formation and chemical enrichment histories of the stellar systems within the $D<20$Mpc local volume, one would need access to the vacuum UV through near-IR wavelength regime. The UV is uniquely sensitive to hot sources such as massive young stars, low-mass accreting protostars, and certain types of old, highly evolved stars. Deep UV observations shortward of 365nm of A and F-type stars, for example, are particularly important for tracking metal enrichment, star formation histories, and galaxy disk evolution. In older ($>$5Gyr) stellar populations, helium-burning stars in advanced evolutionary phases have surface temperatures $>$10,000K, making them UV-bright. These hot objects are not only important in their own right, but also provide key information on mass loss during the red giant branch (RGB) evolution which precedes the hot phases. Stellar mass loss is a central problem in stellar astrophysics and is related to a number of other important processes, such as dust production, X-ray emission, and accretion flows. Many key diagnostics of interstellar gas and dust (ISM) are also found only at wavelengths shortward of 400nm, including the 217.5nm peak in the dust extinction law, and a number of important plasma emission lines (, [$\lambda\lambda$]{}372.7nm, [$\lambda$]{}279.9nm and [$\mbox{\textrm{Ly}}\alpha$]{}).
The 150–250nm region is also one of the *darkest* parts of the natural sky background above the Earth’s atmosphere, permitting the detection of extremely faint sources. *Wide-field, high-resolution vacuum-UV imaging would open up a new window on this last under-explored corner of normal stellar evolution*. Stellar populations contain the histories of evolution of the baryonic components of galaxies. Accessing this information is complicated by the presence of multiple stellar population components projected along each sightline, the effects of interstellar dust on observed spectral energy distributions, and the relatively low brightnesses of the outer regions of galaxies relative to the sky. Multi-band UV through near-IR ($\sim$200–1100nm) measurements from space are required to derive extinction-corrected stellar SEDs with sufficient precision to distinguish differences in metallicity and age. Unraveling the star formation histories of nearby galaxies (and spatial variations therein) *in detail* requires one to resolve individual stars to $\gtrsim$1mag below the Tip of the RGB (TRGB). Although [*HST*]{} would in theory be capable of accessing the TRGB out to $\sim$12Mpc, in practice *very* few studies have been able to push beyond 7Mpc because of [*HST*]{}’s limited aperture (exposure times comparable to those in the Deep and Ultradeep Fields would be required). A particularly novel opportunity enabled by a larger aperture and similar or higher resolution would be to resolve red K-giant stars within star streams known to exist within the Virgo Cluster in the form of structure in the diffuse intra-cluster light. This would allow unraveling for the first time the 3D structure, kinematics and galaxy assembly history within the Virgo Cluster.
*Space-based wide-field high-resolution optical–near-IR imaging would open up a new window of discovery space that remains inaccessible to or exceedingly inefficient with next-generation giant ground-based telescopes*. Previous space-based UV–near-IR imaging facilities, however, emphasized either low spatial resolution and wide fields (e.g., [*GALEX*]{}; strictly UV, minimal filter set \[$n$=2\]) or high resolution and small fields (e.g., [*HST*]{}). For a study of both the resolved stellar populations *and* its dependence on the global structure and evolution of nearby galaxies, one would need to combine:\ - a large field of view (FoV) that is well-matched to the angular sizes of nearby galaxies and their satellite systems; - sensitivity to detect at $\gtrsim$5$\sigma$ individual RGB stars to $\gtrsim$1mag below the Tip of the RGB (TRGB; $M_{I}^{\hbox{\tiny TRGB}}$$\simeq$$-$4.0mag, $m_{I}^{\hbox{\tiny TRGB}}\lesssim$27.5mag); - high angular resolution that allows resolving individual luminous RGB stars at linear scales of $<$5pc out to $\sim$20Mpc (at $\lambda$$\sim$800nm and $\mu_I\gtrsim 24$magarcsec$^{-2}$); and - a sufficiently rich complement of UV–near-IR broad-, medium- and narrow-band filters to provide physically meaningful diagnostics on both stars and ISM.
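The numbers in these requirements are mutually consistent, as a quick check with the standard distance-modulus and small-angle relations shows:

```python
import math

# Distance reach implied by the TRGB sensitivity requirement:
# detecting stars at m_I = 27.5 mag, with M_I(TRGB) = -4.0 mag.
mu = 27.5 - (-4.0)              # distance modulus, mag
d_pc = 10 ** (mu / 5.0 + 1.0)   # mu = 5 log10(d / 10 pc)
d_mpc = d_pc / 1e6              # ~20 Mpc, matching the quoted volume

# Angular size subtended by a 5 pc structure at that distance:
theta_arcsec = (5.0 / d_pc) * (180.0 / math.pi) * 3600.0
# ~0.05 arcsec, within the quoted 0.02-0.065 arcsec resolution band
```

So the $m_I \simeq 27.5$ mag depth and the few-hundredths-of-an-arcsecond resolution are exactly what it takes to reach TRGB stars and 5 pc structures at the Virgo distance.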
Color-magnitude and color-color diagrams obtained by [*HST*]{} and large ground-based telescopes of the resolved stellar populations within nearby galaxies enabled enormous leaps forward in our understanding of the stellar mass distributions and star-formation histories within our own and nearby ($D\lesssim 6$Mpc) galaxies. Pushing that capability out to 20Mpc, with large fields of view sampled with sufficient sensitivity and at linear scales $<$5pc, would provide access to galaxies within a large and varied number of galaxy groups as well as the nearest clusters (Ursa Major and Virgo). For higher-surface-brightness regions and for serendipitously observed more distant galaxies, where individual stars cannot be resolved, the constituent stellar populations can still be deduced from their integrated light, since the integrated UV energy distributions of stellar populations evolve strongly over timescales up to 3Gyr. The high-sensitivity region is shortward of [$\sim$]{}400nm, where the confluence of hydrogen absorption lines in hotter stars and the Balmer Jump and metallic absorption features in cooler ones begin to strongly affect the gross spectral structure. The UV will provide the parameters needed to measure star formation rates, break age–metallicity degeneracies in dissecting composite stellar populations, and recover star-formation histories. The UV allows direct detection of the massive stars responsible for most of the ionization, photo-dissociation, kinetic-energy input, and element synthesis in galaxies. These processes are responsible for much of the astrophysics of the universe. By contrast, most other methods of studying massive star populations yield only indirect measures, since they rely on re-processing of the UV photons by the surrounding medium (regions or dust clouds).
Furthermore, since the production of Lyman-continuum photons by young populations rapidly declines after [$\sim$]{}5Myr, these other methods probe star formation only over a short period, which constitutes a tiny fraction (0.05%) of the lifetime of a galaxy. By comparison, the short-wavelength continuum below 400nm remains a sensitive indicator of star-formation histories for ages up to 100$\times$ greater. [**Key scientific themes that have arisen from recent advances**]{} [***Near Field Cosmology: the oldest stellar populations.***]{}  It is useful to ask *where* the oldest stars are located. We know that in the Milky Way they reside in the spheroidal halo, in the LMC and SMC they have the largest radial scale of any stellar population, and they usually are the least centrally concentrated stellar component of dwarf spheroidals. However, even in nearby galaxies these results only apply in a mean sense. With the growing realization of the importance of interactions in the lives of galaxies, as demonstrated by the discovery of tidal debris streams and plumes in, e.g., the Milky Way, M31, M81, and NGC4013, the old star distribution merits reexamination. Are older stars asymmetrically distributed in the outer regions of galaxies, as expected if they were contributed by dissolving satellites? Data for inclined galaxies also will provide information on globular cluster systems, bulge vs. disk stellar populations, disk vertical structures, dust lane forms, warps, and a variety of other features on intermediate galactic scales. Each of these connects in useful ways to the evolutionary history and thus provides an empirical base for application of the expertise of the astronomical community. [***Star Formation and its Products.***]{}  The existing combination of ground-based and [*HST*]{} imaging provides an excellent base from which to design investigations of the nature and extent of star forming sites. 
Investigations of connections between drivers, if any, for star formation — spiral arms, interactions, etc., as well as basic galactic properties — are essential for understanding how feedback operates. A survey of the local $D$$<$20Mpc volume provides the range of galaxy types, luminosities, cosmic environment, and the sensitivity and statistics to support a complete study of the association of compact clusters and regions of star formation. From programs like [*SINGS*]{} and other recent and ongoing ground-based surveys, global star formation rates and time scales are anticipated to be known. We then can compare the small scale characteristics of star formation, an intrinsically local process, with the overall galactic environment. Are these statistically connected and, if so, how? By combining deep mid-UV and narrow-band [$\mbox{\textrm{H}}\alpha$]{} observations, it becomes possible to also address the escape fraction of ionizing radiation in a variety of galaxies. This question is particularly important in low metallicity dwarf galaxies which may have traits in common with the types of objects responsible for finishing reionization of the universe at redshifts $z>$6. [***Are Galactic Disks Growing?***]{}  As already demonstrated by [*GALEX*]{}, young stars have high contrast against the sky in the mid-UV. This spectral range therefore opens the way for mapping star formation in low-density environments, including the outer disks of galaxies. A next-generation wide-field UV–near-IR space observatory must offer major advantages in sensitivity and resolution over the pioneering results from [*GALEX*]{}. Hence, we would be able to determine ages and photometric stellar masses for small star forming complexes of the type that appear to populate the outer disks of galaxies, ranging from small irregulars to giant spirals. From these, star formation rates per unit area and, thus, disk growth rates can be estimated. 
[***Galactic Centers.***]{}  Centers of galaxies are dumping grounds. Baryonic material that ends up in the central zone of a galaxy has experienced substantial dissipation and loss of angular momentum. Yet it is not uncommon to find high-density stellar and gas systems coexisting within 1kpc of the centers of galaxies. Centered in this zone are the nuclei themselves, many of which harbor massive black holes. We would be able to systematically chart the stellar properties of nuclear environments. Where and in what ways are stars formed (clusters, scaled OB associations, spiral arms, rings, clumps)? How does star formation relate to the properties of nuclei on small scales and on the other side to the surrounding main disk? How are bars, both large and nuclear, related to the structure and activity levels in nuclei? [***A Survey of Nearby Galaxies.***]{}  We propose to learn how galaxies work, through studies of their stars, ISM, and immediate environments, and to build the definitive UV–near-IR photometric imaging database of galaxies within our local slice of the Universe. This would result in a 21st century digital ‘Hubble Atlas’ of nearby galaxies and their surroundings that will provide a standard for testing our understanding of how galaxies attained their present forms and how their stellar components will likely evolve into the future. The resolved and unresolved stellar populations would be analyzed through color-magnitude and color-color diagram fitting, providing accurate and uniform TRGB distances, and through population synthesis modeling of multi-filter broad- and medium-band photometry. The ISM in each galaxy would be observed through key narrow-band filters ([$\mbox{\textrm{H}}\alpha$]{}, [$\mbox{\textrm{H}}\beta$]{}, , , ; possibly , , or ) that allow identifying the ionized gas, estimate its metallicity and variations therein, and for each region determine whether ionization is dominated by photospheric or shock heating. 
Direct measurements of the extinction toward that ionized gas through the Balmer decrement ([$\mbox{\textrm{H}}\alpha$]{}/[$\mbox{\textrm{H}}\beta$]{}) will allow measuring the currently ongoing, high-mass star formation in each star formation region. [**Key advances in observation needed**]{} *Resolution* — $\lesssim$0.02–0.065 arcsec \[300–1000nm\] resolution is required in order to resolve luminous RGB stars out to the distance of the Virgo Cluster, and to resolve the relevant scales for star formation feedback processes within the ISM (shocks, outflows, bubbles, shells) within galaxies. *Wavelength agility* — access to both vacuum-UV and near-IR; no wavelength regime alone will suffice for a comprehensive understanding of the star-formation and assembly histories of galaxies. *Wide-field focal plane arrays* — these are presently not at sufficiently high TRL; investment is needed to improve yields, provide cheaper devices, and enable high-throughput assembly and testing to achieve economies of scale. Such an investment would not just benefit the science proposed here. *Coatings* — an investment in improving the relatively poor broad-band performance of optical coatings of telescope mirrors in the UV, with typical reflectances below 85% (Al+MgF$_2$) and 65% (Al+LiF), directly results in a large increase in throughput for a given telescope aperture, or more affordable missions for a given sensitivity requirement. *Dichroics* — most photons collected by telescopes are rejected by bandpass filters. Dichroic(s) potentially double (or even triple) the observing efficiency of astronomical observatories (e.g., [*Spitzer*]{}/IRAC) and allow tuning downstream optics and detectors for more optimal performance, avoiding compromises inherent in forcing performance over more than an octave in frequency.
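The Balmer-decrement extinction measurement mentioned above follows a standard recipe; a minimal sketch, in which the intrinsic Case B ratio and the Cardelli-like extinction-curve coefficients are assumed textbook values, not numbers from this white paper:

```python
import math

def ebv_from_balmer(f_ha, f_hb, r_intrinsic=2.86, k_hb=3.61, k_ha=2.53):
    """Colour excess E(B-V) from an observed Halpha/Hbeta flux ratio.

    r_intrinsic ~ 2.86 is the Case B recombination value; k(Hbeta), k(Halpha)
    are Cardelli-like extinction-curve coefficients (assumed values).
    E(B-V) = 2.5 / (k_hb - k_ha) * log10[(Halpha/Hbeta)_obs / r_intrinsic]
    """
    return 2.5 / (k_hb - k_ha) * math.log10((f_ha / f_hb) / r_intrinsic)

# An observed decrement of 4.0 instead of 2.86 implies E(B-V) ~ 0.34 mag,
# from which the Halpha attenuation follows as A(Halpha) = k_ha * E(B-V).
ebv = ebv_from_balmer(f_ha=4.0, f_hb=1.0)
```

With the extinction in hand, the dereddened [$\mbox{\textrm{H}}\alpha$]{} luminosity gives the ongoing high-mass star formation rate for each region.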
[**Enabling science investigations**]{} The proposed science in the present white paper does not stand alone, but must build on a strong understanding of the physics of the star formation process in various environments. Gaining such understanding requires observational detail that can only be attained within our own Galaxy. From that basis one has to step out to galaxies spanning a range of metallicity and star formation activity within our Local Group to provide observational tracers with calibrations that are directly and solidly rooted in physics. We refer the reader to the Science White Paper by P. Scowen “Understanding Global Galactic Star Formation”. Also, investment in human capital and in ground-based supporting and path-finding programs, including operational support, should not be ignored, as the overall science return of this and many ‘high-end’ programs critically depends on it. [**Four central questions to be addressed**]{} - What is the spatially resolved star formation history of a comprehensive and representative subset of the galaxies encompassed within the local $D$$<$20Mpc volume? To what extent and how does it depend on morphological type class, mass, and cosmic environment? Does this fossil record confirm in detail the broad picture inferred from the evolution of the cosmic star formation density? What does it tell us about the formation and survival rates of solar systems like our own in different galaxies? - What is the mass assembly history of galaxies within the varied cosmic environments encompassed within the local $D$$<$20Mpc volume, the smallest representative slice of the Universe? This overarching question will likely also include the questions: do galaxies grow from the inside out, or is this too simple a picture? Why do galaxy disks have such high angular momenta, and why are they so old?
- Can we unravel the true 3-D structure and internal kinematics of galaxy groups and the Ursa Major and Virgo Clusters via reliable TRGB distances and fossil star streams/intra-cluster light? Can we meaningfully constrain how baryons found their way from the IGM into the galaxies, galaxy groups and clusters of galaxies within the local Universe? - Why does there at least seem to be a dearth of satellite galaxies around the primary galaxies within our Local Group compared to predictions from $\Lambda$CDM numerical simulations? Is the Local Group result confirmed throughout the local volume, and if so, what fundamental factor is missing in the simulations? [**Area of unusual discovery potential for the next decade**]{} The combination of a large collecting area, very wide field of view, high angular resolution, wavelength agility and/or a multiplexing advantage would allow orders-of-magnitude more efficient UV–optical observations of star formation and many other processes and, moreover, open up a new domain in discovery space near and far. Injection into L2 (or Earth drift-away) orbits provides dynamical and thermal stability, and an increase (roughly a doubling) in efficiency over LEO orbits and, hence, a lower cost per hour of observation (all other variables being equal). Large focal plane array (dozens to hundreds of individual CCD or CMOS detectors) and dichroic camera (simultaneous observation in two or more channels of the same field of view) technology is better matched to the collimated beams provided by optical telescope assemblies and less wasteful in terms of collected photons, maximizing science output and especially benefitting survey science with a lasting legacy beyond the nominal duration of a mission. Survey science allows the discovery of very rare objects among billions, the positions and properties of which may not be knowable a priori. [[**References**]{}\ Corbin, M., Kim, H., Jansen, R., Windhorst, R., & Cid Fernandes, R.
2008, ApJ 675, 194\ Scowen, P., Jansen, R., Beasley, M., 2009, *“Understanding Global Galactic Star Formation”*, Science White Paper submitted to the *Astro2010* Decadal Survey, Feb 15. ]{}
--- abstract: 'When investigating the dynamical properties of complex multiple-component physical and physiological systems, it is often the case that the measurable system’s output does not directly represent the quantity we want to probe in order to understand the underlying mechanisms. Instead, the output signal is often a linear or nonlinear function of the quantity of interest. Here, we investigate how various linear and nonlinear transformations affect the correlation and scaling properties of a signal, using detrended fluctuation analysis (DFA), which has been shown to accurately quantify power-law correlations in nonstationary signals. Specifically, we study the effect of three types of transforms: (i) linear ($y_i=ax_i+b$); (ii) nonlinear polynomial ($y_i=ax_i^k$); and (iii) nonlinear logarithmic \[$y_i=\mbox{log}(x_i+\Delta)$\] filters. We compare the correlation and scaling properties of signals before and after the transform. We find that linear filters do not change the correlation properties, while the effect of nonlinear polynomial and logarithmic filters strongly depends on (a) the strength of correlations in the original signal, (b) the power $k$ of the polynomial filter, and (c) the offset $\Delta$ in the logarithmic filter. We further apply the DFA method to investigate the “apparent” scaling of three analytic functions: (i) exponential \[$\mbox{exp}(\pm x+a)$\], (ii) logarithmic \[$\mbox{log}(x+a)$\], and (iii) power law \[$(x+a)^{\lambda}$\], which are often encountered as trends in physical and biological processes. While these three functions have different characteristics, we find that there is a broad range of values of the parameter $a$, common to all three functions, for which the slope of the DFA curves is identical. We further note that the DFA results obtained for a class of other analytic functions can be reduced to these three typical cases.
We systematically test the performance of the DFA method when estimating long-range power-law correlations in the output signals for different parameter values in the three types of filters and the three analytic functions we consider.' author: - Zhi Chen - Kun Hu - Pedro Carpena - 'Pedro Bernaola-Galvan' - 'H. Eugene Stanley' - 'Plamen Ch. Ivanov' title: Effect of nonlinear filters on detrended fluctuation analysis --- Introduction {#secintr} ============ Many physical and biological systems under multi-component control mechanisms exhibit scale-invariant features characterized by long-range power-law correlations in their output. These scaling features are often difficult to quantify due to the presence of erratic fluctuations, heterogeneity, and nonstationarity embedded in the output signals. This problem becomes even more difficult in certain cases: (i) when we cannot probe directly the quantity of interest in experimental settings, i.e., the measurable output signal is a linear or nonlinear function of the quantity of interest; (ii) when measuring devices impose a linear or nonlinear filter on the system’s output; (iii) when we are interested not in the output signal but in a specific component of it, which is obtained through a nonlinear transform (e.g., the magnitude or the sign of the fluctuations in the signal); (iv) when comparing the dynamics of different systems by applying nonlinear transforms to their output signals; or (v) when pre-processing the output signal by means of linear or nonlinear filters before the actual analysis. Thus, to understand the intrinsic dynamics of a system, in such cases it is important to correctly analyze and interpret the dynamical patterns in the system’s output. Conventional two-point correlation, power spectrum, and Hurst analysis methods are not suited for nonstationary signals, the statistical properties of which change with time [@non; @hurst1; @mandelbrot1]. 
To address this problem, the detrended fluctuation analysis (DFA) method was developed to accurately quantify long-range correlations embedded in a nonstationary time series [@CKDFA1; @taqqu95]. This method provides a single quantitative parameter — the scaling exponent $\alpha$ — to quantify the scale-invariant properties of a signal. One advantage of the DFA method is that it allows the detection of long-range power-law correlations in noisy signals with embedded polynomial trends that can mask the true correlations in the fluctuations of a signal. Recent comparative studies have demonstrated that the DFA method outperforms conventional techniques in accurately quantifying correlation properties over a wide range of scales. The DFA method has been widely applied to DNA, cardiac dynamics, human electroencephalographic (EEG) fluctuations [@Robinson03], human motor activity [@huphysica04] and gait [@hos; @Hausdorff01; @Ashkenazy02; @Scafetta03; @West03], meteorology [@Ivanovameteo1999_12; @Ivanova03], climate temperature fluctuations, river flow and discharge [@Montanari2000; @Matsoukas2000], electric signals [@Siwy02; @Varotsos1; @Varotsos2], stellar x-ray binary systems [@Moret], neural receptors in biological systems [@bahareuph2001], music [@Jennings2004], and economics. In many of these applications the main problem is to differentiate scaling features in a system’s output which are inherent to the underlying dynamics from scaling features which are an artifact of nonstationarities or of different types of transforms and filters. In two previous studies we examined how different types of nonstationarities, such as superposed sinusoidal and power-law trends, random spikes, cut-out segments, and patches with different local behavior, affect the long-range correlation properties of signals [@kunpre2001; @zhipre2002]. Here we use the DFA method to investigate how the scaling properties of noisy correlated signals change under linear and nonlinear transforms.
Further, (i) we test under what types of transforms (filters) it is possible to derive information about the scaling properties of the signal of interest before the transformation, provided we know the correlation behavior of the transformed (filtered) signal, and (ii) we probe the “apparent” scaling of three common transformation functions after applying the DFA method — exponential, logarithmic, and power-law. We also evaluate the limitations of the DFA method under linear and nonlinear transforms. Specifically, we consider the following: \(1) [*Correlation properties of signals after transforms of the type*]{}: $\{x_i\} \Longrightarrow \{f(x_i)\}$, where $\{x_i\}$ is a stationary signal with [*a priori*]{} known correlation properties. \(i) [*Linear transform*]{}: $\{x_i\} \Longrightarrow \{ax_i+b\}$. Transforms of this type are often encountered in physical systems. For example: (a) from the fluctuations in the acceleration of a particle (measurable quantity), one can derive information about how the force (quantity of interest) acting on this particle changes in time without directly measuring the force: $\{a(t_i)\} \Longrightarrow \{F(t_i)=ma(t_i)\}$; (b) in pnp transistors the base (input) current $I_B$ (quantity of interest), which is difficult to measure directly, is amplified hundreds of times, so that small fluctuations in $I_B$ lead to significant (and measurable) changes in the collector (output) signal $I_C$ (measurable quantity): $\{I_C(t_i)\} \Longrightarrow \{I_B(t_i)=I_C(t_i)/\beta\}$; and (c) changes in the volume $V$ (quantity of interest) of an ideal gas can be determined from fluctuations in the temperature (measurable quantity) provided the pressure is kept constant: $\{T(t_i)\} \Longrightarrow \{V(t_i)=\frac{nR}{P}T(t_i)\}$. \(ii) [*Nonlinear polynomial transform*]{}: $\{x_i\} \Longrightarrow \{ax_i^k\}$, where $k\ne 1$ and takes on positive integer values.
For example: (a) from fluctuations in the current $I$ (measurable quantity) one can extract information about the behavior of the power lost as heat $P$ (quantity of interest) in a resistor: $\{I(t_i)\} \Longrightarrow \{P(t_i)=RI^2(t_i)\}$; (b) from fluctuations in the temperature $T$ of a radiating body, Stefan’s law gives the power emitted per unit area: $\{T_i\} \Longrightarrow \{\epsilon_i=\sigma T_i^4\}$. Further, linear and nonlinear polynomial filters are also used to renormalize data series representing an identical quantity measured in different systems before performing correlation analysis, e.g., (i) normalizing heart rate recordings from different subjects to zero mean and unit standard deviation (linear filters), or (ii) extracting the absolute value (nonlinear filter) of the heartbeat fluctuations in datasets obtained from different subjects [@Yosef2001]. In this study we consider two examples of nonlinear polynomial filters — quadratic and cubic filters — which represent the classes of polynomial filters with even and odd powers, and we investigate how these filters change the correlation properties of signals. Since polynomial filters with even power wipe out the sign information in a signal, we expect quadratic and cubic filters to have different effects. A recent study by Y. Ashkenazy [*et al.*]{} [@Yosef2001] shows that the magnitude of a signal (without sign information) exhibits different correlation properties from those of the original signal. Thus it is necessary to investigate how quadratic and cubic filters change the scaling properties of correlated signals.
For example, to compare the dynamics of price fluctuations $X(i)$ of different company stocks, which may have a different average price, one often first obtains the relative price returns $R(i)=\mbox{log}[X(i+1)/X(i)]$ before performing correlation analysis [@Liu99; @economicsbook_gene]. It is assumed that upon taking the returns one does not alter the information contained in the original signal. To test this assumption we compare the correlation properties of the signal before and after a logarithmic filter. \(2) [*Correlation properties of transformation functions*]{}. When analyzing the correlation properties of a signal after a given transform, it may be valuable to know what the DFA result is for the transformation function itself. In addition, it is often the case that noisy signals are superposed on trends which can be approximated by a certain function. Previous studies have demonstrated that the DFA result of a correlated signal with a superposed trend is a superposition of the DFA result for the signal and the DFA result for the analytic function representing the trend [@kunpre2001; @zhipre2002]. Here we investigate separately the results of the DFA for three functions which are very often encountered in physical and biological processes: (i) [*exponential*]{}, (ii) [*logarithmic*]{} and (iii) [*power law*]{}. The layout of this paper is as follows: In Sec. \[secpuren\], we describe how we generate signals with desired long-range power-law correlations and introduce the DFA method used to quantify correlations in nonstationary signals. In Sec. \[secfilters\], we compare the correlation and scaling properties of signals before and after linear and nonlinear polynomial transforms. In Sec. \[seclogsignals\], we consider the effect of a nonlinear logarithmic filter on the long-range correlation properties of stationary signals. In Sec.
\[sectrends\], we investigate the performance of the DFA method on three analytic functions — exponential, logarithmic, and power-law — which are often encountered as trends in physical and biological time series. We systematically examine the crossovers in the scaling behavior of correlated signals resulting from the transforms and trends discussed in Secs. \[secfilters\]-\[sectrends\], the conditions under which these crossovers exist, and their typical characteristics. We summarize our findings in Sec. \[Conclusion\]. Methods {#secpuren} ======= We analyze two types of signals: \(1) stochastic stationary signals {$x_{i}$} ($i=1,2,3,...,N_{\mbox{\scriptsize max}}$) with different types of correlations (uncorrelated, correlated, and anti-correlated) and surrogate signals obtained from {$x_{i}$} after linear and nonlinear transforms. We use an algorithm based on the Fourier transform to generate stationary signals {$x_{i}$} with long-range power-law correlations as described in [@CKfourier; @MFFM; @zhipre2002]. The generated signals {$x_{i}$} have zero mean and unit standard deviation. \(2) Exponential, logarithmic, and power-law functions which often represent transformations or trends in physical and biological data. We use the detrended fluctuation analysis (DFA) method to quantify the correlation and scaling properties of these signals. The DFA method is described in detail elsewhere [@kunpre2001; @zhipre2002]. Briefly, it involves the following steps: (i) we integrate the signal after subtracting the global average; (ii) we then divide the time series into boxes of length $n$ and perform, in each box, a least-squares polynomial fit of order $\ell$ to the integrated signal to remove the local trend in each box; (iii) in each box we calculate the root-mean-square fluctuation function $F(n)$ quantifying the fluctuations of the integrated signal around the local trend; (iv) we repeat this procedure for different box sizes (time scales) $n$.
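Steps (i)-(iv) can be sketched in a few lines of NumPy. The implementation below is our own minimal illustration (non-overlapping boxes, a log-spaced set of box sizes), not the code used in the paper:

```python
import numpy as np

def dfa(x, ell=1, scales=None):
    """Minimal DFA-ell, following steps (i)-(iv) in the text.

    (i) integrate the mean-subtracted signal; (ii) split it into
    non-overlapping boxes of length n and detrend each box with a
    least-squares polynomial of order ell; (iii) compute the
    root-mean-square fluctuation F(n); (iv) repeat over box sizes n.
    """
    y = np.cumsum(x - np.mean(x))                 # step (i)
    N = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(N // 4), 20).astype(int))
    F = []
    for n in scales:                              # step (iv)
        t = np.arange(n)
        f2 = 0.0
        for k in range(N // n):                   # steps (ii)-(iii)
            seg = y[k * n:(k + 1) * n]
            coeff = np.polyfit(t, seg, ell)       # local polynomial detrending
            f2 += np.mean((seg - np.polyval(coeff, t)) ** 2)
        F.append(np.sqrt(f2 / (N // n)))
    return np.asarray(scales), np.asarray(F)

# the scaling exponent alpha is the log-log slope of F(n) vs n
rng = np.random.default_rng(0)
n, F = dfa(rng.standard_normal(2 ** 14), ell=1)
alpha = np.polyfit(np.log(n), np.log(F), 1)[0]
```

For white noise the fitted slope `alpha` should come out close to 0.5, consistent with the uncorrelated case.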
A power-law relation between the average root-mean-square fluctuation function $F(n)$ and the box size $n$ indicates the presence of scaling: $F(n) \sim n^{\alpha}$. The scale $n$ for which this scaling holds represents the length of the correlation. The fluctuations in a signal can be characterized by the scaling exponent $\alpha$, a self-similarity parameter which quantifies the strength of the long-range power-law correlations in the signal. If $\alpha=0.5$, there is no correlation and the signal is uncorrelated (white noise); if $\alpha < 0.5$, the signal is anti-correlated; if $\alpha >0.5$, the signal is correlated. Since we use a polynomial fit of order $\ell$, we denote the algorithm as DFA-$\ell$. Further, we note that for stationary signals {$x_{i}$} with long-range power-law correlations, the value of the scaling exponent $\alpha$ is related to the exponent $\beta$ in the power spectrum $S(f)\sim f^{-\beta}$ of signals {$x_{i}$} by $\beta=2\alpha-1$. Since the power spectrum is the Fourier transform of the autocorrelation function, one can find the following relationship between the autocorrelation exponent $\gamma$ and the power spectrum exponent $\beta$: $\gamma=1-\beta=2-2\alpha$, where $\gamma$ is defined by the autocorrelation function $C(\tau)\sim\tau^{-\gamma}$ and should satisfy $0<\gamma<1$ [@janphysica2001]. The upper threshold for the value of the scaling exponent $\alpha$ is related to the order $\ell$ of the DFA method: $\alpha \leq \ell+1$ for DFA-$\ell$ [@kunpre2001]. In addition, integrating the signal before applying the DFA method will increase the value of the scaling exponent $\alpha$ by 1, so the measured exponent must satisfy $\alpha+1 \leq \ell+1$ for DFA-$\ell$. Therefore, after integrating correlated signals with the scaling exponent $\alpha>\ell$, one needs to apply the DFA method with an order of polynomial fit higher than $\ell$.
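The Fourier-filtering generator mentioned in the Methods section can be sketched directly from the relation $\beta=2\alpha-1$. The version below is our own illustration, not the exact algorithm of [@CKfourier; @MFFM]:

```python
import numpy as np

def fourier_surrogate(alpha, n, seed=0):
    """Generate a stationary signal with DFA exponent ~ alpha.

    Sketch of the Fourier-filtering method: white Gaussian noise is
    filtered in Fourier space so its power spectrum follows
    S(f) ~ f^(-beta) with beta = 2*alpha - 1, then normalized to
    zero mean and unit standard deviation as in the text.
    """
    beta = 2.0 * alpha - 1.0
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid dividing by the zero frequency
    spec *= f ** (-beta / 2.0)        # amplitude filter ~ sqrt(S(f))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

x = fourier_surrogate(0.8, 4096)      # positively correlated signal
```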
We also note that for anti-correlated signals, the scaling exponent obtained from the DFA-$\ell$ method overestimates the true correlations at small scales [@kunpre2001]. To avoid this problem, one needs first to integrate the original anti-correlated signal and then to apply the DFA-$\ell$ method [@kunpre2001; @zhipre2002]. The correct scaling exponent $\alpha$ can then be obtained from $F(n)/n$ \[instead of $F(n)$\] [@Yosef2001; @kunpre2001; @zhipre2002]. For that reason we first integrate and then apply the DFA method when considering anti-correlated signals. Effects of linear and nonlinear polynomial transforms {#secfilters} ===================================================== In this section, we study the effect of linear and nonlinear polynomial transforms (filters) on the scaling properties of stationary signals {$x_{i}$} with long-range power-law correlations. Specifically, we consider two types of nonlinear transforms — quadratic and cubic — as examples of even and odd polynomial filters. We generate the signals {$x_{i}$} with linear fractal properties and with [*a priori*]{} built-in correlations characterized by a DFA scaling exponent $\alpha$ [@CKDFA1; @kunpre2001; @zhipre2002]. We compare how the exponent $\alpha$ changes after the transform. We first test to see if these transforms affect the properties of uncorrelated signals (white noise). We find that the linear, quadratic, and cubic filters do not change the scaling properties of white noise — the curves of the detrended fluctuation function $F(n)$ for the filtered signals {$f(x_i)$} collapse on the scaling curve of the original signal {$x_{i}$}, and the scaling exponent $\alpha=0.5$ remains unchanged \[Fig. \[filters\](a)\]. For signals with correlations we find that the linear and nonlinear polynomial filters have different effects.
In particular, for both correlated ($\alpha>0.5$) and anti-correlated ($\alpha<0.5$) signals {$x_{i}$} we find that the scaling properties remain unchanged after the linear filter. In contrast, the quadratic and cubic filters change the scaling behavior of both correlated and anti-correlated signals \[Fig. \[filters\](b-d)\]. Specifically, for [*anti-correlated*]{} signals, we find that: (i) after the quadratic filter the scaling behavior is dramatically changed to uncorrelated (random) behavior with $\alpha=0.5$ at all scales; (ii) after the cubic filter the scaling (correlation) function $F(n)$ of anti-correlated signals is also changed and exhibits a crossover from anti-correlated behavior at small scales to uncorrelated behavior at larger scales \[Fig. \[filters\](b)\]. We note that the quadratic filter removes the sign information in a signal, thus completely eliminating the anti-correlations in the signal. In contrast, the effect of the cubic filter is not as strong as that of the quadratic filter, since a cubic filter preserves the sign information and the anti-correlations at small scales. For [*correlated*]{} signals we find that after both quadratic and cubic filters, the scaling behavior is unchanged at small and intermediate scales. At large scales we observe a crossover to weaker correlations, which becomes less pronounced as the strength of the correlations (the value of $\alpha$) in the signal {$x_{i}$} increases \[Fig. \[filters\](c-d)\]. For signals with very strong correlations ($\alpha>1$), we find that the scaling behavior remains almost unchanged after nonlinear polynomial filters. We also find that the quadratic filter leads to a more pronounced crossover at large scales compared to the cubic filter for all positively correlated signals.
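The different action of even and odd filters on the sign information can be illustrated with a simple lag-1 autocorrelation check. This is a deliberate simplification of the DFA analysis above, and the differenced white noise below is merely a convenient stand-in for a strongly anti-correlated signal:

```python
import numpy as np

def lag1_corr(x):
    """Sample autocorrelation at lag 1 of a normalized copy of x."""
    x = (x - x.mean()) / x.std()
    return float(np.mean(x[:-1] * x[1:]))

rng = np.random.default_rng(3)
x = np.diff(rng.standard_normal(200001))  # anti-correlated: lag-1 corr ~ -0.5

r_lin = lag1_corr(3.0 * x + 2.0)  # linear filter: anti-correlation preserved
r_sq = lag1_corr(x ** 2)          # quadratic (even) filter: sign removed
r_cu = lag1_corr(x ** 3)          # cubic (odd) filter: sign kept
```

Here the quadratic filter flips the lag-1 correlation from about $-0.5$ to a positive value, while the cubic filter keeps it negative (though weakened), mirroring the different behavior of $F(n)$ reported above.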
Logarithmic filter {#seclogsignals} ================== In addition to nonlinear polynomial transforms, logarithmic transforms are often used in preprocessing procedures when there is a need to renormalize output signals obtained from different systems before comparing their correlation properties. In this section, we investigate the effect of logarithmic filters on the scaling properties of stationary signals with long-range power-law correlations. We first generate stationary correlated signals {$x_{i}$} with zero mean and unit standard deviation, and with [*a priori*]{} known and controlled correlation properties quantified by the DFA scaling exponent $\alpha$. To ensure that all values in the signal are positive, before the logarithmic transform, we shift $\{x_{i}\} \Longrightarrow \{x_i+\Delta\}$, where $\Delta=-x_{min}+\epsilon$, $x_{min}$ is the minimal value in the series {$x_{i}$} and $\epsilon$ is a positive constant. This linear transform does not alter the correlation properties of {$x_{i}$}, as demonstrated in Sec. \[secfilters\], Fig. \[filters\]. Next we integrate the signal after the logarithmic transform $\{\mbox{log}_{10}(x_{i}-x_{min}+\epsilon)\}$ and we perform a DFA-2 analysis. For uncorrelated (white noise) signals after the logarithmic filter, we find no change in the scaling properties and the correlation exponent remains $\alpha=0.5$ in the entire range of scales \[Fig. \[logsg1\](b)\]. However, we find that the scaling properties of signals with a certain degree of correlation change significantly. Specifically, for anti-correlated signals ($\alpha<0.5$) we observe a crossover to uncorrelated (white noise) behavior at large scales. This crossover becomes more pronounced (and shifted to smaller scales) when increasing the strength of anti-correlations (decreasing $\alpha$) \[Fig. \[logsg1\](b)\]. This crossover behavior is caused by negative spikes in the signal following the logarithmic transform \[Fig. \[logsg1\](a)\].
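The shift-and-logarithm preprocessing just described can be written compactly; a minimal sketch (the function name and default offset are ours):

```python
import numpy as np

def log_filter(x, eps=1.0):
    """Apply the transform {x_i} -> {log10(x_i - x_min + eps)}.

    The shift Delta = -x_min + eps makes every argument of the logarithm
    positive; eps is the offset parameter whose role is studied in the text.
    """
    if eps <= 0:
        raise ValueError("eps must be positive")
    return np.log10(x - x.min() + eps)

y = log_filter(np.array([-2.0, 0.0, 1.5]), eps=1.0)  # minimum maps to log10(eps)
```

Because the transform is monotonic, the ordering of the samples is preserved; what changes, as discussed in this section, is the relative size of fluctuations near the minimum, which is controlled by `eps`.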
A similar effect was previously reported for stationary correlated signals with superposed random spikes [@zhipre2002]. For correlated signals ($\alpha>0.5$), we find a threshold value for the correlation exponent $\alpha_{th}\approx 1.3$, below which the scaling properties of the signal remain unchanged after the logarithmic filter. Above $\alpha_{th}$ there is a reduction in the strength of the positive correlations, i.e., the value of the estimated exponent after the logarithmic filter is much lower than the correlation exponent $\alpha$ in the original signal \[Fig. \[logsg1\](d)\]. Since the logarithmic filter is a nonlinear transform which diverges for values of the signal {$x_i-x_{min}+\epsilon$} close to zero, we next test how the scaling properties of the signal depend on the value of the offset parameter $\epsilon$. We consider anti-correlated and correlated signals with fixed values of $\alpha$ and varied $\epsilon$. For strongly anti-correlated signals we find that even for large values of $\epsilon$, there is a crossover to uncorrelated behavior in the scaling curve $F(n)$ at large scales (note that $\epsilon$ is the minimal value of the signal {$x_i-x_{min}+\epsilon$}). This crossover shifts to smaller scales with decreasing $\epsilon$ \[Fig. \[logsg2\](a)\]. Further, we find that for decreasing $\epsilon$, the scaling curves $F(n)$ converge to a single curve, indicating random uncorrelated behavior in the range of large and intermediate scales. For anti-correlated signals with $\alpha=0.1$ we find that this convergence is reached for $\epsilon<0.1$ \[Fig. \[logsg2\](a)\]. For signals with strong positive correlations ($\alpha>\alpha_{th}$), we also observe a change in the scaling behavior which becomes more pronounced when $\epsilon$ decreases.
However, in contrast to the anti-correlated signals, the deviation from the expected accurate scaling starts at intermediate scales and extends to smaller scales with decreasing $\epsilon$ \[Fig. \[logsg2\](b)\]. For signals with very strong correlations, e.g., $\alpha=2$, the deviation from the accurate scaling is observed only for $\epsilon<0.1$, while for $\epsilon>0.1$, there is no effect on the scaling \[Fig. \[logsg2\](b)\]. This is in contrast to the situation observed for signals with strong anti-correlations ($\alpha=0.1$), where the logarithmic filter alters the scaling behavior even for much larger values, $\epsilon>10$ \[Fig. \[logsg2\](a)\]. Finally, we study the relation between the scaling exponent $\alpha$ of the original “input” signal and the estimated exponent $\alpha_{out}$ of the “output” signal after the logarithmic filter. We find that for correlated signals with a scaling exponent within the range $\alpha \in [0.4,1.3)$, there is no change in the scaling properties after the logarithmic transform. However, for signals with correlation exponents $\alpha<0.4$ or $\alpha>1.3$, we find that the logarithmic transform can dramatically change the scaling behavior, and this effect also strongly depends on the value of the offset parameter $\epsilon$ \[Fig. \[logsg3\]\]. Therefore, the logarithmic filter is not recommended for anti-correlated signals and signals with very strong positive correlations — applying this filter will mask the true correlations in the original signals. Results of the DFA for transformation functions {#sectrends} =============================================== In this section we investigate the scaling properties of three functions: [*exponential*]{}, [*logarithmic*]{}, and [*power-law*]{}. These functions are often used in signal processing as transforms of various stochastic correlated signals and also appear as trends superposed on noisy signals derived from physical and biological systems.
In previous work [@kunpre2001; @zhipre2002] we have demonstrated that the scaling behavior of a correlated signal with a superposed trend is a superposition of the scaling behavior of the correlated signal and the “apparent” scaling behavior obtained from the DFA method for the analytic function representing the trend. Therefore, understanding the results of the DFA for certain analytic functions becomes a necessary step in quantifying the scaling behavior of a system’s output where correlated fluctuations are superposed with different trends. \(i) [*We first consider the exponential function in the form*]{}: $y=\mbox{exp}(cx+a)$, where $0<x\leq 1$, $x=i/N_{max}$, $i=1,..., N_{max}$, $N_{max}=2^{17}$, the parameter $c=\pm 1$, and the offset parameter $a$ is a positive constant. We show the result of the DFA method in Fig. \[trends3\]. We find that the slope of the detrended fluctuation function $F(n)$ vs. the scale $n$ obtained from the DFA method does not depend on the values of the parameters $c$ and $a$ (there is only a vertical shift in $F(n)$ for different values of $a$ and $c$) \[Fig. \[trends3\](a)\]. Instead, we find that the DFA scaling exponent $\alpha$ depends only on the order $\ell$ of the polynomial fit in the DFA method — $\alpha=\ell+1$ — suggesting that the results of the DFA method do not depend on the details of the exponential function \[Fig. \[trends3\](b)\]. An analytic derivation for the fluctuation function $F(n)$ and the value of the scaling exponent $\alpha$ obtained from DFA-1 is presented in Appendix A. \(ii) [*We next consider the performance of the DFA method on a logarithmic function of the general form*]{}: $y=\mbox{log}_{10}(x+a)$, where $0<x\leq 1$, $x=i/N_{max}$, $i=1,..., N_{max}$, $N_{max}=2^{17}$ and the offset parameter $a$ is a positive constant. Specifically, we investigate the dependence of the DFA scaling exponent $\alpha$ on the value of the offset parameter $a$.
We find that for very small values of $a$, the DFA scaling exponent is $\alpha=1.5$. With increasing $a$, we observe a crossover in $F(n)$ at intermediate scales $n$ — from $\alpha=1.5$ at large scales to $\alpha=3$ at small scales for DFA-2 \[Fig. \[trends2\](a)\]. For larger values of $a$, we observe a scaling behavior in $F(n)$ characterized by a single exponent $\alpha=3$ in the entire range of scales $n$ \[Fig. \[trends2\](a)\]. In Fig. \[trends2\](b) we present the dependence of the DFA scaling exponent $\alpha$ \[obtained in the fitting range $n \in (30,3000)$\] on the offset parameter $a$ for different DFA order $\ell$. We find that for $a<10^{-5}$ the exponent $\alpha$ does not depend on the order $\ell$ of the DFA method and takes on a single value $\alpha=1.5$. In contrast, for large values of $a>10^{-2}$, the exponent $\alpha$ depends only on the order $\ell$ of the DFA method and takes on values $\alpha=\ell+1$. This behavior is identical with the behavior obtained for the exponential function in Fig. \[trends3\](b). For intermediate values of $a$, we observe a crossover in the scaling behavior of the fluctuation function $F(n)$ from $\alpha=1.5$ to $\alpha=\ell+1$. \(iii) [*Finally, we consider the general power-law function*]{}: $y=(x+a)^{\lambda}$, where $0<x\leq 1$, $x=i/N_{max}$, $i=1,..., N_{max}$, $N_{max}=2^{17}$, the power $\lambda$ takes on real values and the offset parameter $a$ is a positive constant. As in the case of the logarithmic function, we find again that the DFA scaling exponent $\alpha$ depends on the value of the offset parameter $a$ \[Fig. \[trends1\](a)\]. 
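As an aside, the $\alpha=\ell+1$ behavior reported above for the exponential function is easy to reproduce numerically. The sketch below uses our own compact DFA-$\ell$ estimator and a shorter series ($N=2^{13}$ rather than the $N_{max}=2^{17}$ used in the text), so the fitted exponents are only approximate:

```python
import numpy as np

def dfa_exponent(x, ell):
    """Log-log slope of the DFA-ell fluctuation function F(n) vs n."""
    y = np.cumsum(x - np.mean(x))           # integrated profile
    N = len(y)
    scales = np.unique(np.logspace(np.log10(10), np.log10(N // 8), 12).astype(int))
    F = []
    for n in scales:
        t = np.arange(n)
        res = []
        for k in range(N // n):             # non-overlapping boxes
            seg = y[k * n:(k + 1) * n]
            c = np.polyfit(t, seg, ell)     # local polynomial detrending
            res.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

N = 2 ** 13
x = np.arange(1, N + 1) / N
trend = np.exp(x + 1.0)        # exponential function with offset a = 1
a1 = dfa_exponent(trend, 1)    # expected near ell + 1 = 2
a2 = dfa_exponent(trend, 2)    # expected near ell + 1 = 3
```

Because the integrated exponential is smooth, the residual of the order-$\ell$ fit in a box of length $n$ is dominated by the $(\ell+1)$-th derivative term and grows as $n^{\ell+1}$, which is the origin of the $\alpha=\ell+1$ slope.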
For certain fixed values of $\lambda$ and with increasing $a$, we observe a gradual transition in the fluctuation function $F(n)$: from a scaling behavior spanning a broad range of scales $n$ characterized by a small value of the exponent $\alpha$, to a crossover at intermediate scales $n$ for larger values of $a$, and finally to a scaling spanning all scales $n$ with exponent $\alpha=3$ for large values of $a$ for DFA-2. In a previous study [@kunpre2001] we have found a specific relationship between the DFA exponent $\alpha$ and the value of the power $\lambda$ for the case of a power-law function with offset parameter $a=0$: $\alpha=\ell+1$ for $\lambda> \ell-0.5$; $\alpha\simeq \lambda+1.5$ for $-1.5<\lambda< \ell-0.5$; $\alpha=0$ for $\lambda<-1.5$, where $\ell$ is the order of polynomial fit in the DFA-$\ell$ method. Our current analysis shows that this behavior is even more complicated when $a>0$ \[Fig. \[trends1\](b)\]. Specifically, we find that for values of $a<10^{-5}$ the scaling exponent $\alpha$ \[obtained in the fitting range $n \in (30,3000)$\] depends only on the value of the power $\lambda$: $\alpha\simeq \lambda+1.5$. In contrast, for large values of the offset parameter $a>10^{-2}$, we find that the exponent $\alpha$ depends only on the order $\ell$ of the DFA method, and takes on values $\alpha=\ell+1$, which is similar to the results obtained for the general exponential and logarithmic functions in this range of $a$ \[Fig. \[trends3\](b) and Fig. \[trends2\](b)\]. For intermediate values of $a$ and for $-1.5 <\lambda< \ell-0.5$, we observe a crossover in the scaling behavior of the fluctuation function $F(n)$ from $\alpha\simeq \lambda+1.5$ to $\alpha=\ell+1$.
Further, we find that for $\lambda> \ell-0.5$, the DFA-$\ell$ scaling exponent remains constant, $\alpha=\ell+1$, and does not depend on the values of the offset parameter $a$ — we note that for $\lambda=0.41$ (close to $\lambda=0.5=\ell-0.5$ for DFA-1) the dependence of $\alpha$ on $a$ is close to a horizontal line \[Fig. \[trends1\](b)\]. [**Analytic arguments**]{} Our results show that for large values of the offset parameter $a$, the detrended fluctuation function $F(n)$ for all three analytic functions — exponential, logarithmic, and power-law — exhibits an identical slope, where the DFA scaling exponent $\alpha$ does not depend on the particular functional form but only on the order $\ell$ of the DFA method: $\alpha=\ell+1$ \[Fig. \[trends3\](b), \[trends2\](b) and \[trends1\](b)\]. The reason for this common behavior is that (i) for large values of $a$, in each DFA box of a given length $n$, all three functions can be expanded in a convergent Taylor series, allowing for a perfect fit by a finite-order polynomial function, and (ii) due to this convergence, the same polynomial function can be used when shrinking the box length $n$. In contrast, for very small values of the offset parameter $a$, the DFA results for all three functions are distinctly different and do not depend on the order $\ell$ of the DFA method. Below we give some general analytic arguments for the dependence of the DFA exponent $\alpha$ on the offset parameter $a$ presented in Figs. \[trends3\], \[trends2\] and \[trends1\]. \(i) [*General exponential function*]{}: $y=\mbox{exp}(x+a), 0<x\leq 1$. First, we substitute the variable $x$ by $z=x+a$: $y=e^z, z \in (a,1+a]$. Next, we consider a DFA box starting at the coordinate $z^{\prime}=s$ and ending at $z''=s+t$, where $t$ is proportional to the number of points $n$ in the box — $t=(1+a-a)n/N_{max}=n/N_{max}$.
For any value of $z \in (s,s+t)$ we can expand the function in a Taylor series: $$\begin{aligned} e^z=\mbox{exp}(s+z_0)|_{0<z_0<t} =e^s\left[1+z_0+\frac{z_0^2}{2!}+...\right]. \label{eqnq1}\end{aligned}$$ Since this expansion converges, a finite polynomial function can accurately approximate the exponential function in each DFA box. We note that the DFA-$\ell$ method applied to the above polynomial functions gives the scaling exponent $\alpha=\ell+1$ (see [@kunpre2001]). Thus, for any exponential function we find that the DFA scaling does not depend on the value of the offset parameter $a$ and depends only on the order $\ell$ of the polynomial fit in the DFA-$\ell$ procedure \[Fig. \[trends3\](b)\]. \(ii) [*General logarithmic function*]{}: $y=\mbox{log}_{10}(x+a), 0<x\leq 1$. First, we substitute the variable $x$ by $z=x+a$: $y=\mbox{log}_{10}(z), z \in (a,1+a]$. Next, we consider a DFA box starting at the coordinate $z^{\prime}=s$ and ending at $z''=s+t$, where $t$ is proportional to the number of points $n$ in the box — $t=n/N_{max}$. For any value of $z \in (s,s+t)$ the Taylor expansion is: $$\begin{aligned} \mbox{log}_{10}(z)&=&\mbox{log}_{10}(s+z_0)|_{0<z_0<t}\nonumber \\ &\sim& \mbox{ln}(1+z_0/s)\nonumber \\ &=&\frac{z_0}{s}-\frac{1}{2}\left (\frac{z_0}{s}\right )^2+...+\frac{(-1)^{m-1}}{m}\left (\frac{z_0}{s}\right )^m+.... \label{eqnq2}\end{aligned}$$ This series converges only when $z_0/s<1$, i.e., $z_0<s$. Since $z_0 \in (0,t)$, the condition for convergence in any DFA box $(s,s+t)$ partitioning the function is $t<s$. From $t=n/N_{max}$ and $s \in [a,1+a-t]$, we find that if $a>n/N_{max}$, the series converges in all DFA boxes, and thus each box can be approximated by a polynomial function, leading to the scaling exponent $\alpha=\ell+1$ — depending only on the order $\ell$ of the DFA-$\ell$ method \[Fig. \[trends2\]\]. When $t>s$, for certain values of $z_0 \in (0,t)$, the series in Eq. (\[eqnq2\]) diverges.
Since $s \in [a,1+a-t]$, for $s=a<t=n/N_{max}$, we find that the logarithmic function is divergent in the first DFA box $(a,a+t)$, leading to a deviation in the DFA scaling for small values of $a$ \[Fig. \[trends2\]\]. \(iii) [*General power-law function*]{}: $y=(x+a)^{\lambda}, x\in(0,1]$. First, we substitute the variable $x$ by $z=x+a$: $y=z^{\lambda}, z \in (a,1+a]$. Next, we consider a DFA box starting at the coordinate $z^{\prime}=s$ and ending at $z''=s+t$, where $t$ is proportional to the number of points $n$ in the box — $t=n/N_{max}$. For any value of $z \in (s,s+t)$ the Taylor expansion is: $$\begin{aligned} z^{\lambda}&=&(s+z_0)^\lambda|_{0<z_0<t}\nonumber \\ &\sim& \left(1+\frac{z_0}{s}\right)^\lambda \nonumber \\ &=&1+\lambda\frac{z_0}{s}+\frac{\lambda(\lambda-1)}{2!}\left (\frac{z_0}{s}\right)^2+.... \label{eqnq3}\end{aligned}$$ As in the case of the logarithmic function, this series converges only when $z_0/s<1$. Following the same arguments as for the logarithmic function, we find that when $a>n/N_{max}$, the series converges in any DFA box, and thus the power-law function can be approximated by a polynomial function, leading to the scaling exponent $\alpha=\ell+1$ \[Fig. \[trends1\]\], which is identical to the case of the exponential and logarithmic functions. In contrast, for $a<n/N_{max}$, the power-law function is divergent in the first DFA box $(a,a+t)$, as in the case of the logarithmic function, leading to a deviation in the scaling of $F(n)$ for small values of $a$ \[Fig. \[trends1\]\]. While in the case of the logarithmic function this divergence leads to a fixed scaling exponent $\alpha=1.5$, for power-law functions the value of the scaling exponent $\alpha$ also depends on the power $\lambda$ \[Fig. \[trends1\]\]. We note that the above arguments can be used to estimate the results of the DFA method for other functions.
For all functions which can be expanded in a convergent Taylor series of polynomial form in each DFA box partitioning the function, the DFA method leads to identical scaling results with the exponent $\alpha=\ell+1$, which is a notable inherent limitation of the method. When there is divergent behavior in some or all of the DFA boxes partitioning a function, the DFA scaling exhibits crossover behavior to different values of the scaling exponent $\alpha$, which depend on the functional form and the specific parameters of the function. Conclusions {#Conclusion} =========== In summary, our study shows that linear transforms do not change the scaling properties of a signal. However, the correlation properties of a signal change after applying a polynomial filter. Moreover, such change depends on the type of correlations (positive or anti-correlations) in the signal, as well as on the power (odd or even) of the polynomial filter. For the logarithmic filter we find that the scaling behavior of the transformed signal remains unchanged only when the original signal exhibits a certain type of correlations (characterized by a scaling exponent within a given range). Comparing the “apparent” scaling behavior of the exponential, logarithmic, and power-law functions we find that, within a certain range of parameter values, the DFA fluctuation function $F(n)$ exhibits an identical slope, and that the DFA results of a class of other analytic functions can be reduced to these three cases. We attribute this behavior to specific limitations of the DFA method. Therefore, careful tests are necessary to accurately estimate the correlation properties of signals after nonlinear transforms. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by NIH Grant HL071972 and NIH/National Center for Research Resources (Grant No. P41RR13622) and by the Spanish Ministerio de Ciencia y Tecnologia (grants BFM2002-00183 and BIO2002-04014-CO3-02).
DFA-1 in exponential functions ============================== We consider an exponential function of the type $\exp (cx+a) $, where the parameters $c$ and $a$ take on real values. The first step of the DFA method is to integrate the signal \[Sec.\[secpuren\]\]: $$\int_{0}^{x}\exp (\frac{cy}{N}+a)dy=\allowbreak N\frac{e^{\frac{cx}{N} +a}-e^{a}}{c}, \label{d2}$$ where $N$ is the length of the signal and $x\in (0,N]$. We divide the variable in the exponential by $N$, so that $x/N$ lies in the interval $(0,1]$, as considered in Sec. \[sectrends\]. The next step of the DFA method is to divide the integrated signal into boxes of length $n$. For DFA-1, the squared detrended fluctuation function in the $k$-th box, $F^{2}(n,k)$, is $$F^{2}(n,k)=\frac{1}{n}\int_{(k-1)n}^{kn}\left[ N\frac{e^{\frac{cx}{N} +a}-e^{a}}{c}-(b_{k}x-d_{k})\right] ^{2}dx, \label{d3}$$ where the parameters $b_{k}$ and $d_{k}$ are obtained by a least-squares linear fit to the integrated signal in the $k$-th box. These two parameters can be obtained analytically, although their expressions are too lengthy to reproduce here. To obtain the squared detrended fluctuation function for the entire signal partitioned in non-overlapping boxes of length $n$, we sum over all boxes and calculate the average value: $$F^{2}(n)=\frac{1}{N/n}\sum_{k=1}^{N/n}F^{2}(n,k)=\frac{1}{N/n} \sum_{k=1}^{N/n}\frac{1}{n}\int_{(k-1)n}^{kn}\left[ N\frac{e^{\frac{cx}{N} +a}-e^{a}}{c}-(b_{k}x-d_{k})\right] ^{2}dx. \label{d4}$$ Here, the index $k$ in the sum ranges from $1$ to $N/n$ (there are $N/n$ boxes of length $n$ in the signal of length $N$).
Using the analytical expressions for $b_{k}$ and $d_{k}$, $F^{2}(n)$ can be presented analytically in the form: $$F^{2}(n)=g(n)\cdot h(n), \label{d5}$$ where $$g(n)=\left\{ -8Nc^{2}n^{2}\left( 1+e^{\frac{cn}{N}}+e^{\frac{2cn}{N}}\right) +c^{3}n^{3}\left( e^{\frac{2cn}{N}}-1\right) +24N^{2}\left[ -\left( e^{\frac{ cn}{N}}-1\right) ^{2}N-cn+cne^{\frac{2cn}{N}}\right] \right\} \label{d6}$$ and $$h(n)=\frac{e^{2a}\left( e^{2c}-1\right) N^{2}}{2c^{6}\left( e^{\frac{2cn}{N} }-1\right) n^{3}}. \label{d7}$$ Due to the complexity of $g(n)$ and $h(n)$, the expression of $ F^{2}(n) $ is very complicated. However, as $n<N$ (and usually, $n\ll N$), one can expand $F^{2}(n)\,$in powers of $n$ to obtain: $$F^{2}(n)\simeq \allowbreak \frac{c\left( e^{2c}-1\right)e^{2a}}{1440N^{2}} n^{4}. \label{d8}$$ Finally, for the detrended fluctuation function $F(n)$ we obtain: $$F(n)\simeq \allowbreak \sqrt{\frac{c\left( e^{2c}-1\right) }{1440}} \frac{e^{a}}{N}n^{2}. \label{d9}$$ Thus the DFA-1 scaling exponent is $\alpha =2$ (in agreement with the numerical simulation in Sec. \[sectrends\], Fig. \[trends3\]). In general, we can obtain in a similar way that $\alpha =\ell +1$, when DFA-$\ell$ with an order $\ell$ of polynomial fit is used. [99]{} , (Gordon & Breach, New York, 1981). H. E. Hurst, Trans. Am. Soc. Civ. Eng. [**116**]{}, 770 (1951). B. B. Mandelbrot and J. R. Wallis, Water Resources Res. [**5**]{}, No. 2, 321 (1969). C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, Phys. Rev. E [**49**]{}, 1685 (1994). M. S. Taqqu, V. Teverovsky, and W. Willinger, Fractals [**3**]{}, 785 (1995). C.-K. Peng, S. V. Buldyrev, A. L. Goldberger, S. Havlin, M. Simons, and H. E. Stanley, Phys. Rev. E [**47**]{}, 3730 (1993). S. M. Ossadnik, S. B. Buldyrev, A. L. Goldberger, S. Havlin, R. N. Mantegna, C.-K. Peng, M. Simons, and H. E. Stanley, Biophys. J. [**67**]{}, 64 (1994). G. M. Viswanatha, C.-K. Peng, H. E. Stanley, and A. L. Goldberger, Phys. Rev. 
E [**55**]{}, 845 (1997). J. W. Kantelhardt, E. Koscielny-Bunde, H. H. A. Rego, S. Havlin, and A. Bunde, Physica A [**295**]{}, 441 (2001). K. Hu, P. Ch. Ivanov, Z. Chen, P. Carpena and H. E. Stanley, Phys. Rev. E [**64**]{}, 011114 (2001). R. N. Mantegna, S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, M. Simons, and H. E. Stanley, Phys. Rev. Lett. [**73**]{}, 3169 (1994). R. N.  Mantegna, S. V. Buldyrev, A. L. Goldberger, S. Havlin, C.-K. Peng, M. Simons, and H. E.  Stanley, Phys. Rev. Lett. [**76**]{}, 1979 (1996). P. Carpena, P. Bernaola–Galván, P. Ch. Ivanov, and H. E. Stanley, Nature (London) [**418**]{}, 955 (2002). K. K. L. Ho, G. B. Moody, C.-K. Peng, J. E. Mietus, M. G. Larson, D. Levy, and A. L. Goldberger, Circulation [**96**]{}, 842 (1997). P. Ch. Ivanov, A. Bunde, L. A. Nunes Amaral, S. Havlin, J. Fritsch-Yelle, R. M. Baevsky, H. E. Stanley, and A. L. Goldberger, Europhys. Lett. [**48**]{}, 594 (1999). S. M. Pikkujamsa, T. H. Makikallio, L. B. Sourander, I. J. Raiha, P. Puukka, J. Skytta, C.-K. Peng, A. L. Goldberger, and H. V. Huikuri, Circulation [**100**]{}, 393 (1999). S. Havlin, S. V. Buldyrev, A. Bunde, A. L. Goldberger, P. Ch. Ivanov, C.-K. Peng, and H. E. Stanley, Physica A [**273**]{}, 46 (1999). H. E. Stanley, L. A. Nunes Amaral, A. L. Goldberger, S. Havlin, P. Ch. Ivanov, and C.-K. Peng, Physica A [**270**]{}, 309 (1999). P. Ch. Ivanov, L. A. Nunes Amaral, A. L. Goldberger, and H. E. Stanley, Europhys. Lett. [**43**]{}, 363 (1998). P. A. Absil, R. Sepulchre, A. Bilge, and P. Gerard, Physica A [**272**]{}, 235 (1999). K. Hu, P. Ch. Ivanov, M. F. Hilton, Z. Chen, R. T. Ayers, H. E. Stanley and S. A. Shea, Proc. Natl. Acad. Sci. U.S.A, [**101**]{}, 18223 (2004). D. Toweill, K. Sonnenthal, B. Kimberly, S. Lai, and B. Goldstein, Crit. Care Med. [**28**]{}, 2051 (2000). A. Bunde, S. Havlin, J. W. Kantelhardt, T. Penzel, J. H. Peter, and K. Voigt, Phys. Rev. Lett. [**85**]{}, 3736 (2000). T. T. Laitio, H. V. Huikuri, E. S. H. Kentala, T. 
H. Makikallio, J. R. Jalonen, H. Helenius, K. Sariola-Heinonen, S. Yli-Mayry, and H. Scheinin, Anesthesiology [**93**]{}, 69 (2000). Y. Ashkenazy, P. Ch. Ivanov, S. Havlin, C.-K. Peng, A. L. Goldberger, and H. E. Stanley, Phys. Rev. Lett. [**86**]{}, 1900 (2001). P. Ch. Ivanov, L. A. Nunes Amaral, A. L. Goldberger, M. G. Rosenblum, H. E. Stanley, and Z. R. Struzik, Chaos [**11**]{}, 641 (2001). J. W. Kantelhardt, Y. Ashkenazy, P. Ch. Ivanov, A. Bunde, S. Havlin, T. Penzel, J.-H. Peter, and H. E. Stanley, Phys. Rev. E [**65**]{}, 051908 (2002). R. Karasik, N. Sapir, Y. Ashkenazy, P. Ch. Ivanov, I. Dvir, P. Lavie, and S. Havlin, Phys. Rev. E [**66**]{}, 062902 (2002). J. C. Echeverria, M. S. Woolfson, J. A. Crowe, B. R. Hayes-Gill, G. D. H. Croaker, and H. Vyas, Chaos [**13**]{}, 467 (2003). J. W. Kantelhardt, S. Havlin, and P. Ch. Ivanov, Europhys. Lett. [**62**]{}, 147 (2003). P. A. Robinson, Phys. Rev. E [**67**]{}, 032902 (2003). K. Hu, P. Ch. Ivanov, Z. Chen, M. F. Hilton, H. E. Stanley, and S. A. Shea, Physica A [**337**]{}, 307 (2004). J. M. Hausdorff, C.-K. Peng, Z. Ladin, J. Wei, and A. L. Goldberger, J. Applied Physiol. [**78**]{}, 349 (1995). J. M. Hausdorff, Y. Ashkenazy, C.-K. Peng, P. Ch. Ivanov, H. E. Stanley, and A. L. Goldberger, Physica A [**302**]{}, 138 (2001). Y. Ashkenazy, J. M. Hausdorff, P. Ch. Ivanov, and H. E. Stanley, Physica A [**316**]{}, 662 (2002). N. Scafetta, L. Griffin, and B. J. West, Physica A [**328**]{}, 561 (2003). B. J. West and N. Scafetta, Phys. Rev. E [**67**]{}, 051917 (2003). K. Ivanova and M. Ausloos, Physica A [**274**]{}, 349 (1999). K. Ivanova, T. P. Ackerman, E. E. Clothiaux, P. Ch. Ivanov, H. E. Stanley, and M. Ausloos, J. Geophys. Res. [**108**]{}, 4268 (2003). E. Koscielny-Bunde, A. Bunde, S. Havlin, H. E. Roman, Y. Goldreich, and H. J. Schellnhuber, Phys. Rev. Lett. [**81**]{}, 729 (1998). P. Talkner and R. O. Weber, Phys. Rev. E [**62**]{}, 150 (2000). A. Kiraly and I. M. Janosi, Phys. Rev.
E [**65**]{}, 051102 (2002). J. F. Eichner, E. Koscielny-Bunde, A. Bunde, S. Havlin, and H. J. Schellnhuber, Phys. Rev. E [**68**]{}, 046133 (2003). K. Fraedrich and R. Blender, Phys. Rev. Lett. [**90**]{}, 108501 (2003). M. Pattantyus-Abraham, A. Kiraly, and I. M. Janosi, Phys. Rev. E [**69**]{}, 021110 (2004). A. Montanari, R. Rosso, and M. S. Taqqu, Water Resour. Res. [**36**]{}, (5) 1249 (2000). C. Matsoukas, S. Islam, and I. Rodriguez-Iturbe, J. Geophys. Res., \[Atmos.\] [**105**]{}, 29165 (2000). Z. Siwy, M. Ausloos, and K. Ivanova, Phys. Rev. E [**65**]{}, 031907 (2002). P. A. Varotsos, N. V. Sarlis, and E. S. Skordas, Phys. Rev. E [**67**]{}, 021109 (2003). P. A. Varotsos, N. V. Sarlis, and E. S. Skordas, Phys. Rev. E [**68**]{}, 031106 (2003). M. A. Moret, G. F. Zebende, E. Nogueira, and M. G. Pereira, Phys. Rev. E [**68**]{}, 041104 (2003). S. Bahar, J. W. Kantelhardt, A. Neiman, H. H. A. Rego, D. F. Russell, L. Wilkens, A. Bunde, and F. Moss, Europhys. Lett. [**56**]{}, 454 (2001). H. D. Jennings, P. Ch. Ivanov, A. M. Martins, P. C. da Silva, and G. M. Vishwanathan, Physica A [**336**]{}, 585 (2004). N. Vandewalle and M. Ausloos, Phys. Rev. E [**58**]{}, 6832 (1998). Y. Liu, P. Gopikrishnan, P. Cizeau, M. Meyer, C.-K. Peng, and H. E. Stanley, Phys. Rev. E [**60**]{}, 1390 (1999). I. M. Janosi, B. Janecsko, and I. Kondor, Physica A [**269**]{}, 111 (1999). M. Ausloos, N. Vandewalle, P. Boveroux, A. Minguet, and K. Ivanova, Physica A [**274**]{}, 229 (1999). M. Roberto, E. Scalas, G. Cuniberti, and M. Riani, Physica A [**269**]{}, 148 (1999). P. Grau-Carles, Physica A [**287**]{}, 396 (2000). M. Ausloos and K. Ivanova, Phys. Rev. E [**63**]{}, 047201 (2001). M. Ausloos and K. Ivanova, Int. J. Mod. Phys. C [**12**]{}, 169 (2001). Z. Chen, P. Ch. Ivanov, K. Hu, and H. E. Stanley, Phys. Rev. E [**65**]{}, 041107 (2002). , (Cambridge University Press, New York, 2000). C.-K. Peng, S. Havlin, M. Schwartz, and H. E. Stanley, Phys. Rev.
A [**44**]{}, 2239 (1991). H. A. Makse, S. Havlin, M. Schwartz, and H. E. Stanley, Phys. Rev. E [**53**]{}, 5445 (1996).
--- abstract: 'We review the introduction of likelihood functions and Fisher information in classical estimation theory, and we show how they can be defined in a very similar manner within quantum measurement theory. We show that the stochastic master equations describing the dynamics of a quantum system subject to a definite set of measurements provide likelihood functions for unknown parameters in the system dynamics, and we show that the estimation error, given by the Fisher information, can be identified by stochastic master equation simulations. For large parameter spaces we describe and illustrate the efficient use of Markov Chain Monte Carlo sampling of the likelihood function.' author: - 'S[ø]{}ren Gammelmark' - 'Klaus M[ø]{}lmer' bibliography: - 'biblio.bib' title: Bayesian parameter inference from continuously monitored quantum systems --- Introduction ============ Sensors and measurement devices are affected by the presence or strength of physical effects that influence their dynamics in a detectable way. A proper statistical treatment of measurement data is important when inferring results from complex experiments. With the growing use of quantum systems for high precision measurements, a whole research domain of quantum metrology has emerged. Limitations to measurement precision from quantum mechanical uncertainties have been investigated, and protocols to use measurements to optimally distinguish differently prepared quantum states have been developed, cf. [@braunstein_statistical_1994; @wiseman_quantum_1996; @Boixo2008; @Grond2011; @Lucke2011; @Giovannetti2004]. The continuous observation of a quantum system involves leakage of information via coupling of the system to a suitable meter, and an archetypal example is that of measurements of photons emitted from a quantum light source.
Laser spectroscopy thus involves the excitation of a quantum system, and detection of the fluorescence signal as a function of laser frequency permits a fit, e.g., to a Lorentzian distribution and thus yields information about the resonance frequency and linewidth. The resonance curve, however, represents only a part of the acquired data, as it omits details concerning the temporal dynamics and noise properties of the detection signal. With the emergence of stochastic Schrödinger and master equations, which determine the quantum state conditioned on the full, noisy detection signals, it has been a natural next step to develop strategies to extract information from continuously probed systems. Immediate applications then concern sensing of the magnitude of perturbations acting on the system, such as the magnetic field probed in atomic magnetometers [@Wasilewski2010; @Shah2010], and near field effects, e.g., from nuclear spins, probed by a single NV-center in diamond [@Kolkowitz2012; @Zhao2012]. Theoretical strategies have been proposed to continuously update parameter estimates based on the acquired data in a Bayesian manner on equal footing with the conditioned quantum state of the probe system [@Gambetta2001; @Negretti2012]. The latter method is particularly useful if the system can be approximated by Gaussian states [@Petersen2005; @Geremia2003]. The purpose of the present paper is to provide a formal link between some of the central ideas in classical estimation theory and stochastic master equations, and to identify efficient and systematic means to estimate unknown parameters from quantum measurement records. We will, in particular, discuss and demonstrate methods applicable in cases where the parameter space is too large to permit a recursive Bayesian update procedure.
The methods are general, but for concreteness, we will consider light emitting quantum systems, and we will present explicit analyses and results for direct photon detection and for homodyne detection of the emitted radiation. In photon counting, the measurement signal is a discrete process $N_t$, characterized by click events at specific times, and the density matrix $\rho_t$ of the emitter conditioned on the detection signal until time $t$ satisfies the non-linear filter equation $$\begin{gathered} d\rho_t = \left[ -i{\left[ H, \rho_t \right]} - \frac{1}{2}{\left\{ c^\dagger c, \rho_t \right\}} + \operatorname{Tr}(c^\dagger c \rho_t)\rho_t \right] dt + \\ \left[ \frac{c \rho_t c^\dagger}{\operatorname{Tr}(c^\dagger c \rho_t) } - \rho_t \right] dN_t, \label{eq:JumpFilter}\end{gathered}$$ where the differential measurement result $dN_t$ is a Poisson increment, which is either $0$ (no click event) or $1$ (detector click event). For the special case of a two level atom with upper (lower) states $\ket{e(g)}$ and with upper state lifetime $1/\gamma$, the conditional expectation $\operatorname{\mathbb{E}}[ dN_t | N_t ] = \operatorname{Tr}(c^\dagger c \rho_t) dt$, where $c=\sqrt{\gamma}\ket{g}\bra{e}$. The time-evolution of $\rho_t$ is therefore piecewise continuous (when $dN_t = 0$), but interrupted by jumps $\rho_t \mapsto c\rho_t c^\dagger / \operatorname{Tr}(c^\dagger c\rho_t)$ at discrete times (when $dN_t = 1$). It is also possible to perform field amplitude measurements by homodyne and heterodyne detection. 
The measurement signal $Y_t$ is then a continuous function of time, and, e.g., when homodyne detection is performed on the fluorescence emitted by the decaying two-level atom, the conditioned density matrix satisfies an Itô stochastic differential equation $$\begin{gathered} d\rho_t = \left[-i{\left[ H, \rho \right]} - {\left\{ c^\dagger c, \rho \right\}}/2 + c \rho c^\dagger \right] dt + \\ ({\mathcal{M}}(\rho_t) - \operatorname{Tr}({\mathcal{M}}(\rho_t))\rho_t) (dY_t - \operatorname{Tr}({\mathcal{M}}(\rho_t)) dt), \label{eq:DiffusionFilter}\end{gathered}$$ where ${\mathcal{M}}(\rho) = c\rho + \rho c^\dagger$. The differential measurement result $dY_t$ satisfies $dY_t = \operatorname{Tr}({\mathcal{M}}(\rho_t)) dt + dW_t$, where $dW_t$ is a Wiener increment with zero mean and variance $dt$. In a generic experiment, all terms in the stochastic master equation can be parametrized by a vector of classical variables $\theta \in \mathbb{R}^n$, such as laser-atom detunings, which may in turn depend on the unknown values of externally applied fields, decay rates, temperature, etc. In order to solve Eqs. (\[eq:JumpFilter\], \[eq:DiffusionFilter\]), candidate values for these parameters need to be specified, and the goal of parameter estimation by continuous quantum measurements is to identify the best candidate values for the parameters $\theta$, given the actual measurement record ($N_t$ or $Y_t$). In Sec. II, we review general parameter estimation concepts relevant to this manuscript: Bayesian inference, likelihood functions, and the Fisher information. In Sec. III, we show how likelihood functions and the Fisher information can be efficiently obtained from the solution of the stochastic master equation of continuously monitored quantum systems. In Sec. 
IV, we introduce the Markov chain Monte Carlo method for efficient sampling of the likelihood function in large search spaces, and we give numerical examples which illustrate the application and the results of our methods. In Sec. V, we present a conclusion and outlook. Bayesian inference {#sec:BayesianInference} ================== Our theory of estimation is based on Bayes rule, $$\begin{aligned} P(\theta| D) = \frac{P(D | \theta ) P(\theta)}{P(D)}, \label{eq:BayesRule}\end{aligned}$$ where $P(\theta | D)$ is the probability density of the parameters $\theta$, given the observed data $D$. Informally, this object contains all the information about the system parameters $\theta$ contained in the observed data $D$. From this distribution we can calculate any estimate of interest, including the mean value, the mode, and quantiles. An important advantage of calculating the full probability density $P(\theta| D)$ is that it explicitly contains information about the uncertainty of the estimates. Bayes rule relates the conditional probability density to the probability of observing the data given the parameters $P(D|\theta)$ and the existing prior information about the parameters $P(\theta)$. The difficulty in using Eq. (\[eq:BayesRule\]) stems from the denominator being a weighted integral over all possible parameter values $P(D) = \int d\theta P(D|\theta)P(\theta)$. This integral is high-dimensional when several parameters are estimated, and the integrand can vary by many orders of magnitude. To determine $P(\theta|D)$ in practice, we therefore need a method for calculating $P(D|\theta)$ and a numerically efficient method of calculating $P(D)$. Likelihood functions {#sec:Likelihood} -------------------- Since the data $D$ is a measurement record, i.e., a function of time, its probability density, or *likelihood*, $P(D|\theta)$, is difficult to define.
Apart from its use in the Bayesian update rule (\[eq:BayesRule\]), it is common to maximize the likelihood with respect to the parameters $\theta$ and thus to estimate the true value of the parameter by the value for which the likelihood for generating the data is highest. Instead of maximizing $P(D|\theta)$ with respect to $\theta$, one may maximize the value of any strictly increasing function $f$ of $P(D|\theta)$. The logarithm is commonly used, and the resulting function is then denoted the *log-likelihood* function. It is also possible to divide $P(D|\theta)$ by any strictly positive function $P_0(D)$, without changing the location of the maximum with respect to $\theta$. Thus any function $f( P(D|\theta) / P_0(D) )$ can be used to determine the maximum likelihood estimate of $\theta$. We will use the term *likelihood function* for any function $L(D|\theta) = P(D|\theta) / P_0(D)$ and *log-likelihood* for $l(D|\theta) = \log L(D|\theta)$. The likelihood function $L(D|\theta)$ can to a large extent be chosen to have a convenient form. Since $P(D) = \int d\theta P(D|\theta) P(\theta) = P_0(D) \int d\theta L(D|\theta) P(\theta)$ we can rewrite Eq. (\[eq:BayesRule\]) as $$\begin{aligned} P(\theta|D) = \frac{ P(D|\theta) P(\theta)}{P_0(D) \int d\theta L(D|\theta) P(\theta) }= \frac{L(D|\theta) P(\theta)}{ \int d\theta L(D|\theta) P(\theta) }.\end{aligned}$$ In the following we shall calculate the likelihood associated with continuously monitored light emitting quantum systems, where the function $P_0(D)$ is the probability density for either a Poisson- or a Wiener-process. Fisher information {#sec:FisherTheory} ------------------ A reasonable question to ask is: how accurate is the Bayesian estimate on average, and what is the fundamental limit on how accurately $\theta$ can be estimated? The answer to this question is given by the Fisher Information matrix.
The Fisher Information matrix is defined in terms of the probability density $P(D|\theta)$ for the data given some parameter $\theta$ as $$\begin{aligned} I = \operatorname{\mathbb{E}}\left[ \left( \frac{\partial \log P(D|\theta)}{\partial\theta} \right)^2 \right], \label{eq:BasicFisher}\end{aligned}$$ where the expectation is over all possible realizations of the data $D$. The Cramér-Rao bound [@cramer_mathematical_1954] states that any unbiased estimator $\hat\theta(D)$ for $\theta$ has a variance of at least $1/I(\theta_0)$, where $\theta_0$ is the true value of $\theta$. If one uses a uniform prior, the Fisher Information of $P(D|\theta)$ is the reciprocal of the width of $P(\theta|D)$ averaged over the possible measurement records. If a non-uniform prior is included, the reciprocal width of $P(\theta|D)$ is then, qualitatively, the sum of the Fisher Information for $\theta$ and the reciprocal width of the prior. As described in Sec. \[sec:Likelihood\], we can use any likelihood function in place of the conditional probability $P(D|\theta)$. The same result holds for the Fisher information. That is, in Eq. (\[eq:BasicFisher\]) we can use any likelihood function $L(D|\theta)$ instead of the probability density. For multiple variables, the Fisher information is $$\begin{aligned} I_{ij} &= \operatorname{\mathbb{E}}\left[ \frac{\partial \log L(D|\theta) }{\partial\theta_i} \frac{\partial \log L(D|\theta) }{\partial\theta_j} \right], \label{eq:GeneralFisher} \\ &=\operatorname{\mathbb{E}}\left[ L(D|\theta)^{-2} \frac{\partial L(D|\theta) }{\partial \theta_i} \frac{\partial L(D|\theta) }{\partial \theta_j} \right] \label{eq:GeneralFisher2}\end{aligned}$$ where $L(D|\theta) $ is a likelihood function for observing $D$ given the parameters $\theta$. The Cramér-Rao bound now states that, for any unbiased estimator $\hat\theta$, $\operatorname{\mathbb{E}}[ (\hat\theta_i - \theta_i^0) (\hat\theta_j - \theta_j^0) ] \geq ( I(\theta)^{-1} )_{ij} $.
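Before turning to quantum trajectories, these definitions can be illustrated with a purely classical example. The sketch below (our own illustration; the Poisson click process and all parameter values are chosen for concreteness) estimates the rate $\lambda$ of a Poisson process observed for a time $T$, for which the Fisher information is $I(\lambda)=T/\lambda$. It checks that the empirical mean-square score reproduces $I(\lambda)$ and that the maximum-likelihood estimator $\hat\lambda = N/T$ saturates the Cramér-Rao bound $\lambda/T$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, T, n_trials = 2.0, 50.0, 20000

# Number of clicks N in (0, T] for each simulated realization.
counts = rng.poisson(lam_true * T, size=n_trials)

# Log-likelihood (up to a theta-independent term): l = N log(lam) - lam*T,
# so the score is dl/dlam = N/lam - T.
scores = counts / lam_true - T
fisher_emp = np.mean(scores**2)              # empirical version of the Fisher info
fisher_theory = T / lam_true                 # I(lam) = T / lam

# Maximum-likelihood estimator; Cramér-Rao bound is lam/T = 1/I.
lam_hat = counts / T
print(fisher_emp, fisher_theory, np.var(lam_hat), lam_true / T)
```

The same recipe carries over to the quantum case treated next, with the score replaced by the derivative of the trajectory log-likelihood with respect to the unknown parameters.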
Continuously monitored quantum systems {#sec:Continously-monitored-systems} ====================================== The jump and diffusion quantum filter equations (\[eq:JumpFilter\], \[eq:DiffusionFilter\]) are special cases of the general transformation of open quantum system density matrices subject to the random back action of measurements. If, at a given instant of time, a measurement occurs with outcome $m \in M$, there is an effect-operator $\Omega(m)$ associated with each outcome, so that the state, conditioned on the outcome $m$, reads $$\begin{aligned} \rho|m = \frac{\Omega(m) \rho \Omega^\dagger(m)}{\operatorname{Tr}(\Omega^\dagger(m) \Omega(m) \rho)}. \label{eq:FullConditional}\end{aligned}$$ The probability (density) for observing the result $m$ is $$\begin{aligned} p(m) = \operatorname{Tr}(\Omega^\dagger(m) \Omega(m) \rho), \label{eq:usualp}\end{aligned}$$ and the effect operators obey the relation $$\begin{aligned} \int dm \Omega^\dagger(m)\Omega(m) = {\mathds{1}}. \end{aligned}$$ The normalization factors in Eq. (\[eq:FullConditional\]) are exactly the probabilities (\[eq:usualp\]) to obtain the corresponding measurement outcome, and a similar probabilistic interpretation holds for the non-linear terms $\operatorname{Tr}(c^\dagger c\rho_t)\rho_t$ and $\operatorname{Tr}({\mathcal{M}}(\rho_t))\rho_t$ in Eqs. (\[eq:JumpFilter\], \[eq:DiffusionFilter\]). This implies that if the stochastic density matrix equation is solved without incorporating the renormalization factors, the decreasing trace of $\rho$ with time yields the likelihood for the actual detection record to occur. In the quantum jump master equation, the jump probability, and hence the decrease in density matrix norm associated with a single jump, is proportional to the duration of the infinitesimal time step $dt$ chosen for the simulation.
This causes an undesired and inconvenient dependence of the likelihood function on $dt$ and on the number of click events, which we can, however, eliminate by a simple extension of the theory [@wiseman_quantum_1996] similar to [@Goetsch1994]. We introduce an arbitrary positive function $p_0(m)$ and rescale the effect operators $\Omega(m)\rightarrow \Omega(m)/\sqrt{p_0(m)}$ so that they now obey the modified normalization condition $$\begin{aligned} \int_M dm p_0(m) \Omega^\dagger(m) \Omega(m) = {\mathds{1}}.\end{aligned}$$ Eq. (\[eq:FullConditional\]) still holds, but the probability distribution for the different outcomes factorizes as $$\begin{aligned} p(m) = p_0(m) \operatorname{Tr}(\Omega^\dagger(m) \Omega(m) \rho), \label{eq:RadonNikodym}\end{aligned}$$ and we have the freedom to choose the un-normalized conditional states, $$\begin{aligned} \tilde\rho|m = \Omega(m) \rho \Omega^\dagger(m),\end{aligned}$$ whose trace now depends explicitly on the chosen function $p_0(m)$. The expectation value, denoted by $\operatorname{\mathbb{E}}$, of any function $f(m)$ is given by $$\begin{gathered} \operatorname{\mathbb{E}}[ f(m) ] = \int_M dm p(m) f(m) \\ = \int_M dm p_0(m) \operatorname{Tr}(\tilde\rho|m) f(m) \equiv \operatorname{\mathbb{E}}_0[ \operatorname{Tr}(\tilde\rho|m) f(m) ],\end{gathered}$$ where $\operatorname{\mathbb{E}}_0$ is to be understood as the expectation with respect to the reference probability $p_0$. In the following, we will suppress the dependence on the measurement outcomes and simply write $\tilde\rho$ rather than $\tilde\rho|m$ for the conditioned density matrix. The trace of the conditioned state is renormalized by a factor that depends on the specific detection record but does not change its relative dependence on different values of the unknown parameters $\theta$. It thus still serves as a likelihood function for the Bayesian determination of their values. Our scaling with the function $p_0(m)$ in Eq.
(\[eq:RadonNikodym\]) is indeed equivalent to the scaling allowed in the definition of the likelihood function, $L(D|\theta) = P(D|\theta)/P_0(D)$, in Sec. \[sec:Likelihood\]. The “ostensible probability” $p_0$ [@wiseman_quantum_1996] provides a reference measure $p_0(m) dm$ on the set of measurement outcomes, and for our application it serves as a convenient unit for the effect operators $\Omega(m)$. The relative entropy of $p$ with respect to $p_0$ is given directly in terms of $\operatorname{Tr}(\tilde\rho)$ as $S(p|p_0) = \operatorname{\mathbb{E}}[\log(\operatorname{Tr}(\tilde\rho))]$. Describing continuous measurements as the $N\rightarrow \infty$ limit of a process of $N$ repeated measurements, it is natural to consider a general reference probability distribution on $M^N$, such that the probability of the measurement record factorizes as $$\begin{gathered} p(m_1, \ldots m_N) = p^{(N)}_0(m_1, \ldots m_N) \\ \operatorname{Tr}( \Omega(m_N) \ldots \Omega(m_1) \rho \Omega^\dagger(m_1) \ldots \Omega^\dagger(m_N) ),\end{gathered}$$ where convenient reference probabilities $p^{(N)}_0({\boldsymbol{m}}) = p_0(m_1) \ldots p_0(m_N)$ for the jump-type and diffusion-like measurements will be given below. Jump type equation ------------------ ![image](rotatingspin_trajectory.pdf) For the jump-type measurements there are for each small time interval $dt$ two possible detector outcomes, $dN_t = 0$ and $dN_t = 1$. We use our freedom to choose $p_0$ as the probability for a Poisson process with rate $\lambda$, i.e.
$p_0(dN_t = 1) = \lambda dt$ and $p_0(dN_t = 0) = 1 - \lambda dt$, and the correspondingly normalized measurement effect operators $$\begin{aligned} \begin{split} \Omega_0 &= {\mathds{1}}- i H dt - \frac{1}{2} (c^\dagger c - \lambda) dt \\ \Omega_1 &= \frac{c}{\sqrt{\lambda}} \end{split}\end{aligned}$$ The probability for a detector click is $p_0(dN_t = 1)\operatorname{Tr}( \Omega_1^\dagger \Omega_1 \rho_t) = \operatorname{Tr}(c^\dagger c \rho_t) dt$ as expected from a rate process, and the expected number of events is $\operatorname{\mathbb{E}}[dN_t|N_t] = \operatorname{Tr}(c^\dagger c \rho_t) dt$, while the reference expected value is $\operatorname{\mathbb{E}}_0[dN_t|N_t] = \lambda dt$. The un-normalized conditional quantum state can be expressed as follows $$\begin{gathered} d\tilde\rho_t = \left[ -i{\left[ H, \tilde\rho_t \right]} - \frac{1}{2}{\left\{ c^\dagger c, \tilde\rho_t \right\}} + \lambda\tilde\rho_t \right] dt \\ + dN_t \left[ \frac{c \tilde\rho_t c^\dagger}{\lambda} - \tilde\rho_t \right], \label{eq:JumpLinear}\end{gathered}$$ while explicit normalization leads to Eq. (\[eq:JumpFilter\]). The dynamics of $\tilde\rho_t$ is governed by the Hamiltonian $H$ and the operator $c$ which in turn depend on the parameters $\theta$. The likelihood of a specific sequence of detection events at times $t_1, \ldots t_N < t$ is simply $L( t_1, \ldots t_N | \theta) = \operatorname{Tr}(\tilde\rho_t)$, and Eq. (\[eq:JumpLinear\]) thus provides a differential equation for the likelihood function $L_t = \operatorname{Tr}(\tilde\rho_t)$ $$\begin{aligned} d L_t &= (\lambda L_t - \operatorname{Tr}(c^\dagger c \tilde\rho_t)) dt + dN_t \left[ \frac{ \operatorname{Tr}(c^\dagger c \tilde\rho_t) }{\lambda} - L_t \right], \label{eq:like-lin}\end{aligned}$$ where we have suppressed $L_t$’s dependence on $\theta$. The solutions $\rho_t$ of Eq. (\[eq:JumpFilter\]) and $\tilde{\rho}_t$ of Eq. 
(\[eq:JumpLinear\]) obey $\tilde{\rho}_t=\operatorname{Tr}(\tilde\rho_t) \rho_t = L_t \rho_t$, which can be inserted in Eq. (\[eq:like-lin\]) to yield $$\begin{aligned} d L_t &= (\lambda - \operatorname{Tr}(c^\dagger c \rho_t)) L_t dt + dN_t \left[ \lambda^{-1} \operatorname{Tr}(c^\dagger c \rho_t) - 1 \right] L_t. \label{eq:JumpNotLog}\end{aligned}$$ This shows that even though the likelihood is formally defined by the trace of the un-normalized conditioned density matrix, $L_t$ can be calculated from the normalized state $\rho_t$ satisfying Eq. (\[eq:JumpFilter\]). For numerical purposes it is convenient to work with $l_t = \log L_t$, which satisfies $$\begin{aligned} dl_t = (\lambda - \operatorname{Tr}(c^\dagger c \rho_t)) dt + dN_t \log(\operatorname{Tr}(c^\dagger c\rho_t)/\lambda). \label{eq:JumpLikelihood}\end{aligned}$$ Diffusion equation ------------------ For diffusion-type measurements, describing, e.g., homodyne detection of light, the set of outcomes in a small time interval $dt$ is the set of real numbers. We will here use the probability of a Wiener increment $dW_t$, i.e. a normal distribution with mean zero and variance $dt$, as our reference probability $p_0^W$.
The effect of observing a result $dY_t$ is $$\begin{aligned} \Omega(dY_t) = {\mathds{1}}- i H dt - \frac{1}{2} c^\dagger c dt + c dY_t,\end{aligned}$$ and the probability for observing a given value $dY_t$ is $$\begin{aligned} p(dY_t) = p_0^W(dY_t) (1 + \operatorname{Tr}((c + c^\dagger)\rho_t) dY_t).\end{aligned}$$ We can calculate $\operatorname{\mathbb{E}}[dY_t | Y_t] = \operatorname{\mathbb{E}}_0[ (1 + \operatorname{Tr}((c + c^\dagger)\rho_t) dY_t) dY_t | Y_t] = \operatorname{Tr}((c + c^\dagger)\rho_t) dt$ and $\operatorname{\mathbb{E}}[dY_t^2|Y_t] = \operatorname{\mathbb{E}}_0[ (1 + \operatorname{Tr}((c + c^\dagger)\rho_t) dY_t) dY_t^2 | Y_t] = dt$, which implies $$\begin{aligned} dY_t = \operatorname{Tr}((c + c^\dagger)\rho) dt + dW_t, \label{eq:DiffusionMeasurementResult}\end{aligned}$$ where $dW_t$ is a Wiener increment with respect to the full probability distribution $p$ (while $dY_t$ is a Wiener increment with respect to $p_0$). The un-normalized stochastic differential equation becomes $$\begin{gathered} d\tilde\rho_t = \left[ -i{\left[ H, \tilde\rho_t \right]} - \frac{1}{2}{\left\{ c^\dagger c, \tilde\rho_t \right\}} + c \tilde\rho_t c^\dagger \right] dt \\ + (c \tilde\rho_t + \tilde\rho_t c^\dagger) dY_t, \label{eq:DiffusionLinear}\end{gathered}$$ and it leads to the likelihood $L_t = \operatorname{Tr}(\tilde\rho_t)$ satisfying $$\begin{aligned} dL_t = \operatorname{Tr}( {\mathcal{M}}(\tilde\rho_t ) ) dY_t. \end{aligned}$$ As above, we can also express $L_t$ in terms of the normalized solution to Eq. (\[eq:DiffusionFilter\]), $$\begin{aligned} dL_t = \operatorname{Tr}( {\mathcal{M}}(\rho_t) ) L_t dY_t, \label{eq:DiffusionNotLog}\end{aligned}$$ and the log-likelihood $l_t = \log L_t$ satisfies $$\begin{aligned} dl_t = \operatorname{Tr}({\mathcal{M}}(\rho_t)) (dY_t - \operatorname{Tr}({\mathcal{M}}(\rho_t)) dt). 
\label{eq:DiffusionLikelihood}\end{aligned}$$ Fisher information {#fisher-information} ------------------ Using $\operatorname{Tr}(\tilde\rho_t)$ as our likelihood function, we can apply Eq. (\[eq:GeneralFisher2\]) to calculate the Cramér-Rao bound for estimating the unknown parameters in the system dynamics under both types of measurements. Define the matrices $$\begin{aligned} \rho^i_t = \frac{1}{\operatorname{Tr}(\tilde\rho_t)} \partial_i \tilde\rho_t,\end{aligned}$$ where the derivative is with respect to the $i$’th component of the vector of parameters $\theta$. The expectation value of $\operatorname{Tr}(\rho^i_t)\operatorname{Tr}(\rho^j_t)$ with respect to the probability distribution $p$ (i.e. the actual probability for generating a trajectory) will then be the $ij$-component of the Fisher Information matrix for the continuously monitored quantum system. We can therefore evaluate the Fisher information matrix numerically by simulating the stochastic master equation a large number of times and determine the expectation value Eq. (\[eq:GeneralFisher2\]). In practice, for the jump-type measurement, this requires solution of Eq. (\[eq:JumpFilter\]) together with a simultaneous evaluation of the matrices $\rho^i_t$, which can, in turn, be determined from the inhomogeneous jump type master equation $$\begin{gathered} d\rho^i_t = \left[ -i{\left[ H, \rho^i_t \right]} - \frac{1}{2}{\left\{ c^\dagger c, \rho^i_t \right\}} + \operatorname{Tr}(c^\dagger c \rho^i_t)\rho^i_t \right] dt \\ + \left[ -i{\left[ \partial_i H, \rho_t \right]} - \frac{1}{2}{\left\{ \partial_i(c^\dagger c), \rho^i_t \right\}} \right] dt \\ + dN_t ( c\rho^i_t c^\dagger + (\partial_i c)\rho_t c^\dagger + c\rho_t (\partial_i c^\dagger) - \rho^i_t ), \label{eq:FisherEqJump}\end{gathered}$$ where the stochastic term $dN_t$ takes the same value as in Eq. (\[eq:JumpFilter\]), and where the derivative of the Hamiltonian and damping terms with respect to $\theta_i$ are assumed known. 
Similarly, for the diffusion-type measurement $$\begin{gathered} d\rho^i_t = \left[-i{\left[ H, \rho^i_t \right]} - {\left\{ c^\dagger c, \rho^i_t \right\}}/2 + c \rho^i_t c^\dagger \right] dt\\ + \left[ -i{\left[ \partial_i H, \rho_t \right]} -{\left\{ \partial_i(c^\dagger c), \rho_t \right\}}/2 + (\partial_i c)\rho_t c^\dagger + c \rho_t (\partial_i c^\dagger )\right] dt\\ + ({\mathcal{M}}(\rho^i_t) + (\partial_i{\mathcal{M}})(\rho_t) - \operatorname{Tr}({\mathcal{M}}(\rho_t)) \rho^i_t) (dY_t - \operatorname{Tr}({\mathcal{M}}(\rho_t)) dt), \label{eq:FisherEqDiffusion}\end{gathered}$$ where the Wiener increment $dY_t - \operatorname{Tr}({\mathcal{M}}(\rho_t)) dt = dW_t$ takes the same value as in Eq. (\[eq:DiffusionFilter\]). The Fisher information provides an average quantifier of the asymptotic uncertainty in the estimation problem. With Eqs. (\[eq:JumpFilter\], \[eq:FisherEqJump\]) and Eqs. (\[eq:DiffusionFilter\], \[eq:FisherEqDiffusion\]) we have shown how the Fisher information can be calculated by simulating many independent realizations of the stochastic master equation for the two different types of measurement. These simulations have to be carried out for the candidate values of the parameters to yield the precision expected for an estimate based on a typical experimental run. As illustrated by comparison of other such precision measures in [@Gambetta2001], different measurement schemes have different resolving power, and in future work, we plan to address these differences in more detail, e.g., by comparing the Fisher information derived for the jump-type and for different diffusion-type measurements. We also note that if the field/meter degrees of freedom could be left unmeasured, the full entangled density matrix of the quantum system and the quantized radiation field would depend on the unknown parameters.
Thus the general quantum Cramér-Rao bound derived by Braunstein and Caves [@braunstein_statistical_1994] to determine a parameter, encoded in a quantum state, yields the ultimate accuracy with which the parameters in our state dynamics can be inferred using any type of measurement. Identifying that accuracy, and investigating how closely it is approached by quantum jump and quantum diffusion measurements of the emitted light, presents an interesting challenge for further studies. Numerical investigation ======================= In this section we illustrate the theory outlined in the previous sections with a few characteristic examples. One approach for investigating $P(\theta|D)$ is to compute the likelihood function $L(D|\theta)$ on a grid. Using such a calculation, posterior expectation values of $\theta$ can be calculated by numerical integration. A numerical maximization routine can also be used to find the maximum of $L(D|\theta)$ and thus provide a maximum likelihood estimate of the parameters. The expected uncertainty is given by the Fisher information, found by solving the stochastic master equation for samples of simulated detection records. The posterior probability density may have many local maxima, and it can be difficult to find the global maximum of $L(D|\theta)$ using standard maximization techniques. If the parameter space is very large, more efficient methods for sampling the likelihood function exist. To sample from an un-normalized probability density $\pi(x)$, one can apply a random process for the candidate values in the form of a Markov chain, where the values jump in an appropriately chosen manner so that they attain the correct relative probabilities. The transition probability $t(x_1 \to x_2)$ must hence be chosen such that it asymptotically reproduces the relative probability density $\pi(x)$.
The requirement for the transition rule $t$ is then that the only function that satisfies $\int dx f(x) t(x \to x') = f(x')$ is proportional to our desired $\pi(x)$. A generic way to construct such a Markov chain is the Metropolis-Hastings algorithm [@press2007numerical; @Gilks1995], which is used in many areas of science, and we provide a brief review of our application of the method. The basic idea is to compare the relative probability density of a randomly chosen candidate value $x_2$ with that of the current value $x_1$. The value $x_2$ may be chosen randomly or, more conveniently, according to a *proposal chain* $q(x_1 \to x_2)$, e.g., in the neighborhood of $x_1$. A correct sampling of the probability density is obtained by accepting $x_2$ with the probability $$\begin{aligned} \alpha(x_1, x_2) = \min\left(1, \frac{\pi(x_2)q(x_2 \to x_1)}{\pi(x_1) q(x_1 \to x_2)}\right), \label{eq:MCMCjumpP}\end{aligned}$$ and otherwise retaining the value $x_1$. If the proposal chain is able to explore the entire parameter space, this Markov chain will have $\pi(x)$ as its un-normalized stationary distribution. A nice feature of the Metropolis-Hastings sampling method is that it uses only ratios between different arguments of the functions $\pi$ and $q$. This implies that we can use the un-normalized probabilities $\pi(x)$; for our purpose, these are the likelihood functions found by solving (\[eq:JumpNotLog\]) or (\[eq:DiffusionNotLog\]) with the parameter values $\theta= x_1$ and $\theta=x_2$. In summary, to sample the posterior density for the estimated parameters $P(\theta|D)$ for a continuous quantum measurement using Metropolis-Hastings we select a random $\theta$ from the prior distribution $P(\theta)$, and we proceed as follows: 1. Determine a candidate $\theta_c$ according to some proposal distribution $q(\theta \to \theta_c)$. 2.
Calculate the likelihood or, equivalently, the log-likelihood $l_T^c$ for the data until the final time $T$, using the candidate $\theta_c$ and Eqs. (\[eq:JumpNotLog\], \[eq:DiffusionNotLog\]) or Eqs. (\[eq:JumpLikelihood\], \[eq:DiffusionLikelihood\]) depending on the type of measurement. 3. Calculate $\alpha(\theta, \theta_c) = \min\left(1, \exp(l_T^c - l_T)\, q(\theta_c \to \theta)/q(\theta \to \theta_c)\right)$, where $l_T$ is the log-likelihood for the previous parameter $\theta$. 4. Accept the candidate with probability $\alpha(\theta, \theta_c)$, otherwise keep $\theta$. These steps are repeated a large number of times, and the parameters sampled are then representative and can be used to determine any property of the distribution $P(\theta|D)$. In the simulations presented below, the proposal distribution $q(\theta \to \theta_c)$ was chosen as a multivariate normal distribution centered at $\theta$ with a variance selected to achieve a reasonable acceptance rate of $10\%$ to $50\%$. Many techniques exist for investigating the convergence rate and the correlation length of the Markov chain generated by the above technique [@Gilks1995]. In the simple examples studied in the present manuscript, the convergence rate and correlation length are readily identified, but a more careful analysis of these issues is necessary when applying the technique to an experimental situation with many parameters and uncertainties. Examples ======== Two-level atom -------------- ![(Color online) The left panels show histograms of Markov Chain Monte Carlo sampled distributions of the parameters, $\Omega,\ \Delta,\ \gamma$ in our two-level atom model. The prior knowledge of the parameters assumes normal distributions, shown by the dashed lines, with mean values $\mu_\Omega = 2.0$, $\mu_\Delta = 3.0$, $\mu_\gamma = 1.0$ and standard deviations $\sigma_\Omega = 0.8$, $\sigma_\Delta = 1.0$ and $\sigma_\gamma = 0.5$.
The right panels display the correlations between the different pairs of sampled parameters. []{data-label="fig:TwoLevelMCMC"}](rotatingspin_posterior_full.pdf) Consider a coherently driven two-level atom that decays by spontaneous emission of photons. The atom is described by the Hamiltonian $H = (\Omega/2) \sigma^x + (\Delta/2) \sigma^z$ and by a jump operator $c = \sqrt{\gamma} \sigma^-$, where $\gamma$ is the effective decay rate, ${\boldsymbol{\sigma}} = (\sigma^x, \sigma^y, \sigma^z)$ is the vector of Pauli spin matrices, and $\sigma^-$ denotes the Pauli lowering operator. The measurement record is the set of times at which photons are detected with a photodetector. The top part of Figure \[fig:RotspinExampleTrajectory\] shows an example trajectory, assuming known values $\gamma = 0.55$, $\Omega = 1.3$ and $\Delta = 1.43$ for the atomic and field parameters (in dimensionless units, e.g., relative to the decay rate of another excited state in the same atom). The continuous curves show the components of the Bloch vector ${\boldsymbol{r}}=\operatorname{Tr}(\rho{\boldsymbol{\sigma}})$, and they display continuous evolution interrupted at the discrete times where the discontinuous quantum jumps of the state, associated with the detector clicks, occur. In the bottom part of Figure \[fig:RotspinExampleTrajectory\], we have assumed that $\gamma$ and $\Omega$ are known, and we evaluate the probability distribution for the detuning parameter on a grid, assuming a prior normal distribution for $\Delta$ with a standard deviation $\sigma_\Delta = 1.0$ and mean value $\mu_\Delta = 2.0$. The $\Delta$-distribution is conditioned on the same detection record as applied in the upper part of the Figure, and we observe how the no-click periods cause a continuous change of the posterior density, while the discrete jumps are accompanied by more abrupt changes, until the distribution is well converged.
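The grid evaluation described above can be sketched in a few lines of code: the normalized filter, Eq. (\[eq:JumpFilter\]), is integrated with a simple Euler scheme to generate a click record for the true parameters, and Eq. (\[eq:JumpLikelihood\]) is then integrated over the same record for candidate detunings. This is an illustrative sketch, not the authors' code; the time step, grid range and reference rate $\lambda = 1$ are assumptions chosen for the example.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # lowering operator |g><e|

def filter_step(rho, H, c, dt, click):
    """One Euler step of the normalized jump filter, Eq. (JumpFilter)."""
    cdc = c.conj().T @ c
    if click:
        rho = c @ rho @ c.conj().T / np.real(np.trace(cdc @ rho))
    else:
        drho = (-1j * (H @ rho - rho @ H)
                - 0.5 * (cdc @ rho + rho @ cdc)
                + np.real(np.trace(cdc @ rho)) * rho)
        rho = rho + dt * drho
    return rho / np.real(np.trace(rho))  # guard against Euler drift

def simulate_record(H, c, n_steps, dt, rng):
    """Draw a photodetection record dN_t under the true parameters."""
    rho = np.diag([0.0, 1.0]).astype(complex)  # start in the ground state
    cdc = c.conj().T @ c
    clicks = np.zeros(n_steps, dtype=bool)
    for k in range(n_steps):
        clicks[k] = rng.random() < np.real(np.trace(cdc @ rho)) * dt
        rho = filter_step(rho, H, c, dt, clicks[k])
    return clicks

def log_likelihood(clicks, H, c, dt, lam=1.0):
    """Integrate dl = (lam - <c†c>) dt + dN log(<c†c>/lam), Eq. (JumpLikelihood)."""
    rho = np.diag([0.0, 1.0]).astype(complex)
    cdc = c.conj().T @ c
    l = 0.0
    for click in clicks:
        rate = np.real(np.trace(cdc @ rho))
        l += (lam - rate) * dt + (np.log(rate / lam) if click else 0.0)
        rho = filter_step(rho, H, c, dt, click)
    return l

rng = np.random.default_rng(1)
gamma, Omega, Delta = 0.55, 1.3, 1.43                # "true" parameters
c = np.sqrt(gamma) * sm
dt, n_steps = 2e-3, 10_000                           # record of length T = 20
clicks = simulate_record(0.5 * Omega * sx + 0.5 * Delta * sz, c, n_steps, dt, rng)

# Log-likelihood of the same record for candidate detunings on a grid.
grid = np.linspace(0.0, 3.0, 21)
l_grid = np.array([log_likelihood(clicks, 0.5 * Omega * sx + 0.5 * D * sz, c, dt)
                   for D in grid])
```

The $\lambda$-dependent terms are independent of the candidate parameters, so they drop out of likelihood ratios; only the no-click drift and the click-time rates discriminate between candidate detunings.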
The importance of using the whole signal, and not only the mean photodetection rate, is easily understood by the observation that following each quantum jump, the atomic density matrix describes a transient damped Rabi oscillation, and the temporal probability distribution for the subsequent jump event is periodically modulated. Since the period of the transient modulation depends explicitly on $\Omega$ and $\Delta$, the actual occurrence of the next jump strongly favors (disfavors) certain values of $\Delta$ and causes the conditional increase (decrease) in the probability density at those values. With a single unknown parameter, it is possible to compute the likelihood function on a fine grid, but if we pass to the larger parameter space of more unknown variables, we must have recourse to more advanced search methods. In Fig. \[fig:TwoLevelMCMC\], we show the results of running the Markov chain Monte Carlo algorithm on the trajectory in Fig. \[fig:RotspinExampleTrajectory\] with all three parameters $\Omega,\ \Delta,\ \gamma$ treated as unknown. We assume normally distributed priors, shown with the dashed lines in the left panels of Fig. \[fig:TwoLevelMCMC\], and the histograms show the values for the three parameters sampled by the Markov chain. Since the trajectory is quite short, there is not sufficient information to perfectly infer the values of the parameters, and the joint densities of pairs of variables in the right panels indicate that two islands of likely values of the set of parameters are not resolved by the measurements. We have compared the distribution of time differences between click events in the rather short detection record, shown in Fig. \[fig:RotspinExampleTrajectory\], with the expected transient Rabi oscillation dynamics, and we find that they are, indeed, compatible with the different values for the pair of parameters $\Omega,\ \Delta$, occurring with comparable probabilities in the upper right panel in Fig. \[fig:TwoLevelMCMC\].
With a few more “lucky clicks”, however, the distribution will favor one choice, and due to the correlations between our estimates for all three parameters, they may then rather rapidly all converge to the correct values. We have also calculated the Fisher information matrix for a photon counting experiment by applying the simulation methods described above, and we obtain the results shown in Fig. \[fig:SpinHalfFisher\]. The Fisher information matrix was calculated by simulating the stochastic master equation and the associated equations for the $\rho^i_t$ for different choices of the parameters $(\Omega, \Delta) \in [-3/2, 3/2] \times [-3/2, 3/2]$, while the decay rate was assumed to be known and equal to $\gamma = 0.55$ in our dimensionless units and the initial atomic state was unexcited. We recall that the Fisher information, evaluated at the estimated values $\Omega$, $\Delta$, gives the uncertainties of these two quantities as well as their covariance. It is not surprising that the sensitivity of the photon detection method depends on the actual values of the parameters. The fact that $\Omega$ and $\Delta$ enter the problem as coefficients of non-commuting spin components in the atomic Hamiltonian suggests that spin uncertainty relations may result in limitations on their joint determination, see also [@Petersen2005]. Such a fundamental limitation may be reflected by the apparent anti-correlation of the occurrence of large and small values of the Fisher information matrix elements $I_{\Omega,\Omega}$ and $I_{\Delta,\Delta}$ in Fig. \[fig:SpinHalfFisher\]. The relative entropy between the signal probability $p$ and a Poisson reference distribution $p_0$, $S(p|p_0) = \operatorname{\mathbb{E}}[\log L_t]$, i.e., the $p$-expectation value of $\log L_t$, is shown in Fig. \[fig:SpinHalfFisher\](d).
The reference distribution $p_0$ has been chosen as a Poisson process with a rate set by the stationary emission rate $\lambda_{st} = \Omega^2 \gamma / (\gamma^2 + 4\Delta^2 + 2 \Omega^2 )$. The relative entropy is close to zero in large regions of the $\Omega$, $\Delta$ parameter space, indicating that the emission process is not very different from a Poisson process. In the regions with $|\Omega|\geq 0.5$, $\Delta \approx 0$, the dynamics deviate significantly from a Poisson process due to the Rabi oscillations in the photon waiting-time distribution, and the ensemble of trajectories has a higher relative entropy. ![(Color online) Fisher information matrix components for photodetection of a decaying two-level atom with decay rate $\gamma = 0.55$, initially in the unexcited state, up to $T = 40$. The two upper panels show the diagonal elements $I_{\Omega,\Omega}$ and $I_{\Delta,\Delta}$ and the lower left panel shows the off-diagonal element $I_{\Omega,\Delta}$ of the Fisher information matrix. The lower right panel shows the relative entropy between the signal probability distribution $p$ and $p_0$, where $p_0$ is a Poisson process with the intensity of the stationary emission rate for the two-level atom $\lambda_{st} = \Omega^2 \gamma / (\gamma^2 + 4\Delta^2 + 2 \Omega^2 )$, and $\gamma = 0.55$, see text. []{data-label="fig:SpinHalfFisher"}](SpinHalfFisher.pdf){width="\columnwidth"} Bi-modal two-level atom ----------------------- ![ (Color online) Simulated signal from a bimodal two-level atom, undergoing jumps and coherent evolution with two alternating sets of parameters, $a$ and $b$. The solid black curve is the (binned) observed signal while the red dashed curve shows the mean expected counts for the atom subject to the current set of parameters. The values used for this trajectory are $\Omega_a = 1.1$, $\Delta_a = 1.3$, $\gamma_a = 1.6$, $\Omega_b = 2.2$, $\Delta_b = 0.2$, $\gamma_b = 2.4$ and transition rates $W(a \to b) = 0.03$ and $W(b \to a) = 0.08$.
[]{data-label="fig:TwoLevel-DoubleWellSignal"}](DoubleWellSignal.pdf) ![ (Color online) Marginal distributions for the eight unknown parameters in our bimodal two-level atomic system. All prior distributions were taken to be uniform on the shown intervals, as indicated by the black dashed lines. The estimation was based on the actual click events of the trajectory, partly shown in Figure \[fig:TwoLevel-DoubleWellSignal\]. []{data-label="fig:TwoLevel-DoubleWellMCMC"}](DoubleWellMCMC.pdf) Imagine now a situation where the two-level atom is not subject to dynamics with a fixed set of unknown parameters, but may jump randomly between two fixed sets of values. Such jumps may occur due to changes in a binary variable in the surrounding environment, e.g., the quantum states $|a\rangle$ and $|b\rangle$ of a nearby atom, spin or mesoscopic qubit degree of freedom, or due to the atom moving spatially between two different positions in a laser field configuration. We will assume these state changes are purely classical, i.e. we neglect all coherences between the configurations or positions $|a\rangle$ and $|b\rangle$, and we assume that the Rabi frequency, the detuning and the decay rate of the two-level atom all have different values for the two states. We describe the system using a conditional master equation where we include the environmental states $|a\rangle$ and $|b\rangle$ in a block-diagonal density matrix, $\rho = \rho_a {\otimes}\ket a \bra a + \rho_b {\otimes}\ket b\bra b$, where $\rho_a$ ($\rho_b$) is the density matrix for the atom associated with the environmental state $a$ ($b$).
The system Hamiltonian is $H = \left( (\Omega_a/2) \sigma^x + (\Delta_a/2) \sigma^z\right) {\otimes}\ket a \bra a + \left( (\Omega_b/2) \sigma^x + (\Delta_b/2) \sigma^z\right) {\otimes}\ket b \bra b$ and the effective photo-detection jump operator is $$\begin{aligned} c = \sigma^- {\otimes}(\sqrt{\gamma_a} \ket a \bra a + \sqrt{\gamma_b} \ket b \bra b).\end{aligned}$$ The transitions between the two configurations are described by incoherent jumping rates $W(a \to b)$ and $W(b \to a)$ and corresponding jump operators $J_{a\to b} = \sqrt{W(a\to b)} {\mathds{1}}{\otimes}\ket b \bra a$ and $J_{b\to a} = \sqrt{W(b\to a)} {\mathds{1}}{\otimes}\ket a \bra b$. The system is thus equivalent to an enlarged quantum system, and it is fully described by the formalism outlined above. We have used the parameters $\Omega_a = 1.1$, $\Delta_a = 1.3$, $\gamma_a = 1.6$, $\Omega_b = 2.2$, $\Delta_b = 0.2$, $\gamma_b = 2.4$ and (slow) transition rates $W(a \to b) = 0.03$ and $W(b \to a) = 0.08$ to simulate a typical detection record for the system. In Figure \[fig:TwoLevel-DoubleWellSignal\], the black solid line shows the time-binned observed signal for this record as a function of time. As the changes between the two sets of parameters occur at low rates, the photon counting permits efficient Bayesian determination of the classical states $a$ and $b$ along the same lines as in [@Reick2010]. The red curve shows the mean photon scattering rate evaluated for the current estimate of which set of parameters applies. Treating all rates and coupling strengths as unknown, the large number of unknown parameters makes a straightforward Bayesian estimation of their values very complicated. In Fig. \[fig:TwoLevel-DoubleWellMCMC\] we show instead the outcome of the Markov Chain Monte Carlo sampling of the eight unknown parameters over the same measurement sequence as in Fig. \[fig:TwoLevel-DoubleWellSignal\].
All values were assigned uniform prior probability distributions on the intervals shown (dashed lines in the figures), and the histograms show the concentration of the sampled values on the actual, correct parameters. Discussion and outlook ====================== In this paper we have presented a general method for inferring the values of parameters that govern the time-evolution of continuously monitored quantum systems. The systems are described by stochastic master equations, and we have shown that the trace of the un-normalized density matrix can be interpreted and applied as a likelihood function in standard statistical methods for parameter inference. Explicit differential equations for the likelihood function in terms of the normalized density matrix are exemplified for the case of photon counting (\[eq:JumpLikelihood\]) and homodyne photodetection (\[eq:DiffusionLikelihood\]). The differential equations for the likelihood allow us to use numerically stable and efficient stochastic master equation simulations as input to a variety of standard statistical estimation algorithms, e.g. Markov Chain Monte Carlo and direct maximum likelihood estimation. Our identification of the conditioned density matrix dynamics with the likelihood function, in addition, leads to an efficient method (\[eq:FisherEqJump\]), (\[eq:FisherEqDiffusion\]) to simulate the Fisher information associated with any particular measurement scheme, and thus to evaluate the confidence of parameter estimation by continuous measurements. We presented our formalism for the case of photodetection, and in our examples we assumed that all emitted radiation is detected.
If there are unobserved decoherence or loss processes and, e.g., loss of the radiation signal before the detection, averaging over these processes simply contributes further (deterministic) dissipation terms of the Lindblad form $\mathcal{D}[J](\rho) = -{\left\{ J^\dagger J, \rho \right\}}/2 + J \rho J^\dagger$ in the master equations (1,2). The likelihood equations (\[eq:JumpLikelihood\],\[eq:DiffusionLikelihood\]), however, remain unchanged. A technical element in our formulation of the theory involves the introduction of a reference probability $p_0$, imposing a degree of freedom in the normalization of the effect operators and the density matrix conditioned on the measurement signal. The introduction of the reference probability $p_0$ is mathematically equivalent to converting the set of measurement outcomes into a classical probability space with a reference probability measure $P_0$ and the quantum measurement effect operators induce a probability measure on $M$ via the relation $P(A) = \int_A dP_0(m) \operatorname{Tr}(\tilde\rho|m)$ for subsets $A \subset M$. In mathematical terms the likelihood function $\operatorname{Tr}(\tilde\rho|m)$, discussed in section \[sec:BayesianInference\], can then be identified as the Radon-Nikodym derivative $dP/dP_0 (m)$. This points to a further generalization by transforming the probability measure $P$ to some other measure $\tilde P$ such that $dP/d\tilde P = Z_t$, where $Z_t$ is a martingale, and where trajectories generated with the transformed probability measure $\tilde P$ should be weighted by $Z_t$ to obtain ensemble averages. This may provide a useful technique to control the variance in numerically calculated ensemble averages and to simulate the master equation and the Fisher information matrix more efficiently. 
The relative entropy $S(P|P_0)$ between the probability measures $P$ and $P_0$ is the $P$-expectation value of $\log L_t$, $S(P|P_0) = \operatorname{\mathbb{E}}[\log L_t] = \int dP \log(dP/dP_0)$, and we note that the Fisher information is nothing but the leading quadratic term of the relative entropy between $P_\theta$ and $P_{\theta'}$ for infinitesimally close $\theta$ and $\theta'$. Apart from their importance in parameter estimation, emphasized in this manuscript, the entropy $S(P|P_0)$ and the Fisher information $I_{ij}$ provide means to characterize the stochastic dynamics of quantum trajectories in a manner similar to the use of entanglement susceptibility to characterize the correlations in quantum many-body physics [@zanardi_mixed-state_2007; @gu_fidelity_2008; @you_fidelity_2007; @zanardi_information-theoretic_2007].
--- abstract: | The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed \[V.V. Aristov, Kluwer Academic Publishers, 2001\] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by a DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make it possible to derive very accurate reference solutions for this test case. Moreover, this work aims to distribute an open-source program (called [[HOMISBOLTZ]{}]{}), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed to be minimal, not only with regard to the reduced number of lines (fewer than 1,000), but also with regard to the coding style (as simple as possible).
Boltzmann equation; homogeneous; isotropic; deterministic method address: | Department of Energetics, Politecnico di Torino,\ Corso Duca degli Abruzzi 24, Torino, Italy\ email: pietro.asinari@polito.it, web: http://staff.polito.it/pietro.asinari author: - Pietro Asinari title: | Nonlinear Boltzmann equation for\ the homogeneous isotropic case:\ Minimal deterministic Matlab program --- [**PROGRAM SUMMARY**]{} [*Manuscript Title:*]{} Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program\ [*Authors:*]{} Pietro Asinari\ [*Program Title:*]{} [[HOMISBOLTZ]{}]{}\ [*Journal Reference:*]{}\ [*Catalogue identifier:*]{}\ [*Licensing provisions:*]{} The program is free software, which can be redistributed and/or modified under the terms of the GNU General Public License.\ [*Programming language:*]{} Tested with Matlab version $\geq$ 6.5. However, in principle, any recent version of Matlab or Octave should work.\ [*Computer:*]{} All supporting Matlab or Octave\ [*Operating system:*]{} All supporting Matlab or Octave\ [*RAM:*]{} 300 MBytes\ [*Number of processors used:*]{}\ [*Supplementary material:*]{}\ [*Keywords:*]{} Boltzmann equation; homogeneous; isotropic; deterministic method\ [*Classification:*]{} 23 Statistical Physics and Thermodynamics\ [*External routines/libraries:*]{}\ [*Subprograms used:*]{}\ [*Nature of problem:*]{}\ The problem consists of integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry, by a deterministic direct method.
Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics.\ [*Solution method:*]{}\ The solution is based on the method proposed by Aristov \[1\], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium). Both these corrections make it possible to derive very accurate reference solutions for this test case.\ [*Restrictions:*]{}\ The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements over Aristov's method \[1\]. The initial conditions are assumed to be parameterized by a fixed analytical expression, but this can be easily modified.\ [*Unusual features:*]{}\ There are no unusual features.\ [*Additional comments:*]{}\ There are no additional comments.\ [*Running time:*]{}\ From minutes to hours (depending on the adopted discretization of the kinetic energy space).
For example, on a 64 bit workstation with an Intel Core i7-820Q Quad Core CPU at 1.73 GHz and 8 MBytes of RAM, the provided test run (with the corresponding binary data file storing the pre-computed relaxation rates) requires 154 seconds.\ [*References:*]{} V.V. Aristov, [*Direct Methods for Solving the Boltzmann Equation and Study of Nonequilibrium Flows*]{}, Kluwer Academic Publishers, 2001. Introduction ============ In a dilute gas, the Boltzmann transport equation [@boltzmann1872; @cercignani1987] describes the time evolution of the single-particle distribution function, which provides a statistical description of the positions and velocities of the gas molecules. From the theoretical point of view, it is one of the most important equations of non-equilibrium statistical mechanics and one of the most powerful paradigms for explaining transport phenomena in fluids. Moreover, from the engineering point of view, since the early fifties it has received a lot of attention due to aerodynamic requirements for high altitude vehicles and vacuum technology requirements [@cercignani1987]. Nowadays, the set of applications has been widened to include dilute gas flows in micro-electro-mechanical systems (MEMS) [@cercignani2006]. These devices are increasingly applied to a great variety of industrial and medical problems. In these problems, given the small dimensions of the devices, it is necessary to use kinetic theory, instead of the usual fluid dynamics based on the Navier-Stokes equations, to describe the motion of dilute gases in the small gaps of these devices. Because of the intrinsic complexity of this equation (the single-particle distribution function is defined in the phase space and the time evolution is ruled by a five-fold collisional integral), solving the nonlinear Boltzmann equation is extremely demanding.
Hence, from the very beginning, there was an attempt to formulate simpler models which preserve the main features of the dynamic approach to thermodynamic equilibrium. As pointed out in Cercignani's biographical work [@cercignani1998], Boltzmann himself started in his fundamental paper [@boltzmann1872] by considering first the case in which the distribution function does not depend on space ([*homogeneous*]{} case), but only on time and the magnitude of the molecular velocity ([*isotropic*]{} collisional integral). The same homogeneous isotropic case is considered by Truesdell [@truesdell1966] in his famous lectures on natural philosophy, as the starting point for investigating the role of time in classical thermodynamic systems (which are assumed homogeneous by definition). In fact, despite the isotropy of the collisional integral, the actual time evolutions of the distribution function (far from equilibrium) may be very different, depending on the initial conditions. Concerning gas dynamics, focusing on the homogeneous isotropic case may seem somewhat limiting. For example, an immediate consequence of the isotropic symmetry is that all the odd statistical moments are null by definition and hence (meaningful) moment equations can be derived for even moments only. However, it is well known that the decomposition between even moments (pressure, energy,...) and odd moments (momentum, thermal flux,...) is a key concept in deriving the fluid dynamic description from the full Boltzmann equation in the case of a vanishing Knudsen number [@sone2002]. In particular, in recovering the incompressible limit of the Navier-Stokes equations, the Mach number is assumed to be as small as the Knudsen number (diffusive scaling, see [@sone2002]) and hence the kinetic description collapses into a small neighborhood of the statistical core defined by the even moments only.
This means that the distribution function can be expanded around an equilibrium distribution function, which depends on the even moments only. Hence, properly describing the manifold defined by the even moments is the first basic step in describing the dynamics due to small deviations from the local equilibrium. This is the key idea behind the derivation of the so-called Lattice Boltzmann Method (LBM) [@succi2001], and it is the reason why the even moments are sometimes called the backbone moments of the LBM description [@karlin2010]. A similar idea holds for the so-called quadrature method of moments (QMOM), which is a generic solution method for population balance models [@marchisio2005]. The common feature between LBM and QMOM is that both solve moment systems of equations, which are based on a contraction of the statistical description given by the Boltzmann equation. The systematic derivation of moment equations from the Boltzmann equation is beyond the purposes of the present work: a detailed review can be found in Ref. [@struchtrup2005]. The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics [@chatterjee2005], a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. The founding idea, dating back to the works of Mandelbrot [@mandelbrot1960], is that the laws of statistical mechanics govern the behavior of a huge number of interacting individuals just as well as that of colliding particles in a gas container. The classical theory for homogeneous gases is easily adapted to the new economic framework: molecules and their velocities are replaced by agents and their wealth, and instead of binary collisions, one considers trades between two individuals. The goal is to recover the macroscopic distributions of wealth by tuning the microscopic models for the binary interaction among the agents.
The parameters of the microscopic model can be either constant or random quantities. A recent review on this topic can be found in Ref. [@during2008] and the references therein. Another recent application of the homogeneous isotropic Boltzmann equation is opinion formation modeling in quantitative sociology, also called sociodynamics or sociophysics [@weidlich2000]. Quantitative sociology has the ambitious aim of providing a general strategy, that is, a frame of theoretical concepts, for designing mathematical models for the quantitative description of a rather broad class of collective dynamical phenomena within human society, in particular opinion formation. The modeling of opinion dynamics has been treated in numerous works, because of its applications in politics, e.g. predicting the behavior of voters during an election process or public opinion tendencies [@weidlich2000]. Classical kinetic models based on homogeneous isotropic Boltzmann-like equations can be derived by prescribing the collision kernel for the microscopic particle interactions, namely the sociophysical model which prescribes the exchange rules for opinion in a binary interaction [@helbing1995]. Since the Boltzmann equation was the starting point for constructing numerous kinetic equations in many fields of physics, many numerical techniques have been proposed to solve it. A complete review of these efforts is clearly beyond the purposes of the present work: a thorough discussion can be found in Ref. [@aristov2001] and the references therein. Despite this wide scenario of numerical methods and the constant increase in computational power, solving the Boltzmann equation in practical applications is still challenging nowadays. In particular, the most demanding step is the evaluation of the collisional integral (which is in general five-fold in three dimensions).
It is possible to distinguish between (a) [*stochastic*]{} and (b) [*deterministic*]{} methods for evaluating the collisional integral. In stochastic methods, like the Monte Carlo method, one uses a combination of approximations based on randomly-generated variables and a fixed (molecular) velocity grid. On the other hand, in deterministic (or direct) methods, one uses only regular lattices in velocity space, usually at the cost of a larger computational effort, in order to achieve better accuracy (free of stochastic noise) [@aristov2001]. The goal of this work is twofold. - First of all, this work aims to improve the deterministic numerical method proposed by Aristov [@aristov2001] by (i) reformulating the original problem in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation; momentum is trivially conserved because of the isotropic symmetry) and (ii) improving the computation of the relaxation rates (making the method particularly suitable for dealing with the late dynamics of the relaxation towards equilibrium). - Secondly, this work aims to distribute an open-source program (as minimal as possible) for solving the homogeneous isotropic Boltzmann equation by a deterministic method, which can be easily understood and modified for dealing with different applications (thermodynamics, econophysics and sociodynamics), in order to derive reliable reference solutions (with an accuracy which cannot be easily obtained by stochastic methods). The paper is organized as follows. First, in Section \[background\], some theoretical background is provided about the homogeneous isotropic Boltzmann equation: in particular, the derivation of the homogeneous isotropic case, the energy formulation, the numerical method, the proposed correction for the relaxation rates and the adopted quadrature formulas are discussed.
In Sections \[overview\] and \[components\], an overview of the program structure and a description of the essential components are provided. Finally, in Sections \[installation\] and \[test\], instructions about installation and how to run a test case are provided. \[background\]Theoretical background ==================================== Boltzmann equation for Maxwell molecules ---------------------------------------- Let us consider a dilute gas made of molecules. Let us introduce the probability density function, or distribution function, $f(t, \bm{x}, \bm{\xi})$ for the time $t\in\mathbb{R}^+$, for the position $\bm{x}\in\mathbb{R}^3$, where $\mathbb{R}^3$ is the physical space, and for the molecular velocity $\bm{\xi}\in\mathbb{R}_{\bm{\xi}}^3$, where $\mathbb{R}_{\bm{\xi}}^3$ is the velocity space ($\mathbb{R}^3\times\mathbb{R}_{\bm{\xi}}^3$ is the phase space). Hence the distribution function is defined on the domain $\{t>0,\,\bm{x}\in\mathbb{R}^3,\,\bm{\xi}\in\mathbb{R}_{\bm{\xi}}^3\}$. The distribution function allows one to compute the infinitesimal probability of finding some molecules in the time interval between $t$ and $t+dt$, in the infinitesimal volume $d\bm{x}\subset\mathbb{R}^3$ around the point $\bm{x}$ and with a velocity in the infinitesimal volume $d\bm{\xi}\subset\mathbb{R}_{\bm{\xi}}^3$ around the velocity $\bm{\xi}$, namely $f(t, \bm{x}, \bm{\xi})dt d\bm{x} d\bm{\xi}$.
According to the kinetic theory of gases, the probability density function of a dilute gas with elastic binary interactions satisfies the Boltzmann transport equation [@boltzmann1872; @cercignani1987], namely $$\label{BE} \frac{\partial f}{\partial t} +\bm{\xi}\cdot\nabla_{\bm{x}} f=Q(f,f), $$ where the collisional integral is given by $$\label{coll} Q(f,f)\dot{=}\int_{\mathbb{R}_{\bm{\xi}}^3}\int_{\bm{g}\cdot\bm{n}<0} B(\bm{g},\bm{n})\left[f(\bm{\xi}') f(\bm{\xi_*}')-f(\bm{\xi}) f(\bm{\xi_*})\right] d\bm{n}\,d\bm{\xi_*},$$ $\bm{\xi_*}\in\mathbb{R}_{\bm{\xi}}^3$ is the generic field particle (integration dummy variable) and $d\bm{\xi_*}$ is its infinitesimal volume in the velocity space; $\bm{\xi}',\,\bm{\xi_*}'\in\mathbb{R}_{\bm{\xi}}^3$ are the post–collision test and field particle velocities respectively; $\bm{n}\in\mathbb{R}^3$ is the unit vector along the direction connecting the centers of the two particles during the instantaneous collision, pointing from particle $\bm{\xi}$ to particle $\bm{\xi}_*$, while $d\bm{n}$ is the infinitesimal solid angle; $\bm{g}=\bm{\xi}_*-\bm{\xi}$ is the relative velocity (of the field particle with respect to the test particle); finally, $B(\bm{g},\bm{n})$ is a volumetric particle flux or collision kernel. In the following, we will discuss only the case of Maxwell molecules [@cercignani1987], which greatly simplifies the expression of the collision kernel, namely $B(|\bm{g}\cdot\bm{n}|)$. Let us assume the following expression $$\label{generic} B(|\bm{g}\cdot\bm{n}|)=a^2\,c_s\,\left(\frac{\left|\,\bm{g}\cdot\bm{n}\,\right|}{c_s}\right)^\theta,$$ where $a$ is the particle radius, $c_s$ is a characteristic mean particle velocity (i.e. the statistical mean of the particle velocity deviations, which is related to the macroscopic sound speed) and $\theta$ is a tunable parameter (natural number, i.e. $\theta\in\mathbb{N}$).
The case $\theta=1$ recovers the hard spheres model (popular in fluid dynamics), while the case $\theta=0$ recovers the constant kernel model (which yields a constant collision frequency, as commonly done in econophysics and sociophysics). In Eq. (\[coll\]), the post–collision test and field particle velocities $\bm{\xi}'$ and $\bm{\xi_*}'$ are given by $$\begin{aligned} \label{post1} \bm{\xi}'&=&\bm{\xi}+\left(\bm{g}\cdot\bm{n}\right)\,\bm{n},\\ \label{post2} \bm{\xi_*}'&=&\bm{\xi_*}-\left(\bm{g}\cdot\bm{n}\right)\,\bm{n},\end{aligned}$$ which means that there are many possible outcomes $(\bm{\xi}',\bm{\xi_*}')$ from a given pair of incoming (test and field) particle velocities $(\bm{\xi},\bm{\xi_*})$, depending on the impact direction $\bm{n}$ obtained by connecting the particle centers during the collision. In other words, the generic microscopic collision is defined once two additional degrees of freedom are specified ($\bm{n}$ is a unit vector). Homogeneous isotropic case -------------------------- Let us consider first the homogeneous case (in space). Consequently the probability density function becomes $f(t, \bm{\xi})$ and the homogeneous Boltzmann equation becomes $$\label{HBE} \frac{\partial f}{\partial t}=Q(f,f), $$ where the collisional integral is rewritten equivalently as $$\label{coll2} Q(f,f)=\int_{-\infty}^{+\infty}\int_0^{4\pi} {S}(q)\,B(q)\left[f(\bm{\xi}') f(\bm{\xi_*}')-f(\bm{\xi}) f(\bm{\xi_*})\right] d\bm{n}\,d\bm{\xi_*},$$ where $q=\bm{g}\cdot\bm{n}$, $B(q)=B(|q|)$ for simplicity and $S(q)$ is an auxiliary function introduced for simplifying the integration domain (at the price of making the integrand more complex), namely $$\label{calB} {S}(q)= \left\{ \begin{array}{rr} 1, &\qquad q< 0,\\ 0, &\qquad q\geq 0. \end{array} \right.$$ Now, let us introduce also the isotropic symmetry of the collision kernel.
Because of this symmetry, the probability density function is further simplified to $f(t, \xi)$, for the time $t\in\mathbb{R}^+$ and for the magnitude of the molecular velocity $\xi=\left\|\bm{\xi}\right\|\in\mathbb{R}_\xi^+$. In this way, the distribution function allows one to compute the infinitesimal probability of finding some molecules in the time interval between $t$ and $t+dt$ with a velocity magnitude between $\xi$ and $\xi+d\xi$, namely $f(t,\xi)dt d\xi$. Clearly this probability density function can be reformulated in terms of the particle kinetic energy $E=\xi^2/2$, namely $f(t, E)$. Let us introduce the unit vector $\bm{n}_*$ along the direction $\bm{\xi}_*$ and the unit vector $\bm{n}_\odot$ along the direction $\bm{\xi}$, namely $$\label{nstar} \bm{n}_*=\frac{\bm{\xi}_*}{\left\|\bm{\xi}_*\right\|},\qquad \bm{n}_\odot=\frac{\bm{\xi}}{\left\|\bm{\xi}\right\|}.$$ By means of the unit vector $\bm{n}_*$, the volume element $d\bm{\xi_*}$ can be expressed as $\xi_*^2 d\bm{n_*}d{\xi_*}$ and consequently $$\label{coll3} Q(f,f)=\int_{0}^{+\infty}\int_0^{4\pi}\int_0^{4\pi} \left(f' f_*'-f f_*\right)\,{S}(q)\,B(q)\,\xi_*^2\, d\bm{n}\,d\bm{n_*}\,d{\xi_*}.$$ It is clear that $q$ is the only parameter potentially dependent on the directions $\bm{n}$ and $\bm{n}_*$ and, in general, $$\label{q} q=\bm{\xi}_*\cdot\bm{n}-\bm{\xi}\cdot\bm{n}={\xi}_*\cos{(\alpha_y)}-{\xi}\cos{(\alpha_x)},$$ where $\alpha_x$ is the angle between $\bm{\xi}$ and $\bm{n}$, while $\alpha_y$ is the angle between $\bm{\xi}_*$ and $\bm{n}$. Let us introduce the auxiliary variables $x=\cos{(\alpha_x)}$ and $y=\cos{(\alpha_y)}$, namely $q={\xi}_*\,y-{\xi}\,x$. Let us express the surface elements defined by $d\bm{n}$ and $d\bm{n_*}$ in Eq.
(\[coll3\]) by using $\bm{\xi}$ as polar axis for $d\bm{n}$ and $\bm{n}$ as polar axis for $d\bm{n_*}$, namely $d\bm{n}=\sin{(\alpha_x)}\,d\alpha_x\,d\beta_x$ and $d\bm{n_*}=\sin{(\alpha_y)}\,d\alpha_y\,d\beta_y$ respectively, where $\beta_x$ and $\beta_y$ are the corresponding azimuthal angles. This yields $$\label{coll5} Q(f,f)=\int_{0}^{+\infty}\int_0^{2\pi}\int_{-1}^{+1}\int_0^{2\pi}\int_{-1}^{+1} \left(f' f_*'-f f_*\right)\,{S}(q)\,B(q)\,\xi_*^2\, dx\,d\beta_x\, dy\,d\beta_y\, d{\xi_*}.$$ Taking the square of Eqs. (\[post1\], \[post2\]) and recalling that $q=\bm{g}\cdot\bm{n}$ yields $$\begin{aligned} \label{post1s} ({\xi}')^2&=&{\xi}^2+q^2+2\,q\,{\xi}\,x= \xi^2\,(1-x^2)+{\xi}_*^2\,y^2,\\ \label{post2s} ({\xi_*}')^2&=&{\xi_*}^2+q^2-2\,q\,\xi_*\,y= \xi_*^2\,(1-y^2)+{\xi}^2\,x^2.\end{aligned}$$ Finally, since $\left(f' f_*'-f f_*\right)$ does not depend on $\beta_x$ and $\beta_y$, Eq. (\[coll5\]) becomes $$\label{coll6} Q(f,f)=N(f,f)-\nu(f)\,f,$$ where $$\label{Nold} N(f,f)=4\,\pi^2\,\int_{0}^{+\infty}\xi_*^2\,\int_{-1}^{+1}\int_{-1}^{+1} f(\xi') f(\xi_*')\,{S}(q)\,B(q)\, dx\,dy\,d{\xi_*},$$ $$\label{nuold} \nu(f)=4\,\pi^2\,\int_{0}^{+\infty}f(\xi_*)\,\xi_*^2\,\int_{-1}^{+1}\int_{-1}^{+1} {S}(q)\,B(q)\, dx\,dy\,d{\xi_*}.$$ The variables $x$ and $y$ are called collisional parameters (integration dummy variables). Let us denote by $\Omega$ the domain of integration of the collisional parameters, namely $\Omega\dot{=}[-1,1]\times[-1,1]$. It is possible to divide $\Omega$ into two subregions, namely $\Omega_{q\geq0}$ and $\Omega_{q<0}$, defined by $q\geq0$ and $q<0$ respectively. Clearly $\Omega_{q\geq0}\cup\Omega_{q<0}=\Omega$ and they are separated by the condition $q=0$, which is the line $y=\xi/\xi_*\,x$. For any generic point $P\dot{=}(x_P,y_P)\in\Omega_{q<0}$, it is possible to define another point $P_*$ symmetric with respect to the origin, namely $P_*=(-x_P,-y_P)\in\Omega_{q\geq0}$.
Taking into account that $B(q)=B(x,y)$, it is easy to prove that $B(P)=B(P_*)$ and consequently $$\label{symmetry1} \int_{-1}^{+1}\int_{-1}^{+1}{S}(q)\,B(q)\,dx\,dy= \frac{1}{2}\,\int_{-1}^{+1}\int_{-1}^{+1}B(q)\,dx\,dy.$$ Recalling Eqs. (\[post1s\],\[post2s\]), namely $$\begin{aligned} \label{HIpost1s2} \xi'=\xi'(\xi,\xi_*,x,y)&=&\sqrt{\xi^2\,(1-x^2)+{\xi}_*^2\,y^2},\\ \label{HIpost2s2} \xi_*'=\xi_*'(\xi,\xi_*,x,y)&=&\sqrt{\xi_*^2\,(1-y^2)+{\xi}^2\,x^2},\end{aligned}$$ it is easy to prove that $\xi'(P)=\xi'(P_*)$ and $\xi_*'(P)=\xi_*'(P_*)$ and consequently $$\label{symmetry2} \int_{-1}^{+1}\int_{-1}^{+1} f(\xi') f(\xi_*')\,{S}(q)\,B(q)\, dx\,dy= \frac{1}{2}\,\int_{-1}^{+1}\int_{-1}^{+1} f(\xi') f(\xi_*')\,B(q)\, dx\,dy.$$ Substituting Eq. (\[symmetry2\]) into Eq. (\[Nold\]) and Eq. (\[symmetry1\]) into Eq. (\[nuold\]) yields $$\label{N} N(f,f)=2\,\pi^2\,a^2c_s^{1-\theta}\,\int_{0}^{+\infty}\xi_*^2\,\int_{-1}^{+1}\int_{-1}^{+1} f(\xi') f(\xi_*')\,\big|{\xi}_*\,y-{\xi}\,x\big|^\theta\, dx\,dy\,d{\xi_*},$$ $$\label{nu} \nu(f)=2\,\pi^2\,a^2c_s^{1-\theta}\,\int_{0}^{+\infty}f(\xi_*)\,\xi_*^2\,\int_{-1}^{+1}\int_{-1}^{+1} \big|{\xi}_*\,y-{\xi}\,x\big|^\theta\, dx\,dy\,d{\xi_*}.$$ Energy formulation ------------------ Let us now introduce a change of variables in the previous expressions, namely $E=\xi^2/2$, $E_*=\xi_*^2/2$, $E'=(\xi')^2/2$ and $E_*'=(\xi_*')^2/2$, which yields $$\label{ener_N} N(f,f) = F c_s^{-\theta} \int_{0}^{+\infty}E_*^{1/2}\,\int_{-1}^{+1}\int_{-1}^{+1} f(E') f(E_*')\,|y\,E_*^{1/2}-x\,E^{1/2}|^\theta\, dx\,dy\,d{E_*},$$ $$\label{ener_nu} \nu(f) = F c_s^{-\theta} \int_{0}^{+\infty}f(E_*)\,E_*^{1/2}\,\int_{-1}^{+1}\int_{-1}^{+1} |y\,E_*^{1/2}-x\,E^{1/2}|^\theta\, dx\,dy\,d{E_*},$$ where $F=2^{(\theta+3)/2} \pi^2 a^2 c_s$ has the dimensions of a volumetric flow rate.
Consequently the collision relations become $$\begin{aligned} \label{ener_post1s} E'&=&E\,(1-x^2)+E_*\,y^2,\\ \label{ener_post2s} E_*'&=&E\,x^2+E_*\,(1-y^2).\end{aligned}$$ Let us verify the existence of collisional invariants [@cercignani1987] for the previous formulation. Let us introduce the generic macroscopic quantity $\Phi$, namely $$\label{moment} \Phi(t)= 4\pi\sqrt{2}\int_{0}^{+\infty}\phi(E)\,f\,E^{1/2}\,dE,$$ where $\phi(E)$ is a generic function of the particle kinetic energy. The macroscopic dynamics of the quantity $\Phi$ can be computed as $$\label{ener_conserv_Qa} \frac{d\Phi}{dt}=\int_{\mathbb{R}_{\bm{\xi}}^3}Q(f,f)\,\phi(\xi)\,d\bm{\xi} = 4\pi\sqrt{2}\int_{0}^{+\infty}Q(f,f)\,\phi(E)\,E^{1/2}\,dE,$$ or equivalently $$\label{moment_eq1} \frac{d\Phi}{dt}=\left\langle \phi(E),f(E') f(E_*')-f(E) f(E_*) \right\rangle,$$ where $$\begin{aligned} \label{def_inner} \frac{\left\langle \phi,\varphi \right\rangle}{4\pi\sqrt{2}\,F c_s^{-\theta}}= \int_{0}^{+\infty}\int_{0}^{+\infty}\int_{-1}^{+1}\int_{-1}^{+1} |y\,E_*^{1/2}-x\,E^{1/2}|^\theta\, \phi\,\varphi\,(E\,E_*)^{1/2}\,dx\,dy\,d{E_*}dE.\nonumber\end{aligned}$$ Clearly the macroscopic dynamics cannot depend on the arbitrary labeling of the microscopic particles. Hence let us invert $E$ and $E_*$ and, since $x=\cos{(\alpha_x)}=\bm{n}_\odot\cdot\bm{n}$ and $y=\cos{(\alpha_y)}=\bm{n}_*\cdot\bm{n}$ (see Eqs. (\[nstar\])), let us invert the variables $x$ and $y$ as well. Because of these inversions, the following expression holds $$\label{moment_eq2} \frac{d\Phi}{dt}=\left\langle \phi(E_*),f(E') f(E_*')-f(E) f(E_*) \right\rangle.$$ Next, let us invert the pre- and post-collisional velocities.
The collisional parameters expressed by means of the post-collisional velocities become $$\begin{aligned} \label{xpypa} x'=\frac{\bm{\xi}'\cdot\bm{n}}{\left\|\bm{\xi}'\right\|}=y\,\sqrt{\frac{E_*}{E\,(1-x^2)+E_*\,y^2}},\\ \label{xpypb} y'=\frac{\bm{\xi}_*'\cdot\bm{n}}{\left\|\bm{\xi}_*'\right\|}=x\,\sqrt{\frac{E}{E\,x^2+E_*\,(1-y^2)}}.\end{aligned}$$ It follows immediately that $y'\,\sqrt{E_*'}-x'\,\sqrt{E'}=x\,\sqrt{E}-y\,\sqrt{E_*}$, which ensures that the collisional kernel is unchanged. Equations (\[ener\_post1s\], \[ener\_post2s\], \[xpypa\], \[xpypb\]) define the transformation $(E,E_*,x,y)\rightarrow(E',E_*',x',y')$ and they allow one to compute the corresponding Jacobian. Its modulus gives the factor by which the transformation expands or shrinks the infinitesimal volume in the product $\left\langle \phi,\varphi \right\rangle$, namely $$\label{changeofvaria} dx'\,dy'\,d{E_*'}dE'=\sqrt{\frac{E\,E_*}{E'\,E_*'}}\,dxdy\,dE_*dE.$$ Consequently $$\label{moment_eq3} \frac{d\Phi}{dt}=-\left\langle \phi(E'),f(E') f(E_*')-f(E) f(E_*) \right\rangle.$$ By averaging the four equivalent representations obtained from Eqs. (\[moment\_eq1\], \[moment\_eq2\], \[moment\_eq3\]) and from the particle relabeling applied to Eq. (\[moment\_eq3\]), it is easy to prove [@cercignani1987] also for the energy formulation that $$\label{moment_eq4} \frac{d\Phi}{dt}=\frac{1}{4}\left\langle \phi(E)+\phi(E_*)-\phi(E')-\phi(E_*'),f(E') f(E_*')-f(E) f(E_*) \right\rangle.$$ This means that if the quantity $\phi(E)$ is unchanged by the microscopic collision (collisional invariant), then the corresponding macroscopic quantity $\Phi$ is constant in time (conserved quantity). In particular, let us consider the following moments $$\label{momentp} \Phi_p(t)= 4\pi\sqrt{2}\int_{0}^{+\infty}f\,E^{p+1/2}\,dE,$$ which are obtained by taking $\phi(E)=\phi_p=E^p$ into Eq. (\[moment\]). Clearly $\phi_0=1$ and $\phi_1=E$ are both invariant during the generic microscopic collision: hence, the corresponding macroscopic quantities $\Phi_0$ and $\Phi_1$ are conserved, namely $d\Phi_0/dt=0$ and $d\Phi_1/dt=0$.
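These microscopic conservation properties are easy to check numerically. The following minimal Python sketch (illustrative only, and independent of the distributed program) samples random pre-collision states and verifies that the collision rules of Eqs. (\[ener\_post1s\], \[ener\_post2s\]) preserve $\phi_0=1$ and $\phi_1=E$ exactly, and that the transformed parameters of Eqs. (\[xpypa\], \[xpypb\]) leave the kernel argument unchanged:

```python
import random

def collide(E, E_star, x, y):
    # Collision rules in the energy formulation, Eqs. (ener_post1s)-(ener_post2s).
    E_prime = E * (1.0 - x * x) + E_star * y * y
    E_star_prime = E * x * x + E_star * (1.0 - y * y)
    return E_prime, E_star_prime

random.seed(0)
for _ in range(1000):
    E, E_star = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    E_p, E_sp = collide(E, E_star, x, y)
    # phi_1 = E is a collisional invariant: E' + E_*' = E + E_*
    # (phi_0 = 1, i.e. particle number, is trivially invariant).
    assert abs((E_p + E_sp) - (E + E_star)) < 1e-12
    # Transformed collisional parameters, Eqs. (xpypa)-(xpypb).
    x_p = y * (E_star / E_p) ** 0.5
    y_p = x * (E / E_sp) ** 0.5
    # Kernel argument unchanged: y'*sqrt(E_*') - x'*sqrt(E') = x*sqrt(E) - y*sqrt(E_*).
    assert abs((y_p * E_sp ** 0.5 - x_p * E_p ** 0.5)
               - (x * E ** 0.5 - y * E_star ** 0.5)) < 1e-12
```

Since $\phi_0$ and $\phi_1$ are invariant collision by collision, the corresponding macroscopic quantities $\Phi_0$ and $\Phi_1$ are conserved by construction during the relaxation.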
These macroscopic quantities are usually formulated in terms of number density $$\label{number} n=\Phi_0= 4\pi\sqrt{2}\int_{0}^{+\infty}f\,E^{1/2}\,dE,$$ and specific internal energy $$\label{energy} e=\frac{\Phi_1}{\Phi_0}= \frac{4\pi\sqrt{2}}{n}\int_{0}^{+\infty}f\,E^{3/2}\,dE= \frac{\int_{0}^{+\infty}f\,E^{3/2}\,dE}{\int_{0}^{+\infty}f\,E^{1/2}\,dE}.$$ The collisional invariants $\phi_0=1$ and $\phi_1=E$ (and consequently the conserved quantities $n$ and $e$) are also involved in the definition of the local equilibrium, i.e. the distribution function $f_E$ such that $Q(f_E,f_E)=0$. Let us assume $f_E=\exp[-(c_0\phi_0+c_1\phi_1)]$, where $c_0$ and $c_1$ are some proper constants. The collisional operator $Q(f,f)\propto f' f_{*}'-f f_{*}$ is consequently null, namely $$\label{zerocond} Q(f_E,f_E)\propto \exp\left[-c_1(E'+E_*')\right] -\exp\left[-c_1(E+E_*)\right]=0.$$ The constants $c_0$ and $c_1$ can be found by ensuring that Eqs. (\[number\], \[energy\]) are satisfied, namely $$\label{eq_E} f_E=\frac{n}{(2\pi E_B)^{3/2}}\,\exp{\left(-\frac{E}{E_B}\right)},$$ where $E_B=2 e/3$. Recalling that the pressure $P$ is defined as one third of the stress tensor trace [@cercignani1987], it follows that $P=2/3\,n\,e=n\,E_B$. Moreover, recalling the ideal gas law, i.e. $P=n\,k_B\,T$, where $k_B$ is the Boltzmann constant and $T$ is the temperature, it follows that $2e/3=E_B=k_B\,T$. Introducing the specific heat capacity (per molecule) at constant volume $C_v=e/T$, it follows that $C_v=3/2\,k_B$, which is correct for the monatomic gases considered here. Hierarchy of moment equations ----------------------------- Sometimes it is more convenient to compute Eq.
(\[moment\_eq1\]) in a slightly different way, namely $$\label{moment_eq1b} \frac{d\Phi}{dt}=\left\langle \phi(E),f(E') f(E_*')\right\rangle -\left\langle \phi(E),f(E) f(E_*)\right\rangle.$$ It has already been shown (in the previous section) that the product $\left\langle \phi,\varphi \right\rangle$ can be equivalently formulated in terms of the post-collisional velocities $\left\langle \phi,\varphi \right\rangle'$ (since the collisional kernel is invariant and the infinitesimal volume can be transformed by Eq. (\[changeofvaria\])). In particular, once the inverse transformation $(E',E_*',x',y')\rightarrow(E,E_*,x,y)$, namely $$\begin{aligned} \label{inverse_e} E&=&E'\,[1-(x')^2]+E_*'\,(y')^2,\\ \label{inverse_es} E_*&=&E'\,(x')^2+E_*'\,[1-(y')^2],\\ \label{inverse_x} x&=&y'\,\sqrt{\frac{E_*'}{E'\,[1-(x')^2]+E_*'\,(y')^2}},\\ \label{inverse_y} y&=&x'\,\sqrt{\frac{E'}{E'\,(x')^2+E_*'\,[1-(y')^2]}},\end{aligned}$$ is used for evaluating $\phi(E)=\phi(E(E',E_*',x',y'))$, the first term on the right-hand side of Eq. (\[moment\_eq1b\]) can be rewritten as $$\label{term} \left\langle \phi(E),f(E') f(E_*')\right\rangle= \left\langle \phi(E(E',E_*',x',y')),f(E') f(E_*')\right\rangle'.$$ Omitting the prime symbol in the previous expression allows one to reformulate Eq. (\[moment\_eq1b\]) as $$\label{moment_eq1c} \frac{d\Phi}{dt}=\left\langle \phi(E\,(1-x^2)+E_*\,y^2) -\phi(E),f(E) f(E_*)\right\rangle.$$ The previous equation is usually the starting point of the so-called quadrature method of moments (QMOM), which is a generic solution method for population balance models [@marchisio2005]. Numerical integration of energy formulation ------------------------------------------- Let us assume a maximum value for the test particle kinetic energy $E$, namely $E_M$. Let us divide the interval $[0,\,E_M]$ into $M$ equal parts, with length $\Delta E=E_M/M$.
Each cell is identified by index $1\leq i\leq M$, such that $E_i=(i-1/2)\,\Delta E$, and the probability distribution function is discretized accordingly, namely $f_i=f(E_i)$. As suggested by Ref. [@aristov2001], this simple discretization (piecewise constant) can be used to compute a numerical approximation $\tilde{\nu}_i$ of the relaxation frequency $\nu_i$ for the discrete probability distribution function $f_i$, namely $$\label{e_nu_num} \nu_i=\nu(f_i)\approx\tilde{\nu}_i = \tilde{F}\,\Delta{E}\,\sum_{j=1}^M f_j\,E_j^{1/2}\,A_{ij},$$ where $$\label{e_A} A_{ij} = \Delta E^{-\theta/2} \int_{-1}^{+1}\int_{-1}^{+1} \left|y\,E_j^{1/2}-x\,E_i^{1/2}\right|^\theta\, dx\,dy,$$ and $$\label{tildeF} \tilde{F}=F\,\left(\frac{\sqrt{\Delta E}}{c_s}\right)^\theta=2^{(\theta+3)/2} \pi^2 a^2 c_s\,\left(\frac{\sqrt{\Delta E}}{c_s}\right)^\theta.$$ The previous expression admits an analytical solution, namely $$\label{e_genA} A_{ij}(\theta) = 2\,\Delta E^{-\theta/2} \frac{\left|E_i^{1/2}+E_j^{1/2}\right|^{2+\theta}-\left|E_i^{1/2}-E_j^{1/2}\right|^{2+\theta}} {E_i^{1/2}E_j^{1/2}(2+3\theta+\theta^2)},$$ which for $\theta=0$ (constant kernel model) yields $A_{ij}(0)=4$, while for $\theta=1$ (hard sphere model, consistent with Ref. [@aristov2001]) yields $$\label{e_genAaristov} A_{ij}(1) = \frac{1}{\sqrt{\Delta E}} \left\{ \begin{array}{rr} 2\,E_j^{1/2}+2/3\,E_i\,E_j^{-1/2}, &\qquad E_i\leq E_j,\\ 2\,E_i^{1/2}+2/3\,E_j\,E_i^{-1/2}, &\qquad E_i>E_j.
\end{array} \right.$$ Similarly, the piecewise discretization can be used to compute a numerical approximation $\tilde{N}_i$ of the gain term $N_i$ for the discrete probability distribution function $f_i$, namely $$\label{e_N_num} N_i=N(f_i,f_i)\approx \tilde{N}_i=\tilde{F}\,\Delta{E}\,\sum_{j=1}^M E_j^{1/2}\,\sum_{k=1}^M\sum_{l=1}^M f_k f_l\,B_{ij}^{kl},$$ where $$\label{e_B} B_{ij}^{kl}= \Delta E^{-\theta/2}\int_{\Omega_{ij}^{kl}} \left|y\,E_j^{1/2}-x\,E_i^{1/2}\right|^\theta\, dx\,dy,$$ and $\Omega_{ij}^{kl}$ is the compatibility domain (which may also be empty). The domain $\Omega_{ij}^{kl}$ is defined as the locus of points $(x,y)\in\Omega$ such that the post-collisional energies $\tilde{E}'(x,y)$ and $\tilde{E}_*'(x,y)$, defined as $$\begin{aligned} \label{ener_post1sb} \tilde{E}'(x,y)&=&E_i\,(1-x^2)+E_j\,y^2,\\ \label{ener_post2sb} \tilde{E}_*'(x,y)&=&E_j\,(1-y^2)+E_i\,x^2,\end{aligned}$$ are approximated by (piecewise) constants over a small region around the point $(E_k,E_l)$. Let us define by $E_{k-}=(k-1)\Delta E$ the lower rounding limit and by $E_{k+}=k\Delta E$ the upper rounding limit (similarly for $E_{l-}$ and $E_{l+}$). Consequently the pair $(x,y)$ belongs to $\Omega_{ij}^{kl}$ if $(\tilde{E}',\tilde{E}_*')$ belongs to $[E_{k-},\,E_{k+}]\times[E_{l-},\,E_{l+}]$, or equivalently $$\begin{aligned} \label{e_post1sb} E_{k-}&\leq E_i\,(1-x^2)+E_j\,y^2&\leq E_{k+},\\ \label{e_post2sb} E_{l-}&\leq E_j\,(1-y^2)+E_i\,x^2&\leq E_{l+}.\end{aligned}$$ Taking into account that $\tilde{E}_*'=\tilde{E}_*'(\tilde{E}')=E_i+E_j-\tilde{E}'$, only a segment of the function $\tilde{E}_*'=\tilde{E}_*'(\tilde{E}')$ can (diagonally) fit into the surface element. Hence, in order to define $\Omega_{ij}^{kl}$, it is enough to solve Eq.
(\[e\_post1sb\]), which can be reformulated as $$\begin{aligned} \label{e_hyperbola1base} \Omega_+&=&\{(x,y)\in\Omega : E_i\,(1-x^2)+E_j\,y^2\leq E_{k+}\},\\ \label{e_hyperbola2} \Omega_{-\infty}&=&\{(x,y)\in\Omega : E_i\,(1-x^2)+E_j\,y^2\geq E_{k-}\},\\ \Omega_{ij}^{kl}&=&\Omega_+\cap\Omega_{-\infty}.\end{aligned}$$ The regions $\Omega_+$ and $\Omega_{-\infty}$ are bounded by two hyperbolas and the region $\Omega_{ij}^{kl}$ is the generic intersection between them. This way of defining $\Omega_{ij}^{kl}$ is not efficient because it requires two different formulas for defining $\Omega_+$ and $\Omega_{-\infty}$ respectively. However, the problem can be conveniently reformulated, namely $$\begin{aligned} \label{e_hyperbola2base} \Omega_{-}&=&\{(x,y)\in\Omega : E_i\,(1-x^2)+E_j\,y^2\leq E_{k-}\},\\ \Omega_{ij}^{kl}&=&\Omega_+-\Omega_{-}.\end{aligned}$$ In this way, $\Omega_+$ and $\Omega_{-}$ are defined by the same formula and similarly for the integrals defined over them, which can be computed by a unique expression, namely $$\label{e_Bo} B_{ij}^{kl}=C(E_{k+})-C(E_{k-}),$$ where $$\label{Cpm} C(E_{k\pm})= \Delta E^{-\theta/2}\int_{\Omega_{\pm}(E_{k\pm})} \left|y\,E_j^{1/2}-x\,E_i^{1/2}\right|^\theta\, dx\,dy.$$ In particular, the shape of the domains $\Omega_{\pm}$ on the plane $(x,y)$ depends on the relative magnitude of the energies $E_i$, $E_j$, $E_{k-}$ and $E_{k+}$. As will be discussed in the next subsections, six cases are possible, but only three formulas ($C_1$, $C_2$ and $C_3$) are required by conveniently switching the arguments, namely $$\label{Cpmgeneric} C(E_{k\pm}) = \left\{ \begin{array}{rr} C_1(E_i,E_j,E_{k\pm}), &\qquad E_{k\pm}\leq E_i\leq E_j,\\ C_2(E_i,E_j,E_{k\pm}), &\qquad E_i\leq E_{k\pm}\leq E_j,\\ C_3(E_i,E_j,E_{k\pm}), &\qquad E_i\leq E_j\leq E_{k\pm},\\ C_1(E_j,E_i,E_{k\pm}), &\qquad E_{k\pm}\leq E_j<E_i,\\ C_2(E_j,E_i,E_{k\pm}), &\qquad E_j\leq E_{k\pm}\leq E_i,\\ C_3(E_j,E_i,E_{k\pm}), &\qquad E_j<E_i\leq E_{k\pm}.
\end{array} \right.$$ Hence, in the following subsections, only the first three cases are discussed. ### Case 1: $E_{k\pm}\leq E_i\leq E_j$ In this case, the domain $\Omega_{\pm}$ is defined by $$\begin{aligned} \label{e_hyperbola1b} \frac{x^2}{a_\pm^2}-\frac{y^2}{b_\pm^2}&\geq&1,\end{aligned}$$ where $a_\pm=\sqrt{1-E_{k\pm}/E_i}$ and $b_\pm=\sqrt{\left(E_i-E_{k\pm}\right)/E_j}$. The domain $\Omega_\pm$ is simply made of two strips between two hyperbolas. Taking into account the already discussed symmetry of the problem with respect to the origin of the plane $(x,y)$, it is possible to save some computations. In particular, considering only the strip such that $q(E_i,E_j)\leq 0$, the function $C_1(E_i,E_j,E_{k\pm})$ can be expressed as $$\label{e_C1} C_1(E_i,E_j,E_{k\pm})= 2\,\Delta E^{-\theta/2}\int_{a_\pm}^{+1} \int_{-c_\pm(x)}^{c_\pm(x)} \,\left|y\,E_j^{1/2}-x\,E_i^{1/2}\right|^\theta\, dy\,dx,$$ where $$\label{e_C} c_\pm(x)=\sqrt{\left(E_i\,x^2-E_i+E_{k\pm}\right)/E_j}.$$ The previous expression $C_1=C_1(\theta)$ can be found analytically for particular values of $\theta\in\mathbb{N}$ (a generic expression in terms of $\theta$ was not found). In particular, for $\theta=0$ (constant kernel model) $$\begin{aligned} \label{e_C1bt0} C_1(0)= 2\,\int_{a_\pm}^{+1} \int_{-c_\pm(x)}^{c_\pm(x)} dy\,dx=\nonumber\\ 2\,\frac{E_{k\pm}^{1/2}}{E_j^{1/2}}+\frac{E_i-E_{k\pm}}{E_i^{1/2} E_j^{1/2}}\, \ln\left(\frac{E_i^{1/2}-E_{k\pm}^{1/2}}{E_i^{1/2}+E_{k\pm}^{1/2}}\right),\end{aligned}$$ and for $\theta=1$ (hard spheres model) $$\label{e_C1bt1} C_1(1)=\frac{2}{{\Delta E}^{1/2}}\, \int_{a_\pm}^{+1} \int_{-c_\pm(x)}^{c_\pm(x)} \left(E_i^{1/2}\,x-E_j^{1/2}\,y\right)\, dy\,dx=\frac{4}{3\,{\Delta E}^{1/2}}\,\frac{E_{k\pm}^{3/2}}{E_i^{1/2}\,E_j^{1/2}}.$$ From the previous expressions, if $E_{k-}$ ($<E_{k+}$) is minimum, i.e. $E_{k-}=0$, then $C_1(0)=C_1(1)=0$.
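The closed forms above can be cross-checked by brute-force quadrature. The following self-contained Python sketch (illustrative only, with arbitrarily chosen energies satisfying $E_{k\pm}\leq E_i\leq E_j$) compares a midpoint-rule evaluation of Eq. (\[e\_C1\]) with the analytical results of Eqs. (\[e\_C1bt0\], \[e\_C1bt1\]):

```python
import math

def c_strip(x, Ei, Ej, Ek):
    # Half-width of the strip in y, Eq. (e_C): c(x) = sqrt((Ei*x^2 - Ei + Ek)/Ej).
    return math.sqrt(max(Ei * x * x - Ei + Ek, 0.0) / Ej)

def C1_numeric(Ei, Ej, Ek, dE, theta, nx=2000, ny=200):
    # Midpoint-rule evaluation of Eq. (e_C1), valid for Case 1 (Ek <= Ei <= Ej).
    # In the strip q <= 0 the integrand |y*sqrt(Ej) - x*sqrt(Ei)|^theta
    # equals (sqrt(Ei)*x - sqrt(Ej)*y)^theta, which is non-negative there.
    a = math.sqrt(1.0 - Ek / Ei)
    hx = (1.0 - a) / nx
    total = 0.0
    for i in range(nx):
        x = a + (i + 0.5) * hx
        c = c_strip(x, Ei, Ej, Ek)
        hy = 2.0 * c / ny
        for j in range(ny):
            y = -c + (j + 0.5) * hy
            total += (math.sqrt(Ei) * x - math.sqrt(Ej) * y) ** theta * hy * hx
    return 2.0 * dE ** (-theta / 2.0) * total

def C1_exact(Ei, Ej, Ek, dE, theta):
    # Closed forms, Eqs. (e_C1bt0) and (e_C1bt1).
    if theta == 0:
        return (2.0 * math.sqrt(Ek / Ej)
                + (Ei - Ek) / math.sqrt(Ei * Ej)
                * math.log((math.sqrt(Ei) - math.sqrt(Ek))
                           / (math.sqrt(Ei) + math.sqrt(Ek))))
    return 4.0 / (3.0 * math.sqrt(dE)) * Ek ** 1.5 / math.sqrt(Ei * Ej)

# Illustrative values with E_k <= E_i <= E_j.
Ei, Ej, Ek, dE = 2.0, 3.0, 1.0, 1.0
for theta in (0, 1):
    num, ex = C1_numeric(Ei, Ej, Ek, dE, theta), C1_exact(Ei, Ej, Ek, dE, theta)
    assert abs(num - ex) < 1e-3, (theta, num, ex)
```

The quadrature agrees with the closed forms to well below the stated tolerance; the residual error is dominated by the square-root behavior of $c_\pm(x)$ near $x=a_\pm$.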
### Case 2: $E_i\leq E_{k\pm}\leq E_j$ In this case, the domain $\Omega_\pm$ is defined by $$\begin{aligned} \label{e_hyperbola1c} \frac{y^2}{b_\pm^2}-\frac{x^2}{a_\pm^2}&\leq&1,\end{aligned}$$ where $a_\pm=\sqrt{E_{k\pm}/E_i-1}$ and $b_\pm=\sqrt{\left(E_{k\pm}-E_i\right)/E_j}$. The domain $\Omega_\pm$ is again made of two strips between two hyperbolas, but the function $C_2(E_i,E_j,E_{k\pm})$ can be computed by means of one strip only ($q(E_i,E_j)\leq 0$). The integral $C_2$ over $\Omega_\pm$ depends on the coordinates $(x_I,y_I)$ of the intersections between the previous hyperbolas and $y_I=\pm 1$. The abscissas of these intersections are $x_I=\pm e_\pm$ where $$\label{e_E} e_\pm= \sqrt{1+\frac{E_j-E_{k\pm}}{E_i}}.$$ In particular, for $E_j\geq E_{k\pm}$, which is the present case, $e_\pm\geq 1$ and consequently the intersections $(x_I,y_I)$ lie outside the domain $\Omega_\pm$. Hence, in the present case, we can neglect this problem. Consequently $$\label{e_C2} C_2(E_i,E_j,E_{k\pm})= 2\,\Delta E^{-\theta/2}\,\int_{-1}^{+1} \int_{-c_\pm(x)}^{d(x)} \left(E_i^{1/2}\,x-E_j^{1/2}\,y\right)^\theta\, dy\,dx,$$ where $c_\pm(x)$ is defined by Eq. (\[e\_C\]) and $d(x)=x\,\sqrt{E_i/E_j}$. The previous integral $C_2=C_2(\theta)$ admits analytical solutions for particular values of $\theta\in\mathbb{N}$. In particular, for $\theta=0$ (constant kernel model) $$\label{e_C2bt0} C_2(0)=2\,\frac{E_{k\pm}^{1/2}}{E_j^{1/2}}+\frac{E_{k\pm}-E_i}{E_i^{1/2} E_j^{1/2}}\, \ln\left(\frac{E_{k\pm}^{1/2}+E_i^{1/2}}{E_{k\pm}^{1/2}-E_i^{1/2}}\right),$$ and for $\theta=1$ (hard spheres model) $$\label{e_C2bt1} C_2(1)=2\,\frac{3\,E_{k\pm}-E_i}{3\,{\Delta E}^{1/2}\,E_j^{1/2}}.$$ ### Case 3: $E_i\leq E_j\leq E_{k\pm}$ In this case, the domain $\Omega_\pm$ is defined by $$\begin{aligned} \label{e_hyperbola1d} \frac{y^2}{b_\pm^2}-\frac{x^2}{a_\pm^2}&\leq&1,\end{aligned}$$ where $a_\pm=\sqrt{E_{k\pm}/E_i-1}$ and $b_\pm=\sqrt{\left(E_{k\pm}-E_i\right)/E_j}$. 
The domain $\Omega_\pm$ is made of a combination of two hyperbolas and the boundaries of $\Omega$: however it is still symmetric with regards to the origin and hence the function $C_3(E_i,E_j,E_{k\pm})$ can be expressed by means of the subregion with $q(E_i,E_j)\leq 0$. Since $E_j\leq E_{k\pm}$, $e_\pm\leq 1$ where $e_\pm$ is given by Eq. (\[e\_E\]) and consequently the intersections $(x_I,y_I)$ between the previous hyperbolas and $y_I=\pm 1$ are inside the domain $\Omega_\pm$. Consequently $$\begin{aligned} \label{e_C3} C_3(E_i,E_j,E_{k\pm})= &&2\,\Delta E^{-\theta/2}\,\int_{-1}^{-e_\pm} \int_{-1}^{d(x)} \left(E_i^{1/2}\,x-E_j^{1/2}\,y\right)^\theta\, dy\,dx+\nonumber\\ &&2\,\Delta E^{-\theta/2}\,\int_{-e_\pm}^{+e_\pm} \int_{-c_\pm(x)}^{d(x)} \left(E_i^{1/2}\,x-E_j^{1/2}\,y\right)^\theta\, dy\,dx+\nonumber\\ &&2\,\Delta E^{-\theta/2}\,\int_{+e_\pm}^{+1} \int_{-1}^{d(x)} \left(E_i^{1/2}\,x-E_j^{1/2}\,y\right)^\theta\, dy\,dx,\end{aligned}$$ where $c_\pm(x)$ is given by Eq. (\[e\_C\]), $d(x)=x\,\sqrt{E_i/E_j}$ and $e_\pm$ is given by Eq. (\[e\_E\]). The previous integral $C_3=C_3(\theta)$ admits an analytical solution for particular values of $\theta\in\mathbb{N}$. In particular, for $\theta=0$ (constant kernel model) $$\begin{aligned} \label{e_C3bt0} C_3(0)&=&4-2\,\sqrt{\frac{E_i+E_j-E_{k\pm}}{E_i}}\nonumber\\ &&-2\,\frac{E_i-E_{k\pm}} {E_i^{1/2} E_j^{1/2}}\, \ln\left[\frac{E_j^{1/2}+\left(E_i+E_j-E_{k\pm}\right)^{1/2}} {\left(E_j-(E_i+E_j-E_{k\pm})\right)^{1/2}}\right],\end{aligned}$$ and for $\theta=1$ (hard spheres model) $$\label{e_C3bt1} C_3(1)=\frac{2}{{\Delta E}^{1/2}}\left[E_j^{1/2}+\frac{1}{3}\,\frac{E_i}{E_j^{1/2}} -\frac{2}{3\,E_i^{1/2}E_j^{1/2}}\,\left(E_i+E_j-E_{k\pm}\right)^{3/2}\right].$$ Clearly, the previous expressions are always well defined, because the maximum value of $E_{k+}$ ($>E_{k-}$) is exactly $E_{k+}=E_i+E_j$, which corresponds to $E_{l-}=0$. In particular, if $E_{k+}=E_i+E_j$, then $C_3(0)=A_{ij}(0)$ and $C_3(1)=A_{ij}(1)$. 
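Two consistency checks follow directly from the closed forms for $\theta=0$ and can be verified in a few lines of Python (the values $E_i=1$, $E_j=4$ are illustrative): $C_2(0)$ and $C_3(0)$ must agree at the case boundary $E_{k\pm}=E_j$, and $C_3(0)$ must equal $A_{ij}(0)$ (which is $4$ for the constant kernel model) at the endpoint $E_{k\pm}=E_i+E_j$, as stated above.

```python
import math

def c2_theta0(Ei, Ej, Ek):
    # Eq. (e_C2bt0)
    return (2.0 * math.sqrt(Ek / Ej) + (Ek - Ei) / math.sqrt(Ei * Ej)
            * math.log((math.sqrt(Ek) + math.sqrt(Ei)) / (math.sqrt(Ek) - math.sqrt(Ei))))

def c3_theta0(Ei, Ej, Ek):
    # Eq. (e_C3bt0); r = Ei + Ej - Ek is the residual energy
    r = Ei + Ej - Ek
    return (4.0 - 2.0 * math.sqrt(r / Ei) - 2.0 * (Ei - Ek) / math.sqrt(Ei * Ej)
            * math.log((math.sqrt(Ej) + math.sqrt(r)) / math.sqrt(Ej - r)))

Ei, Ej = 1.0, 4.0
print(c2_theta0(Ei, Ej, Ej), c3_theta0(Ei, Ej, Ej))   # continuity at E_k = E_j
print(c3_theta0(Ei, Ej, Ei + Ej))                     # equals A_ij(0) = 4
```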
\[DVM\]Discrete Velocity Model (DVM) and master equation -------------------------------------------------------- As already pointed out, the compatibility domain $\Omega_{ij}^{kl}$ is defined as the locus of points $(x,y)\in\Omega$ such that, for some given pre-collisional energies $(E_i,E_j)$, the post-collisional energies $\tilde{E}'(x,y)$ and $\tilde{E}_*'(x,y)$ (see Eqs.(\[ener\_post1sb\], \[ener\_post2sb\])) are in the neighborhood of the node $(E_k,E_l)$ (consistently with the adopted piecewise approximation). Clearly the compatibility domain may also be empty. In particular, two cases may be distinguished. If the pre-collisional energies are such that $E_i+E_j\leq E_M$, then all the post-collisional energies fit into the adopted discretization mesh for the kinetic energy. On the other hand, if $E_i+E_j>E_M$, then some post-collisional energies are still physically possible, but they fall outside the discretization mesh (and they should be excluded for consistency). Hence purely geometrical considerations yield the following property, namely $$\begin{aligned} \label{domain-decom} \bigcup_{k=1}^M\bigcup_{l=1}^M \Omega_{ij}^{kl}&=&\Omega,\qquad\mbox{for}\qquad E_i+E_j\leq E_M,\\ \bigcup_{k=1}^M\bigcup_{l=1}^M \Omega_{ij}^{kl}&\subset&\Omega,\qquad\mbox{for}\qquad E_i+E_j>E_M,\end{aligned}$$ and consequently $$\begin{aligned} \label{e_AvB} \sum_{k=1}^M\sum_{l=1}^M B_{ij}^{kl}&=&A_{ij},\qquad\mbox{for}\qquad E_i+E_j\leq E_M,\\ \sum_{k=1}^M\sum_{l=1}^M B_{ij}^{kl}&<&A_{ij},\qquad\mbox{for}\qquad E_i+E_j>E_M.\end{aligned}$$ In case $E_i+E_j\leq E_M$, the fact that the equality is exactly satisfied (by the discrete numerical operators) is a consequence of the energy formulation, which allows one to ensure perfect conservation of particle number and energy on a discrete lattice. 
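The equality in Eq. (\[e\_AvB\]) can be checked explicitly for the constant kernel model ($\theta=0$, for which $A_{ij}(0)=4$). The Python sketch below assumes the energy mesh $E_i=(i-1/2)\,\Delta E$ with cell boundaries $E_{k\pm}=E_k\pm\Delta E/2$ (consistent with $E_1=\Delta E/2$ used later in the text), so that energy conservation fixes $l=i+j-k$ and the sum of $B_{ij}^{kl}=C(E_{k+})-C(E_{k-})$ over $k$ telescopes; the mesh spacing and the indices are illustrative.

```python
import math

def c1(Ei, Ej, Ek):   # Eq. (e_C1bt0)
    if Ek == 0.0:
        return 0.0
    return (2.0 * math.sqrt(Ek / Ej) + (Ei - Ek) / math.sqrt(Ei * Ej)
            * math.log((math.sqrt(Ei) - math.sqrt(Ek)) / (math.sqrt(Ei) + math.sqrt(Ek))))

def c2(Ei, Ej, Ek):   # Eq. (e_C2bt0)
    return (2.0 * math.sqrt(Ek / Ej) + (Ek - Ei) / math.sqrt(Ei * Ej)
            * math.log((math.sqrt(Ek) + math.sqrt(Ei)) / (math.sqrt(Ek) - math.sqrt(Ei))))

def c3(Ei, Ej, Ek):   # Eq. (e_C3bt0)
    r = Ei + Ej - Ek
    return (4.0 - 2.0 * math.sqrt(r / Ei) - 2.0 * (Ei - Ek) / math.sqrt(Ei * Ej)
            * math.log((math.sqrt(Ej) + math.sqrt(r)) / math.sqrt(Ej - r)))

def C(Ei, Ej, Ek):
    # Dispatch of Eq. (Cpmgeneric): swap the arguments when Ej < Ei
    if Ej < Ei:
        Ei, Ej = Ej, Ei
    if Ek <= Ei:
        return c1(Ei, Ej, Ek)
    return c2(Ei, Ej, Ek) if Ek <= Ej else c3(Ei, Ej, Ek)

def sum_B(i, j, dE=1.0):
    # Sum of B_ij^kl = C(E_k+) - C(E_k-) over all admissible k (l = i + j - k)
    Ei, Ej = (i - 0.5) * dE, (j - 0.5) * dE
    return sum(C(Ei, Ej, k * dE) - C(Ei, Ej, (k - 1) * dE)
               for k in range(1, i + j))

print(sum_B(3, 5), sum_B(2, 7), sum_B(4, 4))   # each ~4 = A_ij(0)
```

The sum telescopes to $C(E_i+E_j)-C(0)=A_{ij}(0)$, which also confirms that the three closed forms match at the case boundaries.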
On the other hand, if $E_i+E_j>E_M$, the pre-collisional energies lying outside the discretization mesh are automatically excluded (even though they are physically possible) and hence the post-collisional energies falling outside the discretization mesh should be excluded as well for consistency. In this way, all the direct and reverse collisions live on the same discretization mesh. The latter strategy is advantageous from the computational point of view, but it reveals that the adopted numerical description only approximates the dynamics due to the collisions with $E_i+E_j>E_M$. If very accurate simulations are required, it would be better to focus on the sub-region $[0,E_M/2]$. Let us define the following matrix $$\label{e_AvB2} \hat{A}_{ij}= \sum_{k=1}^M\sum_{l=1}^M B_{ij}^{kl},$$ and consequently Eq. (\[e\_nu\_num\]) becomes $$\label{e_nu_num2} \nu_i \approx\tilde{\nu}_i= F\,\Delta{E}\,\sum_{j=1}^M f_j\,E_j^{1/2}\,\hat{A}_{ij}.$$ Introducing $\tilde{Q}_i=\tilde{N}_i-\tilde{\nu}_i f_i$ and taking into account Eqs. (\[e\_N\_num\], \[e\_nu\_num2\]) yields $$\label{e_coll7} Q_i=Q(f_i,f_i)\approx\tilde{Q}_i= \tilde{F}\,\Delta{E}\,\sum_{j=1}^M E_j^{1/2}\,\left(\sum_{k=1}^M\sum_{l=1}^M f_k f_l\,B_{ij}^{kl} -f_i f_j\,\hat{A}_{ij}\right).$$ We would like to investigate the (macroscopic) conservation properties of the discrete operator $\tilde{Q}_i$. In order to do this, let us rewrite Eq. (\[changeofvaria\]) for the discrete case, which is simplified by the fact that $dE_*'$, $dE'$, $dE_*$ and $dE$ are all approximated by $\Delta E$, namely $$\label{changeofvaria_d} dx'\,dy'=\sqrt{\frac{E_i\,E_j}{\tilde{E}'(x,y)\,\tilde{E}_*'(x,y)}}\,dx\,dy\approx\sqrt{\frac{E_i\,E_j}{E_k\,E_l}}\,dx\,dy.$$ Clearly the last relation is only asymptotically satisfied by the discrete operator: hence, even though the microscopic collisions are conservative (in terms of mass and kinetic energy), the corresponding macroscopic moments are not exactly conserved. 
The previous relation suggests multiplying and dividing Eq. (\[e\_coll7\]) by $E_i^{1/2}$, which allows one to recover the underlying Discrete Velocity Model (DVM) [@gatignol1975][^1], namely $$\label{e_coll9} \frac{\partial f_i}{\partial t}= \frac{F\Delta E^2}{E_i^{1/2}}\sum_{j,k,l=1}^M \Gamma_{ij}^{kl}\,\left(f_k f_l-f_i f_j\right),$$ where $$\label{e_coll8} \Gamma_{ij}^{kl}= \frac{\sqrt{E_i\,E_j}}{\Delta E}\,B_{ij}^{kl}.$$ The following properties hold [@gatignol1975], namely $$\label{prop_sym} \Gamma_{ij}^{kl}=\Gamma_{ji}^{kl}\approx\Gamma_{kl}^{ij}.$$ We would like to mention that models of this kind may be affected by the problem of (spurious) conservation laws [@bobilev2008]. In this particular case, numerical meshes in the velocity/energy space (i.e. lattices) that are large enough should fix the problem from the practical point of view. Equation (\[e\_coll9\]) is sometimes also called the master equation and $\Gamma_{ij}^{kl}$ is called the matrix of transition frequencies. It is possible to correct the matrix of transition frequencies such that it satisfies the symmetry properties exactly (DVM correction). There are $24=4!$ possible permutations of the four indices $i$, $j$, $k$ and $l$ in the matrix of transition frequencies, but only eight permutations ensure the conservation of kinetic energy. If two indices are equal ($i=j$ or $k=l$), then only four permutations (conserving kinetic energy) are possible. Let us define by $\{\Gamma_{ij}^{kl}\}$ the set of transition frequencies obtained by permutations of the indices $(i,j,k,l)$ conserving kinetic energy. The DVM correction is defined as $$\label{correction} \forall\,(i,j,k,l):\Gamma_{ij}^{kl}\in\{\Gamma_{ij}^{kl}\}, \qquad \tilde{\Gamma}_{ij}^{kl}=\overline{\{\Gamma_{ij}^{kl}\}},$$ where the overline means the arithmetic mean of the considered set (symmetrization). 
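A minimal Python sketch of the DVM correction is given below (the actual package is in Matlab). For each quadruple, the symmetrized value is the arithmetic mean over the index permutations that conserve kinetic energy, i.e. the swaps $i\leftrightarrow j$, $k\leftrightarrow l$ and $(i,j)\leftrightarrow(k,l)$; the toy input dictionary, with a slight asymmetry between a direct and a reverse collision, is purely illustrative.

```python
from itertools import product

def energy_conserving_perms(i, j, k, l):
    # Permutations of (i,j,k,l) conserving kinetic energy: eight in general,
    # four when two indices coincide (i = j or k = l)
    perms = set()
    for (a, b), (c, d) in product([(i, j), (j, i)], [(k, l), (l, k)]):
        perms.add((a, b, c, d))   # swap within each pair
        perms.add((c, d, a, b))   # direct <-> reverse collision
    return perms

def dvm_correction(gamma):
    # Eq. (correction): replace each entry by the mean over its permutation set
    corrected = {}
    for idx in gamma:
        group = energy_conserving_perms(*idx)
        vals = [gamma[p] for p in group if p in gamma]
        mean = sum(vals) / len(vals)
        for p in group:
            corrected[p] = mean
    return corrected

gamma = {(1, 2, 3, 4): 1.00, (3, 4, 1, 2): 0.98}   # toy asymmetric input
g = dvm_correction(gamma)
print(g[(1, 2, 3, 4)], g[(2, 1, 4, 3)], g[(3, 4, 1, 2)])   # all equal
```

After the correction the symmetry of Eq. (\[prop\_sym\_corr\]) holds exactly by construction.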
By means of this DVM correction, the following property holds (exactly) $$\label{prop_sym_corr} \tilde{\Gamma}_{ij}^{kl}=\tilde{\Gamma}_{ji}^{kl}=\tilde{\Gamma}_{kl}^{ij},$$ as required by the DVM models [@gatignol1975]. Consequently it is possible to correct the dimensionless frequencies, namely $$\label{e_coll_corr} \tilde{B}_{ij}^{kl}= \frac{\Delta E}{\sqrt{E_i\,E_j}}\,\tilde{\Gamma}_{ij}^{kl},$$ $$\label{e_AvB2b} \tilde{A}_{ij}= \sum_{k=1}^M\sum_{l=1}^M \tilde{B}_{ij}^{kl},$$ which ensure that both particle number and kinetic energy are perfectly conserved also at the macroscopic level. In the following, the symbols $\tilde{\nu}_i$ and $\tilde{Q}_i$ are still used (for keeping the notation as simple as possible), even though they are computed by $\tilde{B}_{ij}^{kl}$ and $\tilde{A}_{ij}$ instead of $B_{ij}^{kl}$ and $\hat{A}_{ij}$. It is worth pointing out that, because of the DVM correction, $\tilde{A}_{ij}\neq A_{ij}$ even for $E_i+E_j\leq E_M$ (while, under the same conditions, $\hat{A}_{ij}=A_{ij}$). Ensuring the numerical conservation of conserved hydrodynamic moments is also one of the key ideas behind the derivation of the so-called Lattice Boltzmann Method (LBM) [@succi2001] (even though only mass and momentum are conserved on the smallest lattices). \[quad\]Quadrature formulas for computing the moments ----------------------------------------------------- In order to compute the moments defined by Eq. (\[momentp\]), the piecewise constant approximation is used. This is consistent with the recipe used for solving the collisional integral $Q(f,f)=N(f,f)-\nu\,f$. It is worth pointing out that the property given by Eq. (\[prop\_sym\_corr\]) (and ensured numerically by means of the DVM correction) implies the conservation of particle number and energy only if the piecewise constant approximation is used. 
Hence, even though more elaborate quadrature formulas are possible for computing the moments, they would spoil the main advantage of the DVM correction, i.e. ensuring that the conservation laws are perfectly satisfied. According to the piecewise constant approximation, Eq. (\[momentp\]) can be approximated by $$\label{momentp2} \Phi_p\approx\tilde{\Phi}_p= 4\pi\sqrt{2}\,\Delta E\,\sum_{i=1}^M f_i\,E_i^{p+1/2}.$$ This way of computing the moments is straightforward, but it produces some problems in defining the local equilibrium. Suppose we define the local discrete equilibrium as $(f_E)_i=f_E(E_i)$, i.e. the local discrete equilibria coincide with the nodal values of the continuous function $f_E$ defined by Eq. (\[eq\_E\]) (for some values of $n$ and $e$). Applying the previous definition yields $\tilde{\Phi}_0((f_E)_i)\neq n$ and $\tilde{\Phi}_1((f_E)_i)\neq n\,e$, where $n$ and $e$ are defined by continuous integrals in Eq. (\[number\]) and Eq. (\[energy\]) respectively. This is clearly an effect of the numerical error due to the quadrature formula. In order to circumvent this problem, let us define the local equilibrium in the following way, by recursive tuning. For any discrete distribution function $f_i$, let us define $\tilde{n}=\tilde{\Phi}_0(f_i)$ and $\tilde{n}\,\tilde{e}=\tilde{\Phi}_1(f_i)$. Let us define $$\label{eq_E_num} (\tilde{f}_E)_i=\exp[-(\tilde{c}_0+\tilde{c}_1\,E_i)],$$ where the constants $\tilde{c}_0$ and $\tilde{c}_1$ are defined such that $$\label{eq_E_num_cond} \tilde{\Phi}_0\left((\tilde{f}_E)_i\right)=\tilde{n},\qquad \tilde{\Phi}_1\left((\tilde{f}_E)_i\right)=\tilde{n}\,\tilde{e}.$$ By means of this recursive tuning of the local discrete equilibrium, the particle number and energy are both constant during the whole relaxation process. Eventually, if the continuous distribution function is known as the initial condition, the assumptions $\tilde{n}={\Phi}_0(f)$ and $\tilde{n}\,\tilde{e}={\Phi}_1(f)$ (by Eq. 
(\[momentp\])) can be used instead. \[sBGK\]Recovering BGK ---------------------- It is well known that the collisional integral of the Boltzmann equation drives any initial distribution function towards the local equilibrium [@cercignani1987]. When the distribution function is very close to the local equilibrium, the remaining dynamics becomes very slow (on the kinetic time scale) and it can be described by the so-called fluid dynamic time scale (which is suitable for describing phenomena in the corresponding fluid dynamic regime). Let us search for simplified expressions of the collisional integral $\tilde{Q}_i$ in such a regime. The key idea is to use the equilibrium distribution function for computing an approximation of the relaxation frequency given by Eq. (\[e\_nu\_num2\]). Since we search for an approximation of the real relaxation frequency, let us consider $A_{ij}$ (which admits an analytical expression) instead of $\hat{A}_{ij}$ in Eq. (\[e\_nu\_num2\]), namely $\tilde{\nu}_i\approx(\nu_E)_i$ where $$\label{nue} (\nu_E)_i= \tilde{F}\,\Delta{E}\,\sum_{j=1}^M A_{ij}\,E_j^{1/2}\,(\tilde{f}_E)_j.$$ Consequently, recalling Eq. (\[e\_coll7\]), it is possible to introduce the following approximation $$\label{BGKlike} \tilde{Q}_i\approx(\tilde{Q}_{B})_i = (\nu_E)_i\,\left[(\tilde{f}_E)_i-f_i\right].$$ In general, $(\nu_E)_i$ still depends on the particle kinetic energy $E_i$. To fix ideas, let us consider the Constant Kernel Model - CKM ($\theta=0$ in Eq. (\[generic\])), where $A_{ij}(0)=4$ and $$\label{nue_ckm} \nu_{E}(0)= 4\,\tilde{F}\,\Delta{E}\,\sum_{j=1}^M E_j^{1/2}\,(\tilde{f}_E)_j= \frac{\tilde{F}\,\tilde{n}}{\pi\sqrt{2}}=\frac{F\,\tilde{n}}{\pi\sqrt{2}},$$ i.e. the approximated relaxation frequency $\nu_{E}(0)$ is a constant which depends on the local number density. A similar procedure can be followed for the Hard Sphere Model - HSM ($\theta=1$ in Eq. 
(\[generic\])), which also admits an analytical expression for $(\nu_{E})_i(1)$ involving the error function: see Ref. [@cercignani1987] for details. For the present purposes, i.e. the discussion of the numerical results of the test case, let us derive the limit of $(\nu_{E})_i(1)$ for high kinetic energies (by considering the case $E_i>E_j$ in Eq. (\[e\_genAaristov\])), namely $$\label{nue_hsm1} \lim_{E_i\rightarrow E_M}(\nu_{E})_i(1)\approx \frac{2\,E_i^{1/2}}{c_s}\,F\,\Delta{E}\,\sum_{j=1}^M E_j^{1/2}\,(\tilde{f}_E)_j= \frac{F\,\tilde{n}}{\pi\sqrt{2}}\,\left(\frac{\sqrt{E_i}}{2\,c_s}\right).$$ From the previous limiting case, it is possible to derive the so-called BGK approximation [@cercignani1987], where BGK stands for Bhatnagar-Gross-Krook who proposed this simple collisional model. The key idea is to assume a constant relaxation frequency (depending on the local number density), namely $$\label{BGK} (\tilde{Q}_{BGK})_i = \nu_{BGK}\,\left[(\tilde{f}_E)_i-f_i\right].$$ In the following, we will assume for simplicity $\nu_{BGK}=(\nu_E)_1$ where $(\nu_E)_1$ stands for $(\nu_E)_i$ at $E_1=\Delta E/2$, even though it should be (more precisely) $\nu_{BGK}=\lim_{E\rightarrow 0}\nu_E(E)$. \[overview\]Overview of the software structure ============================================== In this section, we provide an overview of the [[HOMISBOLTZ]{}]{} program which was developed using Matlab. The basic idea is to provide a simple illustration of the discussed methodology, which can be easily ported to other environments (FORTRAN, C++,...). The [[HOMISBOLTZ]{}]{} program is free software, which can be redistributed and/or modified under the terms of the GNU General Public License. The [[HOMISBOLTZ]{}]{} program has been purposely designed in order to be minimal, not only with regards to the reduced number of lines (less than 1,000), but also with regards to the coding style (as simple as possible, hence not optimized in terms of execution time). 
A brief flow chart of the program is the following. Essentially there are two main parts in the [[HOMISBOLTZ]{}]{} program: (a) computing the data structure storing the dimensionless frequencies for the considered model, i.e. [[B(i,j)]{}]{}, and (b) the main solution loop (fully explicit and based on the forward Euler integration rule). Both parts are described in the following sections. \[components\]Description of the individual software components =============================================================== Data structure storing the dimensionless frequencies ---------------------------------------------------- The data structure storing the dimensionless frequencies, i.e. [[B(i,j)]{}]{}, is the fundamental data structure of the whole program and it is also the most time-consuming to compute. Essentially [[B(i,j)]{}]{} is the data structure storing the dimensionless frequencies $\tilde{B}_{ij}^{kl}$ (let us suppose that the DVM correction applies) for all the *GAIN* events $(E_k,E_l)\rightarrow(E_i,E_j)$. The dimensionless frequencies $\tilde{A}_{ij}$ for all the *LOSS* events $(E_i,E_j)\rightarrow(E_k,E_l)$ can be computed by Eq. (\[e\_AvB2b\]). The matrix $\tilde{B}_{ij}^{kl}$ is a four-dimensional (sparse) matrix and it is not convenient to compute/store it directly. Let us introduce a proper labeling for dealing with the sparse matrix $\tilde{B}_{ij}^{kl}$. Let us denote by $\Lambda_{ij}$ the set formed by all the pairs of natural indices $(k,l)$ such that $E_k+E_l=E_i+E_j$, with $0< E_k< E_M$ and $0< E_l< E_M$. Let us denote by $M_{ij}$ the number of elements of the set $\Lambda_{ij}$ and let us identify each pair of indices by $\lambda$, namely if $1\leq \lambda\leq M_{ij}$ then $(k(\lambda),l(\lambda))\in\Lambda_{ij}$. Consequently, Eq. 
(\[e\_coll7\]) (after the DVM correction) can be reformulated by reducing the number of nested summations, namely $$\label{e_coll_code} \tilde{Q}_i= \tilde{F}\,\Delta{E}\,\sum_{j=1}^M E_j^{1/2}\,\left(\tilde{\Psi}_{ij} -f_i f_j\,\tilde{A}_{ij}\right),$$ $$\label{Psi_code} \tilde{\Psi}_{ij}=\sum_{\lambda=1}^{M_{ij}} f_{k(\lambda)} f_{l(\lambda)}\,\tilde{B}_{ij}^{k(\lambda)l(\lambda)},$$ and $\tilde{B}_{ij}^{k(\lambda)l(\lambda)}$ is simply $\tilde{B}_{ij}^{kl}$ for $k=k(\lambda)$ and $l=l(\lambda)$ and it is stored in the data structure [[B(i,j)]{}]{}. Hence all the relevant information stored in [[B(i,j)]{}]{} can be labeled by $\lambda$. A brief overview of the data structure [[B(i,j)]{}]{} is the following: each element stores the number of admissible pairs [[B(i,j).howmany]{}]{} (i.e. $M_{ij}$), the index lists [[B(i,j).k(m)]{}]{} and [[B(i,j).l(m)]{}]{} (i.e. $k(\lambda)$ and $l(\lambda)$) and the corresponding dimensionless frequencies [[B(i,j).value(m)]{}]{} (i.e. $\tilde{B}_{ij}^{k(\lambda)l(\lambda)}$), as used in the main loop below.

Main loop
---------

The main loop of the program aims to compute $\tilde{Q}_i=\tilde{N}_i-\tilde{\nu}_i\,f_i$ at a given time step. Recalling Eq. (\[e\_coll\_code\]), the operative formulas immediately follow, namely $$\label{N_code} \tilde{N}_i= \tilde{F}\,\Delta{E}^{3/2}\,\sum_{j=1}^M (j-1/2)^{1/2}\,\tilde{\Psi}_{ij},$$ $$\label{nu_code} \tilde{\nu}_i= \tilde{F}\,\Delta{E}^{3/2}\,\sum_{j=1}^M (j-1/2)^{1/2}\,f_j\,\tilde{A}_{ij}.$$ The previous formulas are implemented straightforwardly in the main loop.

    ...
    for t = ... (time)
        for i = 1:M
            N(i) = 0; nu(i) = 0;
            for j = 1:M
                % GAIN term
                Psi(i,j) = 0;
                for m = 1:B(i,j).howmany
                    k = B(i,j).k(m);
                    l = B(i,j).l(m);
                    Bijkm = B(i,j).value(m);
                    Psi(i,j) = Psi(i,j)+f(k)*f(l)*Bijkm;
                end
                N(i) = N(i)+...
                    F*DeltaE^(3/2)*(j-1/2)^(1/2)*Psi(i,j);
                % LOSS term
                nu(i) = nu(i)+...
                    F*DeltaE^(3/2)*(j-1/2)^(1/2)*f(j)*A(i,j);
            end
            Q(i) = N(i)-nu(i)*f(i);
        end
        % Forward Euler integration rule (explicit)
        f = f+Deltat.*Q;
    end
    ...

\[installation\]Installation instructions ========================================= The package of the [[HOMISBOLTZ]{}]{} program consists of five files, namely 1. [[HOMISBOLTZ.m]{}]{}, which is the (single-file) main program (including all the subroutines described in the previous flow chart); 2. 
[[CKM\_Structure\_B\_DVM\_nodes\_50.mat]{}]{}, which is the binary data file containing the data structure [[B(i,j)]{}]{} in case of the Constant Kernel Model (CKM, $\theta=0$ in Eq. (\[generic\])) with $M=50$; 3. [[CKM\_Structure\_B\_DVM\_nodes\_100.mat]{}]{}, which is the binary data file containing the data structure [[B(i,j)]{}]{} in case of the Constant Kernel Model (CKM, $\theta=0$ in Eq. (\[generic\])) with $M=100$; 4. [[HSM\_Structure\_B\_DVM\_nodes\_50.mat]{}]{}, which is the binary data file containing the data structure [[B(i,j)]{}]{} in case of the Hard Sphere Model (HSM, $\theta=1$ in Eq. (\[generic\])) with $M=50$; 5. [[HSM\_Structure\_B\_DVM\_nodes\_100.mat]{}]{}, which is the binary data file containing the data structure [[B(i,j)]{}]{} in case of the Hard Sphere Model (HSM, $\theta=1$ in Eq. (\[generic\])) with $M=100$. The previous [[.mat]{}]{} files are not strictly required. When executed, the program first searches for the binary data file corresponding to the required combination of collision kernel, discretization resolution ($M$) and DVM correction (ON/OFF). If this binary data file exists, it will be loaded for saving computational time. Otherwise the data structure [[B(i,j)]{}]{} will be computed and saved as binary data file for future use. Hence the previous binary data files are provided as examples. \[test\]Test run description ============================ In this section, a full test case is described. Let us consider a dilute gas made of molecules. The interactions among the molecules can be described by means of the collision kernel given by Eq. (\[generic\]) with the following parameters[^2] $$\label{parameters} a=1,\qquad c_s=50,\qquad \theta=1\qquad\mbox{Hard Sphere Model (HSM)}.$$ No BGK-like approximation is adopted, since we want to investigate the full nonlinear Boltzmann equation (in the homogeneous isotropic case). 
From the numerical point of view, we cannot investigate the full space $\mathbb{R}^+$ for the particle kinetic energy. We need to restrict our investigation to the range $[0,\,E_M]$ and to be sure that the initial conditions fit well into this sub-portion of $\mathbb{R}^+$. Actually, in order to achieve better accuracies, as already pointed out (see Section \[DVM\] for details), the whole dynamic phenomenon should fit into the sub-portion $[0,E_M/2]$. Let us consider the following initial condition $f(t=0,E)=f_I(E)$, where $$\label{f_0} f_I(f_0,G_0,G_{00})=f_0\,\exp{\left[-\frac{\left(\sqrt{E}-\sqrt{G_0}\right)^2}{G_{00}}\right]}.$$ In the considered test case, the values of these parameters are $$\label{parameters2} E_M=5000,\qquad f_0=5\times 10^{-4},\qquad G_0=600,\qquad G_{00}=35.$$ In order to decide the duration of the numerical simulation, we need to investigate the characteristic time scale of the relaxation phenomenon $\tau$. This characteristic time scales as $\tau\sim (n\,F)^{-1}\sim (n\,a^2\,c_s)^{-1}$. However the relaxation may become much slower than this estimate when the distribution function approaches the local equilibrium (fluid dynamic regime). Hence the duration of the phenomenon depends also on how far the initial conditions are from the local equilibrium. For the present test case (by trial and error), the duration of the numerical simulation was fixed at $T_F=1\times 10^{-3}$. Finally, the parameters concerning the numerical integration must be specified. The range $[0,\,E_M]$ is divided into $M=100$ parts and consequently $\Delta E=E_M/M$. The time frame $T_F$ is divided into $T=100$ parts[^3] and consequently $\Delta T=T_F/T$. 
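The set-up above, together with the recursive tuning of Section \[quad\], can be sketched in a few lines of Python (the package itself is in Matlab). The constants $\tilde{c}_0$ and $\tilde{c}_1$ of Eq. (\[eq\_E\_num\]) are obtained here by bisection on $\tilde{c}_1$, which is an implementation choice not prescribed by the text; the discrete moments of the tuned equilibrium then reproduce $\tilde{n}$ and $\tilde{n}\,\tilde{e}$ to machine accuracy.

```python
import math

EM, M = 5000.0, 100                      # Eq. (parameters2)
dE = EM / M
E = [(i + 0.5) * dE for i in range(M)]   # nodes E_i = (i - 1/2) dE
S = 4.0 * math.pi * math.sqrt(2.0) * dE  # prefactor of Eq. (momentp2)

def phi(f, p):
    # Discrete moment of Eq. (momentp2)
    return S * sum(fi * Ei ** (p + 0.5) for fi, Ei in zip(f, E))

# Initial condition of Eq. (f_0):
f0, G0, G00 = 5e-4, 600.0, 35.0
f = [f0 * math.exp(-(math.sqrt(Ei) - math.sqrt(G0)) ** 2 / G00) for Ei in E]

n = phi(f, 0)                            # n~
e = phi(f, 1) / n                        # e~ (mean energy per particle)

def ratio(c1):
    # Phi~_1 / Phi~_0 of exp(-c1 E): monotonically decreasing in c1
    g0 = sum(math.sqrt(Ei) * math.exp(-c1 * Ei) for Ei in E)
    g1 = sum(Ei ** 1.5 * math.exp(-c1 * Ei) for Ei in E)
    return g1 / g0

lo, hi = 1e-6, 1.0                       # bisection bracket (assumption)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ratio(mid) > e else (lo, mid)
c1 = 0.5 * (lo + hi)
g0 = sum(math.sqrt(Ei) * math.exp(-c1 * Ei) for Ei in E)
c0 = math.log(S * g0 / n)                # fixes Phi~_0 = n~ exactly
fE = [math.exp(-(c0 + c1 * Ei)) for Ei in E]
print(phi(fE, 0) / n, phi(fE, 1) / (n * e))   # both ~1
```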
Since an explicit integration rule is used to solve the kinetic equation (namely the forward Euler rule), an upper threshold on the discretization time step is expected, namely $$\label{CFL} \Delta T<k_\gamma\,\Delta E^\gamma,$$ where $k_\gamma$ is a proper constant and $\gamma$ is an exponent depending on the mode driving the instability ($\gamma=1$ for the advective mode and $\gamma=2$ for the diffusive mode). The previous condition is the celebrated Courant-Friedrichs-Lewy (CFL) stability condition. The adopted parameters for the considered test case satisfy this condition. In the numerical simulations, a few non-conserved moments are monitored during the relaxation phenomenon, namely $\tilde{\Phi}_p$ with $p\in[2-9]$ (see Eq. (\[momentp2\]) for details). ![(Color online) Distribution function dynamics from the initial condition (blue), namely $f(t=0,E)=f_I(E)$ where $f_I(E)$ is given by Eq. (\[f\_0\]), to the local equilibrium (black), namely $\tilde{f}_E$ given by Eq. (\[eq\_E\_num\], \[eq\_E\_num\_cond\]). []{data-label="fig:1"}](Fig1.eps){width="80.00000%"} Figure \[fig:1\] reports the distribution function dynamics from the initial condition given by Eq. (\[f\_0\]), to the local equilibrium given by Eq. (\[eq\_E\_num\], \[eq\_E\_num\_cond\]). The approach to the local equilibrium is initially quite rapid (kinetic stage) and it becomes very slow closer to the equilibrium (fluid dynamic stage). It is not so difficult to catch the main trend in the dynamics of the distribution function. However, the formulation in terms of the distribution function may hide some accuracy problems in the relaxation of the high-order moments close to the equilibrium. ![(Color online) Macroscopic moments dynamics (in time) described by means of the relaxation rates $\tilde{R}_p$ given by Eq. (\[RP\]) for $p\in[2-9]$. 
[]{data-label="fig:2"}](Fig2.eps){width="80.00000%"} ![(Color online) Normalized macroscopic moments dynamics (in time) described by means of the normalized relaxation rates $\tilde{R}_p/\tilde{R}_p(t=0)$, where $\tilde{R}_p$ is given by Eq. (\[RP\]) for $p\in[2-9]$. []{data-label="fig:3"}](Fig3.eps){width="80.00000%"} In order to investigate the last point, let us introduce the relaxation rate $\tilde{R}_p$ for the macroscopic moment $\tilde{\Phi}_p$, namely $$\label{RP} \tilde{R}_p=\frac{\tilde{\Phi}_p-\tilde{\Phi}_p^E}{\tilde{\Phi}_p^E},$$ where $\tilde{\Phi}_p^E=\tilde{\Phi}_p(\tilde{f}_E)$. The time evolution of the macroscopic moments with $p\in[2-9]$ is described in Figure \[fig:2\] by means of $\tilde{R}_p$ and in Figure \[fig:3\] by means of the normalized relaxation rates $\tilde{R}_p/\tilde{R}_p(t=0)$. Both quantities approach the zero value in the late dynamics. The proposed method (and in particular the DVM correction and the recursive tuning of the local equilibrium, see Section \[quad\] for details) allows one to catch very precisely the approach to the local equilibrium, even by high-order moments. According to the reported results, the hard sphere model produces a slower approach to the equilibrium by the higher order moments. This point is investigated next. ![(Color online) Time evolution of the normalized effective relaxation frequencies $\tilde{\nu}_p/\nu_{BGK}$ for $p\in[2-9]$, where $\tilde{\nu}_p$ is given by Eq. (\[nup\]) and $\nu_{BGK}$ is the constant frequency prescribed by the BGK model (see Section \[sBGK\] for details). []{data-label="fig:4"}](Fig4.eps){width="80.00000%"} In order to check even more precisely the late dynamics of the high-order moments, let us introduce a (time-dependent) effective[^4] relaxation frequency $\tilde{\nu}_p$ for the moment $p$, namely $$\label{nup} \tilde{\nu}_p=\frac{\tilde{\Phi}_p^Q}{\tilde{\Phi}_p^E-\tilde{\Phi}_p},$$ where $\tilde{\Phi}_p^Q=\tilde{\Phi}_p(\tilde{Q})$. 
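As a minimal illustration (in Python; the mesh, the equilibrium constants and the perturbation are illustrative, not the test-case values), the diagnostic of Eq. (\[RP\]) vanishes identically when the distribution coincides with the local equilibrium and becomes nonzero for any perturbed state:

```python
import math

M, dE = 100, 50.0
E = [(i + 0.5) * dE for i in range(M)]
S = 4.0 * math.pi * math.sqrt(2.0) * dE

def phi(f, p):
    # Discrete moment of Eq. (momentp2)
    return S * sum(fi * Ei ** (p + 0.5) for fi, Ei in zip(f, E))

def R(f, fE, p):
    # Relaxation rate of Eq. (RP)
    return (phi(f, p) - phi(fE, p)) / phi(fE, p)

c0, c1 = 8.0, 2.0e-3                     # illustrative equilibrium constants
fE = [math.exp(-(c0 + c1 * Ei)) for Ei in E]
f = [v * (1.0 + 0.1 * math.sin(0.01 * Ei)) for v, Ei in zip(fE, E)]

print([R(fE, fE, p) for p in range(2, 10)])   # all exactly zero
print(R(f, fE, 2))                            # nonzero for the perturbed state
```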
In Figure \[fig:4\] the effective relaxation frequencies for $p\in[2-9]$ are normalized by $\nu_{BGK}$, where $\nu_{BGK}=(\nu_E)_1$, $(\nu_E)_1$ stands for $(\nu_E)_i$ at $E_1=\Delta E/2$ and $(\nu_E)_i$ is given by Eq. (\[nue\]). The results reported in Figure \[fig:4\] show that $\tilde{\nu}_p<\nu_{BGK}$ during the whole dynamics and actually $\tilde{\nu}_p/\nu_{BGK}$ all tend to the same asymptotic value ($\approx0.665$) for $t\rightarrow\infty$. In order to explain such behavior, let us consider the BGK-like approximation given by Eq. (\[BGKlike\]), i.e. $\tilde{Q}_i\approx(\tilde{Q}_{B})_i$, and let us introduce it in the definition of $\tilde{\nu}_p$ (the subscript $i$ has been removed for simplicity), namely $$\label{nup2} \tilde{\nu}_p\approx\frac{\tilde{\Phi}_p\left(\nu_E\,(\tilde{f}_E-f)\right)}{\tilde{\Phi}_p\left(\tilde{f}_E-f\right)}.$$ The previous approximation allows one to interpret $\tilde{\nu}_p$ as a weighted average of the relaxation frequency $(\nu_E)_i$ (valid in the late dynamics) by means of the weight $(\tilde{f}_E-f)_i$. The weight $(\tilde{f}_E-f)_i$ has no definite sign: the ranges of $E_i$ where this weight is positive or negative depend on the initial condition (both ranges must exist because $(\tilde{f}_E)_i$ and $f$ have the same number density by definition). In particular, the adopted initial condition given by Eq. (\[f\_0\]) implies $(\tilde{f}_E-f)_i<0$ for high kinetic energies (see Figure \[fig:1\]). Taking into account that the relaxation frequency $(\nu_E)_i$ of the hard sphere model for high kinetic energies tends to increase monotonically as $(\nu_E)_i\sim\sqrt{E_i}$ (according to Eq. (\[nue\_hsm1\])), this leads to a penalization effect in the computation of the effective frequency $\tilde{\nu}_p$. This penalization is larger for higher order moments (i.e. 
it increases with $p$, as shown in Figure \[fig:4\]) and this explains why the hard sphere model produces a slower approach to the equilibrium by the higher order moments. Conclusions =========== In this work, some improvements to the deterministic numerical method proposed by Aristov [@aristov2001] for the homogeneous isotropic Boltzmann equation are discussed. Firstly, the original problem was reformulated in terms of particle kinetic energy, which allows one to ensure exact particle number and energy conservation during the microscopic collisions (momentum is trivially conserved because of the isotropic symmetry). Secondly, the computation of the relaxation rates was improved by the DVM correction, which allows one to satisfy the macroscopic conservation laws exactly and is particularly suitable for dealing with the late dynamics of the relaxation towards the equilibrium. This work also aims to distribute an open-source program (called [[HOMISBOLTZ]{}]{}), which can be easily understood and modified for dealing with different applications (thermodynamics, econophysics and sociodynamics), in order to derive reliable reference solutions (with an accuracy which cannot be easily obtained by stochastic methods). The [[HOMISBOLTZ]{}]{} program was developed using Matlab. The basic idea is to provide a simple illustration of the discussed methodology, which can be easily ported to other environments (FORTRAN, C++,...). The [[HOMISBOLTZ]{}]{} program is free software, which can be redistributed and/or modified under the terms of the GNU General Public License. The [[HOMISBOLTZ]{}]{} program has been purposely designed in order to be minimal, not only with regards to the reduced number of lines (less than 1,000), but also with regards to the coding style (as simple as possible, hence not optimized in terms of execution time). 
Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank Professor Taku Ohwada (Kyoto University, Japan) for many enlightening clarifications about the solution of the Boltzmann equation by deterministic numerical methods. Moreover, he would like to thank Dr. Miguel Onorato and Davide Proment (Università degli Studi di Torino, Physics Department, Italy) for useful comments. The author acknowledges the support of the EnerGRID project. [50]{} L. Boltzmann, Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen, Sitzungsberichte der Akademie der Wissenschaften, LXVI, pp. 275-370, 1872. (Further Studies on the Thermal Equilibrium of Gas Molecules, in S.G. Brush, Kinetic Theory, Vol. II, Pergamon Press, Oxford, 1966, pp. 88-175). C. Cercignani, [*The Boltzmann Equation and Its Applications*]{}, Applied Mathematical Sciences, Springer-Verlag, 1987. C. Cercignani, [*Slow Rarefied Flows: Theory and Application to Micro-Electro-Mechanical Systems*]{}, Progress in Mathematical Physics, Birkhäuser, 2006. C. Cercignani, [*Ludwig Boltzmann: The Man Who Trusted Atoms*]{}, Oxford University Press, New York, 1998. C.A. Truesdell, [*Six Lectures on Modern Natural Philosophy*]{}, Springer-Verlag, 1966. Y. Sone, [*Kinetic Theory and Fluid Dynamics*]{}, Modeling and Simulation in Science, Engineering and Technology, Birkhäuser, 2002. S. Succi, [*Lattice Boltzmann Equation for Fluid Dynamics and Beyond*]{}, Oxford University Press, 2001. I.V. Karlin, P. Asinari, Factorization symmetry in the lattice Boltzmann method, Physica A: Statistical Mechanics and its Applications, Vol. 389, Issue 8, 1530-1548, 2010. D.L. Marchisio, R.O. Fox, Solution of population balance equations using the direct quadrature method of moments, Journal of Aerosol Science, Vol. 36, Issue 43, 2005. H. Struchtrup, [*Macroscopic Transport Equations for Rarefied Gas Flows: Approximation Methods in Kinetic Theory*]{}, Interaction of Mechanics and Mathematics, Springer, 2005. A. 
Chatterjee, S. Yarlagadda, B.K. Chakrabarti, [*Econophysics of Wealth Distributions*]{}, Springer, 2005. B. Mandelbrot, The Pareto-Lévy law and the distribution of income, International Economic Review, Vol. 1, 79–106, 1960. B. Düring, D. Matthes, G. Toscani, Kinetic equations modelling wealth redistribution: A comparison of approaches, Phys. Rev. E, Vol. 78, Issue 5, 2008. D. Helbing, [*Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction Processes*]{}, Kluwer Academic Publishers, 1995. W. Weidlich, [*Sociodynamics: A Systematic Approach to Mathematical Modelling in the Social Sciences*]{}, Harwood Academic Publishers, 2000. V.V. Aristov, [*Direct Methods for Solving the Boltzmann Equation and Study of Nonequilibrium Flows*]{}, Kluwer Academic Publishers, 2001. R. Gatignol, [*Théorie Cinétique des Gaz à Répartition Discrète de Vitesses*]{}, Springer, 1975. A.V. Bobylev, M.C. Vinerean, Construction of Discrete Kinetic Models with Given Invariants, J. Stat. Phys., Vol. 132, 153–170, 2008. [^1]: In Eq. (\[e\_coll9\]), we have adopted a dimensionless $\Gamma_{ij}^{kl}$, which is different from the convention used in Ref. [@gatignol1975]. However, we note that $[F]\,[E^{3/2}]\,[f]=[s]^{-1}$, where $[\cdot]$ denotes the physical dimensions. Another difference with regard to Ref. [@gatignol1975] is the term $E_i^{1/2}$ in the denominator, which arises because of the homogeneous isotropic formulation considered here. [^2]: The International System of Units (SI) applies. Clearly, molecules of $1$ m are not realistic, but this dimension was adopted for simplicity. It is important to point out that the characteristic time scale of the relaxation phenomenon $\tau$ scales as $\tau\sim (n\,a^2\,c_s)^{-1} \sim a^{-2}$. [^3]: Temperature is not used directly in the code, and this ensures that there is no possibility of confusion in the adopted notation. [^4]: Clearly, the definition given by Eq. 
(\[nup\]) leads to an indeterminate form ($0/0$) for $t\rightarrow\infty$. From the numerical point of view, this may produce some spurious results, particularly when the BGK approximation is used. However, this happens when the quantity $\tilde{\nu}_p$ is no longer actually relevant.
--- author: - 'A. Campos, B. Morgan' bibliography: - 'library.bib' title: 'Self-consistent feedback mechanism for the sudden viscous dissipation of finite-Mach-number compressing turbulence' --- Previous work (S. Davidovits and N. J. Fisch, “Sudden viscous dissipation of compressing turbulence,” Phys. Rev. Lett. 116, 105004, 2016) demonstrated that the compression of a turbulent field can lead to a sudden viscous dissipation of turbulent kinetic energy (TKE), and suggested this mechanism could potentially be used to design new fast-ignition schemes for inertial confinement fusion. We expand on that work by accounting for finite Mach numbers, rather than relying on a zero-Mach-limit assumption as was previously done. The finite-Mach-number formulation is necessary to capture a self-consistent feedback mechanism in which dissipated TKE increases the temperature of the system, which in turn modifies the viscosity and thus the TKE dissipation itself. Direct numerical simulations with a tenth-order accurate Padé scheme were carried out to analyze this self-consistent feedback loop for compressing turbulence. Results show that, for finite Mach numbers, the sudden viscous dissipation of TKE still occurs, for both the solenoidal and dilatational turbulent fields. As the domain is compressed, oscillations in dilatational TKE are encountered due to the highly oscillatory nature of the pressure dilatation. An analysis of the source terms for the internal energy shows that the mechanical work term dominates the viscous turbulent dissipation. As a result, the effect of the suddenly dissipated TKE on temperature is minimal for the Mach numbers tested. Moreover, an analytical expression is derived that confirms the dissipated TKE does not significantly alter the temperature evolution for low Mach numbers, regardless of compression speed. 
The self-consistent feedback mechanism is thus quite weak for subsonic turbulence, which could limit its applicability for inertial confinement fusion. Introduction ============ The compression of a turbulent flow occurs in a broad array of applications. Examples include one-dimensional compressions in internal combustion engines [@han1995] or across shock waves [@larsson2013], axisymmetric compressions in Z-pinches [@kroupp2018], spherically-symmetric compressions in inertial confinement fusion [@weber2014; @haines2014], and three-dimensional complex contractions in the interstellar medium [@robertson2012]. Moreover, the compression mechanism often leads to complex turbulence dynamics, and the resulting evolution of turbulence can have a strong effect on the overall behavior of the application under consideration. Thus, increased levels of understanding and improved modeling capabilities for this phenomenon are essential. Numerous direct numerical simulations of compressing turbulence have been previously carried out with the aim of improving engineering turbulence models; see for example [@wu1985; @coleman1991; @blaisdell1991; @coleman1993; @blaisdell1996]. These studies treated the fluid as a traditional gas, for which the dependence of viscosity $\mu$ on temperature $T$ is given by $\mu \sim T^n$, with $n$ having a value of, or close to, $3/4$. On the other hand, [@davidovits2016] demonstrated, through computational simulations, that when a power law exponent characteristic of weakly-coupled plasmas is used, i.e. $n=5/2$, a sudden viscous mechanism occurs which dissipates the turbulent energy. Their results showed that a turbulent field subjected to a continuous isotropic compression initially creates an amplification of turbulent kinetic energy (TKE), until viscous scaling dominates and TKE is rapidly dissipated into heat. 
It was thus proposed in [@davidovits2016] that the resulting increases of temperature could be used to improve the ignition conditions for inertial confinement fusion. Subsequent work has expanded on the simulations of [@davidovits2016]. The effect of ionization on the scaling of viscosity was accounted for in [@davidovits2016b]. For that study, the ionization state $Z$ was assumed to depend solely on temperature, and thus the plasma viscosity $\mu \sim T^{5/2} / Z^4$ was simplified to the form $\mu \sim T^\beta$. Their analysis of the evolution of the energy spectrum showed that the sudden dissipation of TKE occurs for $\beta > 1$ only. A TKE model that accounts for the viscous dissipative mechanism for isotropic compressions is presented in [@davidovits2017]. This model was validated against direct numerical simulations, and showed excellent agreement for viscosity-power-law exponents greater than one. The model was then used to estimate the partition of energy between the turbulence and heat, as the compression proceeds in time. A two-point spectral model based on the EDQNM formulation was used by [@viciconte2018], along with direct numerical simulations, to reproduce the sudden viscous dissipation mechanism. The lower computational cost of the EDQNM model allowed for the analysis of high-Reynolds-number effects and thus the identification of three distinct regimes: turbulent production, non-linear energy transfer, and viscous dissipation. Moreover, the assumption of homogeneous turbulence was relaxed and a spherical inhomogeneous turbulent layer under compression was simulated with both DNS and EDQNM closures. The sudden dissipation of TKE was also observed for this new case. Finally, in [@davidovits2018], a stability boundary for hot spot turbulence was derived to demarcate states of the compression for which a decrease of TKE is guaranteed. Moreover, an upper limit for the amount of TKE that can be generated during a compression was proposed. 
This upper limit was then compared to the internal thermal energy of the system. The simulations of the sudden viscous dissipation mechanism previously conducted have relied on the zero-Mach-limit assumption. Given a decomposition of the velocity $U_i = {\left <}U_i {\right >}+ u_i$, where ${\left <}U_i {\right >}$ is the Reynolds-averaged mean flow and $u_i$ the fluctuating velocity, the governing equations for the fluctuations in the zero-Mach limit take the form [@wu1985] $$\label{eq:lowmach_1} \frac{\partial u_i}{\partial x_i} = 0,$$ $$\label{eq:lowmach_2} {\left <}\rho {\right >}\left ( \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} \right)= - \frac{\partial P}{\partial x_i} + \mu \frac{\partial^2 u_i}{\partial x_j \partial x_j} + f_i.$$ In the above, ${\left <}\rho {\right >}$ is the Reynolds-averaged density, $P$ the pressure, $\mu$ the viscosity, and $f_i$ a forcing function that accounts for the effect of the compression. The viscosity depends on temperature, and thus an a priori time evolution for the temperature needs to be provided. For an adiabatic isotropic compression, this is $$\label{eq:adiabatic_temp} T = T_0 L^{-2},$$ where $L$ is a characteristic length of the domain being compressed and $T_0$ the initial temperature. The approach described above is suitable for demonstrating the sudden dissipation of TKE, but does not capture the self-consistent feedback mechanism mentioned in [@davidovits2016]. The feedback mechanism begins with an energy transfer from the TKE towards the internal energy, as a result of the sudden viscous dissipation. This, in turn, causes increased temperatures that amplify the viscosity of the system. The stronger values of viscosity then precipitate the viscous dissipation of TKE, thus completing the self-consistent feedback loop. 
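The strength of this feedback can be gauged directly from the adiabatic scaling: combining $T = T_0 L^{-2}$ with $\mu = \mu_0 (T/T_0)^n$ gives $\mu/\mu_0 = L^{-2n}$. A minimal sketch comparing the two power-law exponents discussed in the introduction (the compression ratio below is an illustrative choice):

```python
def viscosity_amplification(L, n):
    """mu/mu0 under the adiabatic scaling T/T0 = L**-2 with mu ~ T**n."""
    return (L ** -2) ** n

# compress the domain to one fifth of its initial size
for n, label in [(3.0 / 4.0, "traditional gas"),
                 (5.0 / 2.0, "weakly coupled plasma")]:
    print(label, viscosity_amplification(0.2, n))
```

At one fifth of the initial domain size, the plasma-like exponent amplifies the viscosity by a factor of $5^5 = 3125$, versus roughly $11$ for a traditional gas, which is why the dissipation becomes sudden in the plasma case.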
In the zero-Mach limit, an evolution equation for the internal energy is not solved, and thus the effect of the dissipated TKE on the internal energy and the viscosity cannot be reproduced. It is expected that accounting for the feedback mechanism would lead to viscous dissipations that are more sudden, or of increased intensity [@davidovits2016]. An alternative to the assumption of the zero-Mach limit is turbulence belonging to the finite-Mach-number regime. For this case, fully coupled governing equations for density, velocity, and energy are solved, which allows for an explicit accounting of the forward transfer of dissipated TKE into heat, and the subsequent effect of increased temperature and viscosity on the dissipation. The focus of this study is the simulation of turbulence in the finite-Mach-number regime to investigate the complex self-consistent feedback mechanism, and thus further assess the benefits of viscous dissipation for inertial-confinement fusion and other high-energy-density applications. The outline of the paper is as follows. \[sec:governing\_equations\] includes a description of the governing equations for turbulence in the finite-Mach-number regime. The mechanisms that account for the energy transfer between the TKE and the internal energy are also discussed. In \[sec:comp\_details\], details of the numerical simulations, such as the discretization scheme and the creation of realistic initial conditions, are included. The results of the simulations are then provided in \[sec:results\], which is divided into two subsections. \[sec:tke\_results\] focuses on the component of the feedback mechanism related to the TKE. Thus, the evolution of the TKE, its budget, and spectra are analyzed in this subsection. The component of the feedback loop associated with the internal energy is then investigated in \[sec:ie\_results\], where an analysis of the temperature evolution and of the sources for the internal energy is included. 
Finally, the paper ends with \[sec:conclusion\], where concluding remarks and a discussion of future work are provided. Governing equations {#sec:governing_equations} =================== Navier-Stokes equations for isotropic compressions -------------------------------------------------- We denote by $\widetilde{U}_i$ and $u''_i$ the Favre-averaged and Favre-fluctuating velocities, respectively, so that $U_i = \widetilde{U}_i + u''_i$ [@wilcox2010]. In analogy to the zero-Mach-limit formulation of [@davidovits2016], we analyze the effect of a compression on a statistically homogeneous turbulent field $u''_i$, where the compression is achieved through a specified Favre-averaged mean flow $\widetilde{U}_i$. The Favre-averaged velocity for homogeneous compressible turbulence must be restricted to the form $\widetilde{U}_i = G_{ij} x_j$ [@blaisdell1991]. The deformation tensor $G_{ij}$ corresponding to an isotropic compression is given in [@wu1985; @blaisdell1991; @davidovits2016], and can be expressed as $$\label{eq:def_tensor} G_{ij} = \frac{\dot{L}}{L} \delta_{ij},$$ where $L$ is a time-dependent characteristic length of the compressed domain, and $\dot{L}$ is the constant time rate of change of $L$. Given this formalism, one can derive, as detailed in \[sec:comp\_fluc\_NS\_mean\_comp\], a set of Navier-Stokes equations for the fluctuating velocity undergoing mean-flow compression. 
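As a concrete check of this formalism, the isotropic deformation tensor of Eq. (\[eq:def\_tensor\]) and the mean flow it generates can be sketched in a few lines (the numerical values of $L$ and $\dot{L}$ are illustrative assumptions):

```python
import numpy as np

L, Ldot = 0.8, -0.5            # illustrative length and (constant) rate
G = (Ldot / L) * np.eye(3)     # isotropic deformation tensor G_ij
x = np.array([0.3, -0.1, 0.7])
U_mean = G @ x                 # mean flow U_i = G_ij x_j

# the mean dilatation of an isotropic compression is 3*Ldot/L
print(np.trace(G))
```

The trace recovers the mean dilatation $\partial \widetilde{U}_i / \partial x_i = 3\dot{L}/L$, which is negative for a compression ($\dot{L} < 0$).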
These equations, which are summarized below, constitute the finite-Mach-number analog of the low-Mach-number \[eq:lowmach\_1,eq:lowmach\_2,eq:adiabatic\_temp\], $$\label{eq:rho_miranda} \frac{\partial \rho}{\partial t} + \frac{\partial \rho u''_i}{\partial x_i} = f^{(\rho)},$$ $$\label{eq:rhou_miranda} \frac{\partial \rho u''_i}{\partial t} + \frac{\partial \rho u''_i u''_j}{\partial x_j} = \frac{\partial \sigma_{ij}}{\partial x_j} + f_i^{(u)},$$ $$\label{eq:rhoE_miranda} \frac{\partial \rho E_t}{\partial t} + \frac{\partial \rho E_t u''_i}{\partial x_i} = \frac{\partial u''_i \sigma_{ij}}{\partial x_j} + \frac{\partial}{\partial x_j} \left ( \kappa \frac{ \partial T}{\partial x_j} \right ) + f^{(e)}.$$ In the above, $\rho$ is the density, $u''_i$ the Favre-fluctuating velocity, and $T$ the temperature. $E_t$ is the total energy, which is given by $E_t = U + K$, where $U=C_vT$ is the internal energy and $K = \frac{1}{2} u''_i u''_i$ is the kinetic energy associated with the turbulent fluctuations. $C_v$ is the specific heat at constant volume. The stress tensor is given by $$\sigma_{ij} = -P \delta_{ij} + 2 \mu \left [ \frac{1}{2} \left ( \frac{\partial u''_i}{\partial x_j} + \frac{\partial u''_j}{\partial x_i} \right ) - \frac{1}{3} \frac{\partial u''_k}{\partial x_k} \delta_{ij} \right ] ,$$ where $P$ is the pressure and $\mu$ the viscosity. A power law of the form $\mu = \mu_0 \left (T / T_0\right)^n$ is used, where $\mu_0$ and $T_0$ represent reference viscosity and temperature values, and $n$ is the power-law exponent. The thermal conductivity $\kappa$ is computed according to $\kappa = \mu C_p / Pr$, where $C_p$ is the specific heat at constant pressure and $Pr$ the Prandtl number. An ideal equation of state $P = \rho R T$ is used, where $R$ is the ideal gas constant. 
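For reference, the assembly of the stress tensor above from a velocity-gradient tensor can be sketched in a few lines (the gradient values, $P$, and $\mu$ below are arbitrary illustrative inputs):

```python
import numpy as np

def stress_tensor(grad_u, P, mu):
    """sigma_ij = -P*delta_ij + 2*mu*(S_ij - S_kk*delta_ij/3), with S_ij
    the symmetric part of the velocity-gradient tensor grad_u[i, j]."""
    S = 0.5 * (grad_u + grad_u.T)
    dev = S - (np.trace(S) / 3.0) * np.eye(3)   # traceless (deviatoric) part
    return -P * np.eye(3) + 2.0 * mu * dev

grad_u = np.array([[0.1, 0.4, 0.0],
                   [0.2, -0.3, 0.1],
                   [0.0, 0.5, 0.2]])
sigma = stress_tensor(grad_u, P=1.2, mu=0.01)

# the viscous part is traceless, so tr(sigma) = -3P
print(np.trace(sigma))   # -3.6 up to round-off
```

Since the viscous contribution is deviatoric, the trace of $\sigma_{ij}$ reduces to $-3P$ regardless of the velocity field; only the pressure changes the mean normal stress.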
The forcing functions $f^{(\rho)}$, $f^{(u)}_i$, and $f^{(e)}$ account for the effect of the mean compression on the density, velocity, and total energy, respectively, and are defined as $$\label{eq:frho_miranda} f^{(\rho)} = -2 \dot{L} \rho,$$ $$\label{eq:frhou_miranda} f^{(u)}_i = -3 \dot{L} \rho u''_i,$$ $$\label{eq:fe_miranda} f^{(e)} = - \left [2 \rho E_t + \rho u''_i u''_i + 3 P \right ] \dot{L} .$$ The equations above are suitable for numerical simulations, since the compressive effect of the mean flow $\widetilde{U}_i$ has been abstracted into the three forcing functions above. These are the equations solved for the current study. Energy exchange for compressible turbulence ------------------------------------------- For turbulence in the finite-Mach-number regime, the Helmholtz decomposition is often employed to express the fluctuating velocity as $u''_i = u''^{(s)}_i + u''^{(d)}_i$, where $u''^{(s)}_i$ and $u''^{(d)}_i$ are the solenoidal and dilatational velocities, respectively. The solenoidal component satisfies $\nabla \times \bold{u}''^{(s)} = \bold{w}$ and $\nabla \cdot \bold{u}''^{(s)} = 0$, where $\bold{w} =\nabla \times \bold{u}''$ is the vorticity vector, and the dilatational component satisfies $\nabla \times \bold{u}''^{(d)} = 0$ and $\nabla \cdot \bold{u}''^{(d)} = d$, where $d = \nabla \cdot \bold{u}''$ is the dilatation. Given this decomposition, two TKEs can be defined. 
These are the solenoidal TKE $$k^{(s)} = \frac{1}{2} \widetilde{u''^{(s)}_i u''^{(s)}_i},$$ and the dilatational TKE $$k^{(d)} = \frac{1}{2} \widetilde{u''^{(d)}_i u''^{(d)}_i}.$$ There are two additional energies in the system, namely, the mean kinetic energy $$\bar{K} = \frac{1}{2} \widetilde{U}_i \widetilde{U}_i,$$ and the mean internal energy $$\widetilde{U} = C_v \widetilde{T}.$$

Name & Symbol & Definition\
Intermode advection & $IA^{(\alpha)}$ & $- \left < \frac{\partial \sqrt{\rho} u''_i u''_j}{\partial x_j} \sqrt{\rho} u''^{(\alpha)}_i \right > + \left < \frac{\rho u''_i u''^{(\alpha)}_i}{2} d \right >$\
Production & $P^{(\alpha)}$ & $- \frac{2}{3} {\left <}\rho {\right >}k^{(\alpha)} G_{ii}$\
Solenoidal dissipation & ${\left <}\rho {\right >}\epsilon^{(s)}$ & $\left < \mu w_i w_i \right >$\
Dilatational dissipation & ${\left <}\rho {\right >}\epsilon^{(d)}$ & $\frac{4}{3} \left < \mu d^2 \right >$\
Pressure dilatation & $PD$ & $\left < P d \right >$\
Mean kinetic energy advection & $AD^{(\bar{K})}$ & ${\left <}\rho {\right >}\widetilde{U}_j \frac{ \partial \bar{K}}{\partial x_j}$\
Mean kinetic energy transport & $T^{(\bar{K})}$ & $\frac{\partial}{\partial x_j} \left ( \widetilde{U}_i {\left <}\rho {\right >}\tau_{ij} + \widetilde{U}_j {\left <}P {\right >}\right )$\
Mechanical work & $MW$ & $-{\left <}P {\right >}G_{ii}$\

A derivation of the governing equations for the solenoidal and dilatational TKEs is given in \[sec:sol\_dil\_evolution\]. 
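The solenoidal/dilatational split underlying these TKEs can be computed spectrally for a periodic field, by projecting the Fourier modes onto the wavevector direction. A minimal sketch (the grid size and test field are illustrative; this is not the implementation used in the simulations):

```python
import numpy as np

def helmholtz_split(u):
    """Spectral Helmholtz decomposition of a periodic field u[3, N, N, N]
    into solenoidal (divergence-free) and dilatational (curl-free) parts."""
    N = u.shape[1]
    k1 = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers
    k = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))
    k2 = np.sum(k * k, axis=0)
    k2[0, 0, 0] = 1.0                            # avoid 0/0 at the mean mode
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    ud_h = k * (np.sum(k * uh, axis=0) / k2)     # projection onto k
    ud = np.real(np.fft.ifftn(ud_h, axes=(1, 2, 3)))
    return u - ud, ud                            # solenoidal, dilatational

# illustrative periodic test field on a 32^3 grid
N = 32
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.stack([np.sin(Y) + np.cos(X),             # mixed rotational/compressive
              np.sin(Z) + np.cos(2.0 * Y),
              np.sin(X)])
us, ud = helmholtz_split(u)
```

Because $\hat{\mathbf{u}}''^{(d)}$ is parallel to $\mathbf{k}$ its curl vanishes, while the remainder is divergence-free; re-applying the split to either component returns it unchanged.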
Along with the evolution equations for $\bar{K}$ and $\widetilde{U}$, one can summarize the governing dynamics of the four relevant energies for homogeneous turbulence as follows $$\begin{aligned} {\left <}\rho {\right >}\frac{d k^{(s)}}{d t} &= IA^{(s)} +P^{(s)} - \langle \rho \rangle \epsilon^{(s)}, \label{eq:ks_evolution}\\ {\left <}\rho {\right >}\frac{d k^{(d)}}{d t} &= IA^{(d)} +P^{(d)} - \langle \rho \rangle \epsilon^{(d)} + PD, \label{eq:kd_evolution}\\ {\left <}\rho {\right >}\frac{\partial \bar{K} }{\partial t} &= -AD^{(\bar{K})} - T^{(\bar{K})} - MW - P^{(s)} - P^{(d)}, \label{eq:me_evolution}\\ {\left <}\rho {\right >}\frac{d \tilde{U}}{dt} &= MW + \langle \rho \rangle \epsilon^{(s)} + \langle \rho \rangle \epsilon^{(d)} - PD \label{eq:ie_evolution}.\end{aligned}$$ Each of the sources in the evolution equations above is defined in \[tb:energy\_sources\]. We note that the derivation of the evolution equations for the four energies assumed a generic yet isotropic deformation tensor $G_{ij}$, and neglected the averaged heat flux, since for homogeneous turbulence the averaged temperature is uniform in space [@blaisdell1991]. The intermode advection represents a transfer of energy between the solenoidal and dilatational modes, and thus satisfies $IA^{(s)} = -IA^{(d)}$. The production terms transfer the compression energy stored in the mean flow to the solenoidal and dilatational TKEs. The solenoidal and dilatational dissipations then transfer energy stored in the solenoidal and dilatational fields into heat. The pressure dilatation represents a two-way energy transfer between the mean internal energy and the dilatational TKE only, and the mechanical work transfers energy of the compression directly into heat. The mean-kinetic-energy advection and transport are not identically zero, unlike the case for the three other energies. All of these energy transfer mechanisms are depicted in \[fig:energy\_chart\]. 
We note that each energy component has a direct interaction with each of the other three energies. We also note that the driver for the interactions is the mean kinetic energy, since it has a predetermined time evolution that emulates the compression of the system. The other three energy components then respond in a self-consistent fashion to the time-evolution of the mean kinetic energy. The self-consistent feedback mechanism for the sudden viscous dissipation relies on these complex interactions, and thus can only be represented using the finite-Mach-number formulation and not the low-Mach-number assumption. ![Schematic of energy transfer between the four energy components: mean kinetic energy, mean internal energy, solenoidal TKE, and dilatational TKE.[]{data-label="fig:energy_chart"}](./figs/energy_chart.pdf){width="\textwidth"} Computational details {#sec:comp_details} ===================== Direct numerical simulations of \[eq:rho\_miranda,eq:rhou\_miranda,eq:rhoE\_miranda\] are carried out with the Miranda code developed at the Lawrence Livermore National Laboratory. This solver employs a tenth-order accurate Padé scheme [@lele1992] for the discretization of the spatial derivatives, and a fourth-order, low-storage, five-step Runge-Kutta solver [@kennedy1999] for the temporal derivatives. An eighth-order compact filter is applied to the conserved variables $\rho$, $\rho u''_i$, and $E_t$ after each substep of the Runge-Kutta scheme, for the purposes of stability. Miranda relies on the artificial-fluid-property approach to stabilize shock waves and contact discontinuities. Thus, an artificial bulk viscosity $\beta^*$ is introduced in the definition of the viscous stress tensor, and an artificial thermal conductivity $\kappa^*$ is added to the thermal conductivity $\kappa$ of the fluid. 
The artificial bulk viscosity and artificial thermal conductivity are computed as $$\begin{aligned} \beta^* &= \overline{ C_\beta \rho D(d)}, \\ \kappa^* &= \overline{ C_\kappa \rho \frac{c_v}{T \Delta t} D(T)}.\end{aligned}$$ In the above, $\Delta t$ is the time step, the overbar denotes a truncated-Gaussian filter, and $D(\cdot)$ is an eighth-order derivative operator defined as $$D(\cdot) = \max \left ( \left | \frac{\partial^8 \cdot}{\partial x^8} \right | \Delta x^{10}, \left | \frac{ \partial^8 \cdot}{\partial y^8} \right | \Delta y^{10}, \left | \frac{\partial^8 \cdot}{\partial z^8} \right | \Delta z^{10} \right ).$$ This operator strongly biases the artificial properties towards high wave numbers. The coefficients $C_\beta = 0.07$ and $C_\kappa = 0.001$ have been calibrated for simulations relevant to inertial confinement fusion, see for example [@weber2015; @weber2014; @weber2013]. For further details or capabilities of the code, the reader is referred to [@cook2004; @cook2007; @olson2014]. The computational domain consists of a cube of length $2\pi$, with a uniform distribution of $256^3$ grid points. Periodic boundary conditions are applied on all sides of the cube. The ratio of specific heats has a value of $\gamma = 5/3$, and the Prandtl number is set to $Pr = 1$. The gas constant is computed as $R = R_u / M$, where the universal gas constant is $R_u = 8.314474 \times 10^{7}$ (cgs units), and the molar mass used is that of Deuterium, i.e. $M = 2.014102$. Statistical quantities are obtained by averaging over all nodes of the mesh. ![The dissipation spectrum, normalized by the Kolmogorov velocity $u_\eta = (\epsilon \nu)^{1/4}$, for simulations of linearly-forced compressible turbulence.[]{data-label="fig:epsd_low_diss_spec"}](./figs/epsd_low_diss_spec.pdf){width="49.00000%"} The initial flow field is extracted from a simulation of linearly-forced compressible turbulence [@petersen2010; @campos2017]. 
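The strong high-wavenumber bias of $D(\cdot)$ is easy to verify in one dimension; the sketch below evaluates the eighth derivative spectrally (a stand-in for the compact finite differences actually used in the solver; the grid size and mode numbers are illustrative):

```python
import numpy as np

def d8_magnitude(f, dx):
    """|d^8 f / dx^8| * dx**10 on a periodic 1-D grid, evaluated
    spectrally (a 1-D stand-in for the operator D in the text)."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    f8 = np.real(np.fft.ifft(((1j * k) ** 8) * np.fft.fft(f)))
    return np.abs(f8) * dx ** 10

N = 64
dx = 2.0 * np.pi / N
x = np.arange(N) * dx
smooth = np.sin(2.0 * x)    # well-resolved mode
ragged = np.sin(12.0 * x)   # marginally resolved mode

r = d8_magnitude(ragged, dx).max() / d8_magnitude(smooth, dx).max()
print(r)   # (12/2)**8 = 1679616, up to round-off
```

The magnitude grows as the eighth power of the mode number, so well-resolved features contribute essentially nothing to the artificial properties.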
This preliminary simulation is carried out for a duration of 18 initial eddy-turn-over times. The forcing coefficients introduced in [@petersen2010] require the specification of a priori values for the solenoidal and dilatational dissipations. These two quantities were obtained from specifying a total dissipation $\epsilon = \epsilon^{(s)} + \epsilon^{(d)}$ and a dissipation ratio $\epsilon^{(d)} / \epsilon^{(s)}$. The value of the total dissipation is chosen a priori so that the corresponding Kolmogorov scale $\eta = (\nu^3 / \epsilon)^{1/4}$ is sufficiently large compared to the grid spacing. \[fig:epsd\_low\_diss\_spec\] plots the dissipation spectrum at the end of the preliminary forced-turbulence simulation, and illustrates the range of scales resolved on the mesh, i.e. $0 \le \kappa \eta \le 2$. This figure shows that we capture the long tail for high wavenumbers as it smoothly approaches a value of zero (a model spectrum for comparison is shown in [@pope2001]), and thus do not predict a fictitious energy pileup or unphysical rapid decay at the highest wave modes. This serves as further evidence that the chosen combination of forcing coefficients and mesh resolution appropriately captures all of the dissipative scales, as should be the case for any direct numerical simulation. The ratio of dissipations was set to $\epsilon^{(d)} / \epsilon^{(s)} = 0.01$. Simulations of compressing turbulence that were initialized from a linearly-forced case with $\epsilon^{(d)} / \epsilon^{(s)} = 1.0$ were also carried out. Results for this higher initial dissipation ratio are qualitatively similar to those of the lower dissipation ratio and are therefore not included in this paper. 
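The diagnostics used to characterize this initial field, the turbulent Mach number and the Taylor-scale Reynolds number defined in the next paragraph, amount to the following one-liners (a sketch; the input values below are arbitrary and do not reproduce the reported state):

```python
import numpy as np

def turbulent_mach(u2_mean, gamma, R, T):
    """M_t = sqrt(<u''_i u''_i>) / c, with sound speed c = sqrt(gamma*R*T)."""
    return np.sqrt(u2_mean) / np.sqrt(gamma * R * T)

def taylor_reynolds(k, eps, nu):
    """Re_lambda = sqrt(20 k**2 / (3 eps nu))."""
    return np.sqrt(20.0 * k ** 2 / (3.0 * eps * nu))

# illustrative inputs (not the values of the actual initial field)
print(turbulent_mach(u2_mean=1.0, gamma=5.0 / 3.0, R=1.0, T=0.6))
print(taylor_reynolds(k=1.5, eps=0.3, nu=2.0e-3))
```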
The turbulent Mach number and Taylor-scale Reynolds number are defined as $$\label{eq:turb_mach_number} M_t = \frac{\sqrt{ \widetilde{u''_i u''_i} }}{ \widetilde{c}},$$ $$\label{eq:taylor_Re} Re_\lambda = \left(\frac{20 k^2}{3 \epsilon \nu} \right)^{1/2},$$ where $c = \sqrt{\gamma R T}$ is the speed of sound, and $\nu = {\left <}\mu {\right >}/ {\left <}\rho {\right >}$ is the averaged kinematic viscosity. The extracted turbulent field at the end of the linear forcing has $M_t \approx 0.65$ and $Re_\lambda \approx 70$. The corresponding ratio of dilatational to solenoidal TKE is $k^{(d)} / k^{(s)} = 0.033$. For the linearly-forced simulations, the power law exponent is set to the traditional fluid value of $n = 3/4$. However, once the isotropic compression is applied to the initial flow field, the power law exponent is switched to the value used in [@davidovits2016], namely $n= 5/2$, so as to reproduce the sudden viscous dissipation mechanism. The initial ratio of the artificial dissipation [@campos2017] to physical dissipation is 0.016, which decays rapidly as the compression is initiated. Thus, the simulations are mostly affected by physical rather than artificial dissipative mechanisms, as should be the case for a properly-refined direct numerical simulation [@olson2014]. Results {#sec:results} ======= The analysis of the self-consistent feedback mechanism is divided into two subsections. The first focuses on the behavior of the TKEs and the various mechanisms depicted in \[fig:energy\_chart\] that modulate their temporal evolution. The second half of the analysis is centered around the resulting evolution of the internal energy, and the amplification of the temperature due to the viscous dissipation. Turbulent kinetic energies {#sec:tke_results} -------------------------- ### Profile histories [0.49]{} ![Evolution of (a) solenoidal and (b) dilatational TKE as a function of the size of the domain $L$. 
The solenoidal and dilatational TKEs are normalized by their initial values $k^{(s)}_0$ and $k^{(d)}_0$, respectively. The initial length of the domain is $L=1$, which decreases as time progresses. The colors correspond to different values of $S^*_0$, which is the shear parameter $S^*$ at time $t=0$.[]{data-label="fig:epsd_low_tke"}](./figs/epsd_low_tkes.pdf "fig:"){width="\textwidth"} [0.49]{} ![Evolution of (a) solenoidal and (b) dilatational TKE as a function of the size of the domain $L$. The solenoidal and dilatational TKEs are normalized by their initial values $k^{(s)}_0$ and $k^{(d)}_0$, respectively. The initial length of the domain is $L=1$, which decreases as time progresses. The colors correspond to different values of $S^*_0$, which is the shear parameter $S^*$ at time $t=0$.[]{data-label="fig:epsd_low_tke"}](./figs/epsd_low_tked.pdf "fig:"){width="\textwidth"} The time evolutions of the solenoidal and dilatational TKEs are shown in \[fig:epsd\_low\_tkes,fig:epsd\_low\_tked\], respectively. As done in [@davidovits2016], rather than plotting the TKE evolution against time, a parameterization in terms of the length of the domain $L$ is used, and thus time progresses from right to left. Also as in [@davidovits2016], the TKE evolutions for different compression speeds $\dot{L}$ are shown. These different cases are labelled by the initial value of the RDT parameter $ S^* = S k / \epsilon $, where $S = \dot{L} / L$ is the inverse time scale of the compression, and $ k / \epsilon$ is the time scale of the turbulence. For sufficiently large values of $S^*$, the compression is rapid enough that the non-linear turbulence-turbulence interactions are negligible, and the evolution of the turbulence is described exactly by rapid-distortion theory (RDT) [@pope2001; @blaisdell1996]. As \[fig:epsd\_low\_tkes\] shows, the evolution of solenoidal TKE exhibits the sudden viscous dissipation mechanism of [@davidovits2016]. 
Even though the compression speeds used in this study are different from those of [@davidovits2016], there is strong qualitative agreement between the results shown in \[fig:epsd\_low\_tkes\] and those in fig. 1 of [@davidovits2016]. \[fig:epsd\_low\_tked\] shows that the dilatational TKE also exhibits the sudden viscous dissipation mechanism. Additionally, strong agreement with RDT [@blaisdell1996] is observed for both the solenoidal and dilatational modes given the fastest compression rates. A few differences between the behavior of solenoidal and dilatational TKE are noted. First, increasingly strong oscillations of dilatational TKE are observed as $S^*_0$ is decreased. This highly oscillatory behavior is discussed further in \[sec:tke\_budget\]. Second, the dilatational energy lags the solenoidal energy. At the last recorded instance in time, the solenoidal TKE has decayed by more than two orders of magnitude for case $S^*_0 = 50$, whereas the dilatational TKE has decayed by only about two orders of magnitude. For the $S^*_0 = 500$ case, the solenoidal TKE has decayed by more than three orders of magnitude and the dilatational TKE has decreased by less than two orders of magnitude. Lastly, for the slowest compression rate, the solenoidal and dilatational TKE diverge in their initial behavior: whereas the dilatational TKE slightly increases until it suddenly dissipates, the solenoidal TKE decreases from the start. [0.49]{} ![Profiles of (a) solenoidal and (b) dilatational TKE plotted against the shear parameter $S^*_\alpha = S k^{(\alpha)} / \epsilon^{(\alpha)}$, where $\alpha = s$ for fig. (a) and $\alpha = d$ for fig. (b). The solenoidal and dilatational TKEs are normalized by their initial values $k^{(s)}_0$ and $k^{(d)}_0$, respectively. 
The vertical dashed line in the left plot corresponds to the time at which $P^{(s)} = {\left <}\rho {\right >}\epsilon^{(s)}$, and the vertical dashed line on the right plot corresponds to the time at which $P^{(d)} = {\left <}\rho {\right >}\epsilon^{(d)}$. The dashed diagonal lines correspond to \[eq:Ske\_s\_scaling\].[]{data-label="fig:epsd_low_Ske"}](./figs/epsd_low_Ske_s.pdf "fig:"){width="\textwidth"} [0.49]{} ![Profiles of (a) solenoidal and (b) dilatational TKE plotted against the shear parameter $S^*_\alpha = S k^{(\alpha)} / \epsilon^{(\alpha)}$, where $\alpha = s$ for fig. (a) and $\alpha = d$ for fig. (b). The solenoidal and dilatational TKEs are normalized by their initial values $k^{(s)}_0$ and $k^{(d)}_0$, respectively. The vertical dashed line in the left plot corresponds to the time at which $P^{(s)} = {\left <}\rho {\right >}\epsilon^{(s)}$, and the vertical dashed line on the right plot corresponds to the time at which $P^{(d)} = {\left <}\rho {\right >}\epsilon^{(d)}$. The dashed diagonal lines correspond to \[eq:Ske\_s\_scaling\].[]{data-label="fig:epsd_low_Ske"}](./figs/epsd_low_Ske_d.pdf "fig:"){width="\textwidth"} An alternate representation of the evolution of TKEs is shown in \[fig:epsd\_low\_Ske\_s,fig:epsd\_low\_Ske\_d\]. In \[fig:epsd\_low\_Ske\_s\] the evolution of the solenoidal TKE is parameterized by the solenoidal shear parameter $S^*_s = S k^{(s)}/\epsilon^{(s)}$, and in \[fig:epsd\_low\_Ske\_d\] the evolution of the dilatational TKE is parameterized by the dilatational shear parameter $S^*_d = S k^{(d)}/\epsilon^{(d)}$. The dashed vertical lines correspond to the point in time at which production is equal to dissipation, that is, the point at which $P^{(s)} = {\left <}\rho {\right >}\epsilon^{(s)}$ for \[fig:epsd\_low\_Ske\_s\] and $P^{(d)} = {\left <}\rho {\right >}\epsilon^{(d)}$ for \[fig:epsd\_low\_Ske\_d\]. 
The dashed diagonal lines correspond to the assumption of RDT scaling, for which $\epsilon^{(\alpha)} \sim \left (k^{(\alpha)} \right)^3$ for $\alpha = s,d$. That is, assuming this relationship between TKE and dissipation, the TKE can be expressed as $$\label{eq:Ske_s_scaling} k^{(\alpha)} = \frac{\left ( k^{(\alpha)} \right)^{3/2} }{ \left ( k^{(\alpha)} \right)^{1/2} } \sim \frac{ \left ( \epsilon^{(\alpha)} \right)^{1/2} }{ \left ( k^{(\alpha)} \right)^{1/2} } \sim S^{1/2} \left ( S_\alpha^* \right )^{-1/2}$$ for $\alpha = s,d$. It is important to note that $k^{(\alpha)}$ does not scale simply as $\left ( S_\alpha^* \right )^{-1/2}$ since $S$ also depends on time. However, the dependence of $S$ on time is given by the predetermined and known compression history of the domain $L$. \[fig:epsd\_low\_Ske\_s\] shows that for compressions $S^*_0 = 5.0$, 50, and $500$, the increase in solenoidal TKE is in close agreement with RDT. That is, the RDT assumption of $\epsilon^{(\alpha)} \sim \left (k^{(\alpha)} \right)^3$ holds quite well during the initial phase of the compression. The agreement with the scaling of \[eq:Ske\_s\_scaling\] could be beneficial for modeling purposes. Significant divergence from the RDT scaling occurs once the vertical line at which solenoidal production equals solenoidal dissipation is reached. After this point, the solenoidal dissipation overtakes the solenoidal production, and the turbulence decays. For the $S^*_0 = 0.50$ case, the compression is slow enough that the solenoidal production is never larger than the solenoidal dissipation, and thus the entire curve is located to the left of the vertical dashed line. \[fig:epsd\_low\_Ske\_d\] shows a similar trend. We first note that the rapid oscillations in the dilatational profiles corresponding to slow compression speeds are also evident in this figure.
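The proportionality in \[eq:Ske\_s\_scaling\] can be checked with a few lines of arithmetic: if $\epsilon = C k^3$ with $C$ constant, then $S^{1/2}(S^*)^{-1/2} = C^{1/2}\,k$, so $k \propto S^{1/2}(S^*)^{-1/2}$ at fixed $S$. A minimal numerical check (the constants here are arbitrary, chosen only to exercise the proportionality):

```python
import numpy as np

# If eps = C * k^3 (the RDT regime), then S^(1/2) * (S*)^(-1/2) is
# proportional to k, which is the content of eq. (Ske_s_scaling).
C = 2.7                            # arbitrary constant in eps = C k^3
S = 5.0                            # compression rate S = Ldot / L, frozen for this check
k = np.linspace(0.1, 10.0, 50)     # a sweep of hypothetical TKE values
eps = C * k**3
S_star = S * k / eps               # shear parameter S* = S k / eps
pred = S**0.5 * S_star**(-0.5)     # claimed scaling, up to a constant
ratio = k / pred                   # should equal 1/sqrt(C) for every k
print(ratio.min(), ratio.max())
```

The ratio is constant across the whole sweep, confirming that the scaling holds exactly whenever $\epsilon \sim k^3$; in the simulations the residual time dependence enters only through $S(t)$, which is set by the prescribed compression history.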
The agreement with the scaling $\epsilon^{(d)} \sim \left ( k^{(d)} \right)^3$ still holds for the $S^*_0 = 5.0, 50,\text{ and }500$ cases, although, for the $S^*_0 = 5.0$ case, this agreement is not as strong as that of the corresponding solenoidal field. More importantly, the vertical line at which dilatational production equals dilatational dissipation no longer demarcates the domains of increasing and decreasing turbulence for all four cases, since for $S^*_0 = 50$ the dilatational TKE continues to increase after this vertical line is reached. Lastly, \[fig:epsd\_low\_Ske\_d\] shows that the decrease in energy is slower than that observed in \[fig:epsd\_low\_Ske\_s\] for cases $S^*_0 = 50$ and 500. This suggests that the dilatational dissipation is acting against an additional source, which, as will be shown in \[sec:tke\_budget\], turns out to be the pressure dilatation. ### Budgets {#sec:tke_budget} \[fig:epsd\_low\_tke\_budget\_1,fig:epsd\_low\_tke\_budget\_2,fig:epsd\_low\_tke\_budget\_3,fig:epsd\_low\_tke\_budget\_4\] contain the TKE budgets for the solenoidal and dilatational fields. For the $S^*_0 = 0.5$ case shown in \[fig:epsd\_low\_tke\_budget\_1\], oscillations in the pressure dilatation and dilatational dissipation are observed. The magnitude of the oscillations in ${\left <}\rho {\right >}\epsilon^{(d)}$ is significantly smaller than that of $PD$. We also note that the oscillations of the pressure dilatation and dilatational dissipation are correlated, with the dilatational dissipation slightly lagging the pressure dilatation. Moreover, the oscillations in $k^{(d)}$ shown in \[fig:epsd\_low\_tked\] are also correlated with $PD$, with $k^{(d)}$ lagging behind $PD$. This serves as evidence that the pressure dilatation is responsible for the oscillatory behavior of the dilatational TKE. The strong oscillatory nature of $PD$ has been observed elsewhere; see for example [@kida1990; @miura1995] for the case of forced turbulence and [@blaisdell1991; @sarkar1992] for sheared turbulence.
For the $S^*_0 = 5.0$ case shown in \[fig:epsd\_low\_tke\_budget\_2\], the oscillations in $PD$ and ${\left <}\rho {\right >}\epsilon^{(d)}$ have been attenuated. \[fig:epsd\_low\_tke\_budget\_3,fig:epsd\_low\_tke\_budget\_4\] show that as the compression speed is increased to $S^*_0 = 50$ and 500, $PD$ and ${\left <}\rho {\right >}\epsilon^{(d)}$ do not exhibit oscillations up to the last simulated instance in time. Several trends shown by \[fig:epsd\_low\_tke\_budget\_1,fig:epsd\_low\_tke\_budget\_2,fig:epsd\_low\_tke\_budget\_3,fig:epsd\_low\_tke\_budget\_4\] are worth noting. As the compression speed increases, the peak values of the profiles occur at smaller and smaller domain lengths, as highlighted by the different ranges used for the $x$ axis. This is in agreement with \[fig:epsd\_low\_tke\] and the results in [@davidovits2016]. However, more relevant to the current study is that, as the compression speed increases, peak values for the dilatational TKE sources occur at even smaller domain lengths than those of the solenoidal energy. For example, for compression speeds $S^*_0 = 50$ and 500, peak values for the solenoidal dissipation and production are reached before those of the dilatational dissipation and pressure dilatation. These last two terms begin to increase rapidly only after the solenoidal dissipation and production are already decaying from their maximum values. This lag in the dilatational modes with respect to the solenoidal ones was also observed in \[fig:epsd\_low\_tke\], as previously noted, but is now more apparent due to the smaller scale of the $x$ axis in \[fig:epsd\_low\_tke\_budget\_1,fig:epsd\_low\_tke\_budget\_2,fig:epsd\_low\_tke\_budget\_3,fig:epsd\_low\_tke\_budget\_4\]. The second trend to highlight is that, as the compression speed is increased, the solenoidal sources do not show as distinct a change in behavior as the dilatational sources. The solenoidal dissipation and production profiles retain the same shape, though with different magnitudes, for the different compression rates.
On the other hand, the pressure dilatation and dilatational dissipation go from a highly-oscillatory pattern to smoothly-varying non-oscillatory profiles for the fastest compression speed. This poses formidable challenges for modeling purposes. The last trend to highlight is that the pressure dilatation is either skewed towards positive values, as is the case for $S^*_0 = 0.5$, or is positive throughout the entire compression. This is further exemplified by \[tb:energy\_integrated\_sources\], which shows the integrated values of the energy transfer mechanisms, from the initial to the last available simulated time. All integrated values for the pressure dilatation are positive. Thus, $PD$ behaves more as a source than as a sink or a neutral term in the balance of dilatational TKE. Consequently, the dilatational dissipation needs to counteract the effect of both the dilatational production and the pressure dilatation for the sudden viscous dissipation to occur in the dilatational field. Given that for the two fastest compressions the integrated contribution of $PD$ is almost as large as that of ${\left <}\rho {\right >}\epsilon^{(d)}$, it is not unexpected that the dilatational TKE decays at a slower rate than the solenoidal TKE, as shown in \[fig:epsd\_low\_Ske\]. [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 0.5$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$.[]{data-label="fig:epsd_low_tke_budget_1"}](./figs/epsd_low_ks_bal_1.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ks\_bal\_1\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 0.5$. The initial length of the domain is $L=1$, which decreases as time progresses.
All terms have been normalized by $\rho_0 U_0^3 / L_0$.[]{data-label="fig:epsd_low_tke_budget_1"}](./figs/epsd_low_kd_bal_1.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_kd\_bal\_1\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 5.0$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_2"}](./figs/epsd_low_ks_bal_2.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ks\_bal\_2\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 5.0$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_2"}](./figs/epsd_low_kd_bal_2.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_kd\_bal\_2\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_3"}](./figs/epsd_low_ks_bal_3.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ks\_bal\_3\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. 
The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_3"}](./figs/epsd_low_kd_bal_3.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_kd\_bal\_3\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 500$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_4"}](./figs/epsd_low_ks_bal_4.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ks\_bal\_4\] [0.49]{} ![TKE budget for the solenoidal mode on the left and the dilatational mode on the right, for $S^*_0 = 500$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The same legend as that of \[fig:epsd\_low\_tke\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_tke_budget_4"}](./figs/epsd_low_kd_bal_4.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_kd\_bal\_4\]

Integrated values of the energy transfer mechanisms, from the initial to the last available simulated time:[]{data-label="tb:energy_integrated_sources"}

|                                          | $S^*_0 = 0.5$ | $S^*_0 = 5.0$ | $S^*_0 = 50$ | $S^*_0 = 500$ |
|------------------------------------------|---------------|---------------|--------------|---------------|
| $IA^{(s)}$                               | 2.32e-01      | 4.80e+01      | 4.45e+03     | 4.90e+03      |
| $IA^{(d)}$                               | -2.32e-01     | -4.80e+01     | -4.45e+03    | -4.90e+03     |
| $P^{(s)}$                                | 2.89e+01      | 3.08e+03      | 1.97e+05     | 1.03e+07      |
| $P^{(d)}$                                | 2.80e+00      | 1.57e+02      | 1.39e+04     | 3.23e+05      |
| ${\left <}\rho {\right >}\epsilon^{(s)}$ | 1.46e+02      | 1.54e+04      | 9.82e+05     | 5.07e+07      |
| ${\left <}\rho {\right >}\epsilon^{(d)}$ | 1.87e+01      | 1.07e+03      | 3.24e+05     | 6.59e+06      |
| $PD$                                     | 4.89e+00      | 3.34e+02      | 2.64e+05     | 5.15e+06      |
| $MW$                                     | 3.34e+05      | 2.69e+07      | 1.35e+09     | 2.14e+10      |

### Spectra The energy spectra for the solenoidal and dilatational fields are shown in \[fig:epsd\_low\_tke\_spectra\], for the compression speed of $S^*_0 = 5.0$.
The profile obtained at $L \approx 0.10$ corresponds to a point in time for which the sudden viscous dissipation mechanism is taking place, and the profile at $L \approx 0.04$ to a time for which most of the turbulence has already been dissipated. The solenoidal and dilatational spectra exhibit essentially the same shapes and trends. Additionally, these profiles are in qualitative agreement with results shown in [@davidovits2016]. As the compression proceeds, the energy in the higher modes decreases whereas the energy in the lower modes increases. The set of modes for which the energy decreases expands as the compression progresses, and eventually even the lower modes are dissipated, as shown by the profile at $L \approx 0.04$. [0.49]{} ![Energy spectra for the (a) solenoidal and (b) dilatational TKE, at different times (or domain lengths) throughout the compression. The spectra correspond to the $S^*_0 = 5.0$ case.[]{data-label="fig:epsd_low_tke_spectra"}](./figs/epsd_low_spec_tkes.pdf "fig:"){width="\textwidth"} [0.49]{} ![Energy spectra for the (a) solenoidal and (b) dilatational TKE, at different times (or domain lengths) throughout the compression. The spectra correspond to the $S^*_0 = 5.0$ case.[]{data-label="fig:epsd_low_tke_spectra"}](./figs/epsd_low_spec_tked.pdf "fig:"){width="\textwidth"} Internal energy {#sec:ie_results} --------------- ![Evolution of temperature as a function of the size of the domain. The initial temperature is denoted by $\widetilde{T}_0$.[]{data-label="fig:epsd_low_T"}](./figs/epsd_low_T.pdf){width="49.00000%"} [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 0.50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$.
The absolute value of $PD$ is shown for the log-scale plot.[]{data-label="fig:epsd_low_ie_budget_1"}](./figs/epsd_low_ie_bal_log_1.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_log\_1\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 0.50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot.[]{data-label="fig:epsd_low_ie_budget_1"}](./figs/epsd_low_ie_bal_real_1.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_linear\_1\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 5.0$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot. The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_2"}](./figs/epsd_low_ie_bal_log_2.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_log\_2\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 5.0$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot. The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_2"}](./figs/epsd_low_ie_bal_real_2.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_linear\_2\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. 
The absolute value of $PD$ is shown for the log-scale plot. The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_3"}](./figs/epsd_low_ie_bal_log_3.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_log\_3\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 50$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot. The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_3"}](./figs/epsd_low_ie_bal_real_3.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_linear\_3\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 500$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot. The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_4"}](./figs/epsd_low_ie_bal_log_4.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_log\_4\] [0.49]{} ![Mean internal energy budget in log scale on the left and linear scale on the right, for $S^* = 500$. The initial length of the domain is $L=1$, which decreases as time progresses. All terms have been normalized by $\rho_0 U_0^3 / L_0$. The absolute value of $PD$ is shown for the log-scale plot. 
The same legend as that of \[fig:epsd\_low\_ie\_budget\_1\] applies to the plots above.[]{data-label="fig:epsd_low_ie_budget_4"}](./figs/epsd_low_ie_bal_real_4.pdf "fig:"){width="\textwidth"} \[fig:epsd\_low\_ie\_bal\_linear\_4\] The temperature evolutions as a function of the domain length are shown in \[fig:epsd\_low\_T\] for all the compression speeds. These are also compared against the $1/L^2$ temperature scaling corresponding to an adiabatic isentropic process with $\gamma = 5/3$, as assumed in [@davidovits2016]. As the figure shows, the temperature evolutions are in very close agreement with the adiabatic scaling. This indicates that the terms in the mean internal energy equation neglected under the assumption of adiabatic compression, namely the solenoidal dissipation, dilatational dissipation, and pressure dilatation, do not contribute significantly to the temperature increase in the current simulations. The negligible effect of the dissipations and the pressure dilatation is confirmed by comparing the source terms of the mean internal energy, as is done in \[fig:epsd\_low\_ie\_budget\_1,fig:epsd\_low\_ie\_budget\_2,fig:epsd\_low\_ie\_budget\_3,fig:epsd\_low\_ie\_budget\_4\]. These figures show that, throughout the compression, the dominant source for the mean internal energy equation is the mechanical work, which takes the form of $MW =-3 \langle P \rangle \dot{L} / L$ for the given isotropic compression of \[eq:def\_tensor\]. For all compression speeds tested, the solenoidal dissipation, dilatational dissipation, and pressure dilatation are eclipsed by the mechanical work at all times during the compression. However, for the two fastest compression rates, the peak values of the dilatational dissipation and pressure dilatation are not achieved by the last available simulated instance in time.
Nonetheless, as shown in \[fig:epsd\_low\_tke\], by this last simulated instance in time the dilatational TKE has already surpassed its peak value and has dissipated by more than an order of magnitude, and it is thus unlikely that the dilatational dissipation and pressure dilatation will ever overtake the mechanical work. For the specific-heat ratio of $\gamma = 5/3$ and the assumption of an adiabatic compression, the mechanical work scales as $1/L^6$, which is shown as black dots in \[fig:epsd\_low\_ie\_budget\_1,fig:epsd\_low\_ie\_budget\_2,fig:epsd\_low\_ie\_budget\_3,fig:epsd\_low\_ie\_budget\_4\]. Since the mechanical work overpowers the other sources of mean internal energy, it is expected that the assumption of an adiabatic compression holds as well as it does in \[fig:epsd\_low\_ie\_budget\_1,fig:epsd\_low\_ie\_budget\_2,fig:epsd\_low\_ie\_budget\_3,fig:epsd\_low\_ie\_budget\_4\]. The dominance of the mechanical work can be further illustrated by considering the integrated values of the mean internal energy sources, shown in \[tb:energy\_integrated\_sources\]. The time-integrated contribution towards the increase of temperature due to mechanical work is at least three orders of magnitude larger than the second most significant time-integrated source term, namely, the solenoidal dissipation. An alternate metric for highlighting the dominance of the mechanical work is the comparison of the time-integrated contribution from the TKEs to the mean internal energy against the time-integrated contribution from the mean kinetic energy to the mean internal energy. The ratio of these two factors for the four cases $S^*_0 = 0.50, 5.0, 50, \text{ and } 500$ is 0.0005, 0.0006, 0.0008, and 0.002, respectively.
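The two power laws quoted above follow from elementary exponent bookkeeping for a 3-D isotropic, adiabatic compression with $\gamma = 5/3$: $T V^{\gamma-1}$ is constant with $V = L^3$, mass conservation gives $\rho \sim L^{-3}$, the ideal-gas law then gives $\langle P \rangle \sim L^{-5}$, and for a constant $\dot{L}$ the mechanical work $MW = -3\langle P \rangle \dot{L}/L$ scales as $1/L^6$. As a sketch:

```python
# Exponent bookkeeping for an adiabatic 3-D isotropic compression, gamma = 5/3.
# Each *_exp is the power p in "quantity ~ L^(-p)".
gamma = 5.0 / 3.0
T_exp = 3.0 * (gamma - 1.0)   # T V^(gamma-1) = const with V = L^3  ->  T   ~ L^-2
rho_exp = 3.0                 # mass conservation                   ->  rho ~ L^-3
P_exp = rho_exp + T_exp       # ideal gas P = rho R T               ->  P   ~ L^-5
MW_exp = P_exp + 1.0          # MW = -3 <P> Ldot / L, const Ldot    ->  MW  ~ L^-6
print(round(T_exp), round(P_exp), round(MW_exp))  # 2 5 6
```

The same bookkeeping with a different $\gamma$ or compression geometry would change these exponents, which is why the $1/L^2$ and $1/L^6$ references in the figures are specific to the monatomic, isotropic case simulated here.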
Given that, for the parameters used in these simulations, the dissipated turbulent kinetic energy does not significantly increase the temperature of the system above the adiabatic prediction, it is crucial to determine under which conditions the dissipated TKE would actually lead to meaningful increases in temperature. To do this, we make use of the relation $$\label{eq:cons_energy_ratio} \frac{d}{dt} \left ( \frac{\widetilde{U} + k}{\widetilde{U}^{(a)}} \right) = 0,$$ which was derived in \[sec:cons\_energy\_ratio\]. $\widetilde{U}^{(a)}$ is the mean internal energy of the system given the idealized adiabatic compression, and is thus given by $\widetilde{U}^{(a)} = \widetilde{U}_0 L^{-2}$ where $\widetilde{U}_0$ is the initial value of $\widetilde{U}$. Integrating from the initial time $t_0$ to a final time $t_f$, one obtains $$\label{eq:gain_factor_intermediate} \left . \frac{\widetilde{U} + k}{\widetilde{U}^{(a)}} \right |_{t_f} = \left . \frac{ \widetilde{U} + k}{\widetilde{U}^{(a)} } \right |_{t_0} = 1 + \frac{k_0}{\widetilde{U}_0} = 1 + \frac{5}{9} M_{u,0}^2 .$$ In the above we have made use of the definition of the fluctuating Mach number [@blaisdell1991] $$M_u = \frac{\sqrt{ \widetilde{ u''_i u''_i}}}{ c(\widetilde{T})},$$ whose initial value is denoted by $M_{u,0}$. We introduce $\widetilde{T}^{(a)} = \widetilde{T}_0 L^{-2}$ as the temperature corresponding to an adiabatic compression. If we define $t_f$ as the time by which all of the turbulent kinetic energy has dissipated, and $\widetilde{T}_f$ and $\widetilde{T}^{(a)}_f$ as the temperatures $\widetilde{T}$ and $\widetilde{T}^{(a)}$ at times $t > t_f$, respectively, then \[eq:gain\_factor\_intermediate\] can be expressed as $$\label{eq:gain_factor} \frac{\widetilde{T}_f}{\widetilde{T}^{(a)}_f} = 1 + \frac{5}{9} M_{u,0}^2 .$$ The above relation highlights a few notable aspects of the compression mechanism.
Given $\widetilde{T}_0$ and $k_0$, $M_{u,0}$ is known, which, along with values of $L$ smaller than those corresponding to the time $t_f$, can be used in \[eq:gain\_factor\] to obtain the temperature achieved after the TKE has been fully dissipated. It is therefore unnecessary to carry out expensive numerical simulations to predict the temperature of the system post TKE depletion. The second aspect to highlight is that the temperature ratio $\widetilde{T}_f / \widetilde{T}^{(a)}_f$ is independent of the compression speed. However, the relationship in \[eq:gain\_factor\] holds for times $t > t_f$, where $t_f$ is different for the various compression speeds. For the simulations described in this paper, the initial fluctuating Mach number immediately preceding the start of the compression is $M_{u,0} = 0.651$. Using \[eq:gain\_factor\], this gives $\widetilde{T}_f / \widetilde{T}^{(a)}_f = 1.235$. The table below lists this ratio computed from simulation data available at the last simulated instance in time, for the four compression speeds. As the table shows, there is strong agreement with the analytical value of 1.235. The slightly lower ratio for the fastest compression is most likely due to the fact that all of the TKE, especially the dilatational TKE, has not yet fully dissipated into heat. \[eq:gain\_factor\] can now be used to predict under which conditions the dissipated TKE would lead to meaningful increases in temperature. For subsonic initial fluctuating Mach numbers, the temperature post TKE depletion can be up to about 1.5 times larger than that obtained with an adiabatic compression. If supersonic Mach numbers are used, such as $M_{u,0} = 2$ and 5, then the temperature post TKE depletion would be about 3 and 15 times larger, respectively, than for an adiabatic compression.
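The gain factor of \[eq:gain\_factor\] is a one-line computation; the sketch below reproduces the numbers quoted in the text, with the coefficient written as $\gamma(\gamma-1)/2$, which reduces to $5/9$ for $\gamma = 5/3$:

```python
def gain_factor(M_u0, gamma=5.0 / 3.0):
    """Temperature ratio T_f / T_f^(a) once all TKE has dissipated,
    per eq. (gain_factor): 1 + gamma*(gamma-1)/2 * M_u0^2."""
    return 1.0 + 0.5 * gamma * (gamma - 1.0) * M_u0**2

# sweep the Mach numbers discussed in the text
for M in (0.4, 0.651, 2.0, 5.0, 17.0):
    print(f"M_u0 = {M:6.3f} -> T_f / T_f^(a) = {gain_factor(M):8.3f}")
```

Note that the ratio depends only on $M_{u,0}$, not on the compression speed, consistent with the near-identical entries in the table of computed ratios.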
For highly supersonic turbulence such as that encountered in the interstellar medium [@federrath2013; @konstandin2016], a Mach number of $M_{u,0} = 17$ would lead to final temperatures about 160 times higher than those predicted assuming an adiabatic scaling. As stated in [@davidovits2018], the hot spot of an inertial-confinement-fusion capsule can be characterized by a turbulent Mach number $M_t \approx 0.4$. Using this value in \[eq:gain\_factor\] leads to $\widetilde{T}_f/\widetilde{T}_f^{(a)} \approx 1.09$. This increase of temperature is minimal, and is eclipsed by the effect of the mechanical work. For example, if we assume that the sudden viscous dissipation of TKE occurs at $L = 0.1$, a small reduction of the domain size to $L = 0.0958$ would already allow the mechanical work to generate an equivalent increase in temperature. It is thus expected that only for flow fields with large initial Mach numbers would the self-consistent feedback mechanism lead to sudden dissipations with significant effects.

Ratio $\widetilde{T}_f / \widetilde{T}^{(a)}_f$ computed from simulation data at the last simulated instance in time:

|                                           | $S^*_0 = 0.50$ | $S^*_0 = 5.0$ | $S^*_0 = 50$ | $S^*_0 = 500$ |
|-------------------------------------------|----------------|---------------|--------------|---------------|
| $\widetilde{T}_f / \widetilde{T}^{(a)}_f$ | 1.232          | 1.232         | 1.232        | 1.230         |

Concluding remarks {#sec:conclusion} ================== A sudden viscous dissipation of plasma turbulence under compression was demonstrated in [@davidovits2016]. We expand on this previous work by accounting for the self-consistent feedback loop associated with this viscous mechanism. The feedback loop entails a transfer of energy from the turbulence towards the internal energy, and the subsequent increased temperatures and viscosities that in turn accelerate the original dissipation of TKE. Although previous efforts have reproduced the sudden dissipation of TKE, they do not capture the subsequent effect of the dissipated energy on the temperature, and the consequences thereof. This limitation is due to the use of the zero-Mach-limit assumption.
To capture the increase of internal energy resulting from the dissipated TKE, and thus account for the entire self-consistent feedback loop, direct numerical simulations have been carried out using a finite-Mach number formulation that solves transport equations for the density, fluctuating velocity, and total energy. The analysis of the self-consistent feedback loop was divided into two steps: the first focused on the evolution of the solenoidal and dilatational TKEs, and the second on the evolution of the mean internal energy as it absorbs the dissipated TKE. Results show that both the solenoidal and dilatational TKE exhibit the sudden viscous dissipation mechanism, with the dissipation of dilatational TKE slightly lagging that of solenoidal TKE. Moreover, large oscillations in the temporal evolution of dilatational TKE for slow compression rates are observed, which are correlated with the highly-oscillatory nature of the pressure dilatation. The pressure dilatation constitutes a two-way energy transfer between the dilatational TKE and the mean internal energy of the system. However, a detailed analysis of the dilatational TKE budget shows that the time-integrated effect of the pressure dilatation is to transfer energy from heat towards dilatational TKE, even for cases when the pressure dilatation transfers energy in both directions on short time scales. Thus, the dilatational dissipation needs to overcome both the dilatational production and pressure dilatation for the sudden viscous dissipation of dilatational TKE to take place, which explains the delayed dissipation of dilatational TKE with respect to solenoidal TKE. An analysis of the sources for the mean internal energy shows that mechanical work, which transforms energy from the mean flow to increase heat, dominates all other sources of mean internal energy for the turbulent Mach numbers chosen in this study. 
For all instances in time, the mechanical work term is larger, often by multiple orders of magnitude, than the solenoidal and dilatational dissipation and the pressure dilatation. As a result, the contribution of the dissipated TKE towards the increase of temperature is minimal, and the temperature evolution closely follows an adiabatic scaling. This validates previous efforts [@davidovits2016; @davidovits2016b; @davidovits2017; @viciconte2018] that relied on a fixed adiabatic scaling for the temperature evolution. An analytical expression was also derived for the ratio of the temperature post TKE depletion to the temperature obtained at the same instance of the compression assuming an adiabatic scaling. This ratio depends only on the initial fluctuating Mach number, indicating that the rate of compression does not affect the magnitude of the temperature post TKE depletion. The derived analytical expression also shows that for subsonic initial fluctuating Mach numbers, the true temperature of the system is not substantially larger than that obtained if the TKE contribution is neglected. To provide a point of reference, it was shown that for an adiabatic compression where only the mechanical work is active, reducing the domain from $L = 0.1$ to $L = 0.0958$ would have an effect on temperature equivalent to that of the suddenly dissipated TKE. This indicates that the potential of the sudden viscous dissipation mechanism to significantly enhance the heating of the plasma by dissipating the inherent turbulence could be limited. Nonetheless, this mechanism could serve as an effective tool to prevent the occurrence of turbulent fluctuations, which are responsible for detrimental mixing in inertial-confinement-fusion capsules.
It is crucial to highlight that the finite-Mach-number framework chosen here, although more general than the zero-Mach-number formalism, is still missing physics relevant to inertial-confinement fusion such as non-ideal equations of state, radiation transport, multiple species, plasma viscosity models, separate ion and electron temperatures, and alpha heating. Thus, these factors need to be explored to provide a definitive assessment of the ability of the sudden viscous dissipation mechanism to improve the performance of inertial confinement fusion. Acknowledgments =============== The authors wish to thank Dr. O. Schilling for helpful comments on drafts of the manuscript. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Derivation of the finite-Mach-number Navier-Stokes equations for isotropic mean compression {#sec:comp_fluc_NS_mean_comp} =========================================================================================== The derivation of the governing equations used for the computational simulations in this study is detailed below. This derivation is divided into five distinct steps, each described in the five subsections below. Compressible Navier-Stokes equations ------------------------------------ The starting point is the set of Navier-Stokes equations for a compressible fluid.
Thus, the evolution of the density $\rho =\rho(t,\bold{x})$, velocity $U_i =U_i(t,\bold{x})$ and total energy $E=E(t,\bold{x})$ is governed by $$\label{eq:rho} \frac{\partial \rho}{\partial t} + \frac{ \partial \rho U_i}{ \partial x_i} = 0,$$ $$\label{eq:rhou} \frac{\partial \rho U_i}{\partial t}+\frac{\partial \rho U_i U_j}{\partial x_j} = \frac{\partial \sigma_{ij}}{\partial x_j} ,$$ $$\label{eq:rhoE} \frac{\partial \rho E}{\partial t} + \frac{\partial \rho E U_j}{\partial x_j} = \frac{\partial U_i \sigma_{ij}}{\partial x_j} + \frac{\partial}{\partial x_j} \left ( \kappa \frac{\partial T}{\partial x_j} \right ).$$ Closure of the above is achieved with $$\label{eq:sigma} \sigma_{ij} = -P \delta_{ij} + 2\mu \left[ \frac{1}{2} \left( \frac{\partial U_i}{\partial x_j} + \frac{\partial U_j}{\partial x_i} \right ) - \frac{1}{3} \frac{\partial U_k}{\partial x_k} \delta_{ij} \right] ,$$ $$E = U + K ,$$ $$U = C_v T \qquad K = \frac{1}{2} U_i U_i ,$$ $$P = \rho R T ,$$ $$\kappa = \frac{\mu C_p}{Pr} ,$$ $$\mu = \mu_0 \left(\frac{T}{T_{0}}\right)^{n}.$$ $P=P(t,\bold{x})$ is the pressure, $T=T(t,\bold{x})$ the temperature, $U=U(t,\bold{x})$ the internal energy, $K=K(t,\bold{x})$ the kinetic energy, $\mu = \mu(t,\bold{x})$ the dynamic viscosity, and $\kappa =\kappa(t,\bold{x})$ the thermal conductivity. $C_v$, $C_p$, $R$, and $Pr$ are the specific heat at constant volume, the specific heat at constant pressure, the ideal gas constant, and the Prandtl number, respectively. For the power law of viscosity, $\mu_0$ and $T_0$ represent reference viscosity and temperature values, and $n$ is the power-law exponent. Homogeneous turbulence ---------------------- We summarize here and in the following subsection the derivations carried out by [@blaisdell1991] to obtain the governing equations for homogeneous compressible turbulence. 
The quantities $\langle \rho \rangle$ and $\langle P \rangle$ are defined as Reynolds-averaged density and pressure, respectively, and $\widetilde{U}_i$ as the Favre-averaged velocity. [@blaisdell1991] showed that for turbulence to remain homogeneous, necessary and sufficient conditions are that $\langle \rho \rangle $ and $\langle P \rangle$ depend on $t$ but not $\bold{x}$, and that $\tilde{U}_i$ be given by $$\tilde{U}_i = G_{ij} x_j,$$ where $G_{ij} = \frac{\partial \tilde{U}_i }{\partial x_j}$ also depends only on $t$ and not $\bold{x}$. Given the above assumptions, averaging of the momentum equation shows that the evolution of $G_{ij}$ is dictated by $$\frac{dG_{ij}}{dt} + G_{kj}G_{ik} = 0.$$ Moreover, using the assumptions above and plugging in the decomposition $U_i = \widetilde{U}_i + u''_i$ in \[eq:rho,eq:rhou,eq:rhoE,eq:sigma\], [@blaisdell1991] derived the governing equations in terms of the fluctuating velocity. These are $$\label{eq:rho_f_G} \frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial x_i} G_{ij} x_j + \frac{\partial \rho u''_i}{\partial x_i} = f^{(\rho)},$$ $$\label{eq:rhou_f_G} \frac{\partial \rho u''_i}{\partial t} + \frac{\partial \rho u''_i}{\partial x_j} G_{jk} x_k + \frac{\partial \rho u''_i u''_j}{\partial x_j} = \frac{\partial \sigma_{ij}}{\partial x_j} + f^{(u)}_i ,$$ $$\label{eq:rhoE_f_G} \frac{\partial \rho E_t}{\partial t} + \frac{\partial \rho E_t}{\partial x_i} G_{ik} x_k + \frac{\partial \rho E_t u''_i}{\partial x_i} = \frac{\partial u''_i \sigma_{ij}}{\partial x_j} + \frac{\partial}{\partial x_j} \left ( \kappa \frac{ \partial T}{\partial x_j} \right ) +f^{(e)}.$$ Closure of the above is achieved with $$\sigma_{ij} = -P \delta_{ij} + 2\mu \left[ \frac{1}{2} \left( \frac{\partial u''_i}{\partial x_j} + \frac{\partial u''_j}{\partial x_i} \right ) - \frac{1}{3} \frac{\partial u''_k}{\partial x_k} \delta_{ij} \right] + 2\mu \left[ \frac{1}{2} \left ( G_{ij} + G_{ji} \right ) - \frac{1}{3} G_{ii} \delta_{ij} \right] 
,$$ $$E_t = U + K_t ,$$ $$U = C_v T \qquad K_t = \frac{1}{2} u''_i u''_i ,$$ $$P = \rho R T ,$$ $$\kappa = \frac{\mu C_p}{Pr} ,$$ $$\mu = \mu_0 \left(\frac{T}{T_{0}}\right)^n ,$$ $$\label{eq:frho} f^{(\rho)} = -\rho G_{ii} ,$$ $$\label{eq:frhou} f^{(u)}_i = -\rho u''_j G_{ij} - \rho u''_i G_{jj} ,$$ $$\label{eq:frhoe} f^{(e)} = - \rho E_t G_{ii} - \rho u''_i u''_j G_{ij} + G_{ij} \sigma_{ij} .$$ Rogallo transformation ---------------------- As is typically done for simulations of homogeneous turbulence (see for example [@rogallo1981; @blaisdell1991]) one can reformulate the equations using a deforming reference frame—referred to here as the Rogallo reference frame—to eliminate those terms in \[eq:rho\_f\_G,eq:rhou\_f\_G,eq:rhoE\_f\_G\] that have an explicit dependence on position. The variables in the Rogallo reference frame are denoted as ${\mathring{\rho}}= {\mathring{\rho}}(t,{\mathring{\bold{x}}})$, ${\mathring{u}''}= {\mathring{u}''}(t,{\mathring{\bold{x}}})$, ${\mathring{P}}= {\mathring{P}}(t,{\mathring{\bold{x}}})$, ${\mathring{T}}= {\mathring{T}}(t,{\mathring{\bold{x}}})$. The relationship between the variables in the original reference frame and the Rogallo reference frame is as $$\begin{aligned} \rho &= {\mathring{\rho}}(t,\bold{f}) \nonumber \\ u''_i & = {\mathring{u}''}_i(t,\bold{f}) \nonumber \\ P & = {\mathring{P}}(t,\bold{f}) \nonumber \\ T & = {\mathring{T}}(t, \bold{f}),\end{aligned}$$ where $f_i = A_{ij} x_j$. 
$A_{ij}$ is referred to as the coordinate-transformation tensor; it depends on $t$ only and is defined so as to satisfy $$\frac{dA_{ij}}{dt} + A_{ik} G_{kj} = 0.$$ Using this transformation, the governing equations in the Rogallo reference frame are $$\label{eq:rho_A} \frac{\partial {\mathring{\rho}}}{\partial t} + \frac{\partial {\mathring{\rho}}{\mathring{u}''}_i}{\partial {\mathring{x}}_j}A_{ji} = \mathring{f}^{(\rho)} ,$$ $$\label{eq:rhou_A} \frac{\partial {\mathring{\rho}}{\mathring{u}''}_i}{\partial t} + \frac{\partial {\mathring{\rho}}{\mathring{u}''}_i {\mathring{u}''}_j}{\partial {\mathring{x}}_k}A_{kj} = \frac{\partial \mathring{\sigma}_{ij}}{\partial {\mathring{x}}_k} A_{kj} + \mathring{f}^{(u)}_i ,$$ $$\label{eq:rhoE_A} \frac{\partial {\mathring{\rho}}\mathring{E}_t}{\partial t} + \frac{\partial {\mathring{\rho}}\mathring{E}_t {\mathring{u}''}_i}{\partial {\mathring{x}}_j}A_{ji} = \frac{\partial {\mathring{u}''}_i \mathring{\sigma}_{ij}}{\partial {\mathring{x}}_k}A_{kj} + \frac{\partial}{\partial {\mathring{x}}_l} \left ( \mathring{\kappa} \frac{ \partial {\mathring{T}}}{\partial {\mathring{x}}_k} \right )A_{kj}A_{lj} + \mathring{f}^{(e)} .$$ Closure of the above is achieved with $$\label{eq:sigma_A} \mathring{\sigma}_{ij} = -{\mathring{P}}\delta_{ij} + 2 \mathring{\mu} \left[ \frac{1}{2} \left( \frac{\partial {\mathring{u}''}_i}{\partial {\mathring{x}}_n}A_{nj} + \frac{\partial {\mathring{u}''}_j}{\partial {\mathring{x}}_n}A_{ni} \right ) - \frac{1}{3} \frac{\partial {\mathring{u}''}_k}{\partial {\mathring{x}}_n}A_{nk} \delta_{ij} \right] + 2 \mathring{\mu} \left[ \frac{1}{2} \left ( G_{ij} + G_{ji} \right ) - \frac{1}{3} G_{ii} \delta_{ij} \right] ,$$ $$\mathring{E}_t = \mathring{U} + \mathring{K}_t ,$$ $$\mathring{U} = C_v {\mathring{T}}\qquad \mathring{K}_t = \frac{1}{2} {\mathring{u}''}_i {\mathring{u}''}_i ,$$ $${\mathring{P}}= {\mathring{\rho}}R {\mathring{T}},$$ $$\mathring{\kappa} = \frac{ \mathring{\mu} C_p}{Pr} ,$$ $$\mathring{\mu} = \mu_0 
\left( \frac{{\mathring{T}}}{T_0}\right)^n ,$$ $$\mathring{f}^{(\rho)} = -{\mathring{\rho}}G_{ii} ,$$ $$\mathring{f}^{(u)}_i = -{\mathring{\rho}}{\mathring{u}''}_j G_{ij} - {\mathring{\rho}}{\mathring{u}''}_i G_{jj} ,$$ $$\mathring{f}^{(e)} = - {\mathring{\rho}}\mathring{E}_t G_{ii} - {\mathring{\rho}}{\mathring{u}''}_i {\mathring{u}''}_j G_{ij} + G_{ij} \mathring{\sigma}_{ij} .$$ Isotropic compression --------------------- The mean flow deformation for isotropic compression is given in [@wu1985; @blaisdell1991; @davidovits2016], and can be expressed as $$G_{ij} = \frac{\dot{L}}{L} \delta_{ij} ,$$ where $\dot{L}$ is constant and thus $L = 1 + \dot{L} t$. The corresponding coordinate transformation tensor is $$A_{ij} = \frac{1}{L} \delta_{ij}.$$ Thus, using the above in \[eq:rho\_A,eq:rhou\_A,eq:rhoE\_A\], we obtain $$\frac{\partial {\mathring{\rho}}}{\partial t} + \frac{\partial {\mathring{\rho}}{\mathring{u}''}_i}{\partial {\mathring{x}}_i}\frac{1}{L} = \mathring{f}^{(\rho)} ,$$ $$\frac{\partial {\mathring{\rho}}{\mathring{u}''}_i}{\partial t} + \frac{\partial {\mathring{\rho}}{\mathring{u}''}_i {\mathring{u}''}_j}{\partial {\mathring{x}}_j} \frac{1}{L} = \frac{\partial \mathring{\sigma}_{ij}}{\partial {\mathring{x}}_j} \frac{1}{L} + \mathring{f}_i^{(u)} ,$$ $$\frac{\partial {\mathring{\rho}}\mathring{E}_t}{\partial t} + \frac{\partial {\mathring{\rho}}\mathring{E}_t {\mathring{u}''}_i}{\partial {\mathring{x}}_i} \frac{1}{L} = \frac{\partial {\mathring{u}''}_i \mathring{\sigma}_{ij}}{\partial {\mathring{x}}_j} \frac{1}{L} + \frac{\partial}{\partial {\mathring{x}}_j} \left ( \mathring{\kappa} \frac{ \partial {\mathring{T}}}{\partial {\mathring{x}}_j} \right ) \frac{1}{L^2} + \mathring{f}^{(e)}.$$ Closure of the above is achieved with $$\mathring{\sigma}_{ij} = -{\mathring{P}}\delta_{ij} + 2 \mathring{\mu} \left[ \frac{1}{2} \left( \frac{\partial {\mathring{u}''}_i}{\partial {\mathring{x}}_j} \frac{1}{L} + \frac{\partial {\mathring{u}''}_j}{\partial 
{\mathring{x}}_i}\frac{1}{L} \right ) - \frac{1}{3} \frac{\partial {\mathring{u}''}_k}{\partial {\mathring{x}}_k}\frac{1}{L} \delta_{ij} \right] ,$$ $$\mathring{E}_t = \mathring{U} + \mathring{K}_t ,$$ $$\mathring{U} = C_v {\mathring{T}}\qquad \mathring{K}_t = \frac{1}{2} {\mathring{u}''}_i {\mathring{u}''}_i ,$$ $${\mathring{P}}= {\mathring{\rho}}R {\mathring{T}},$$ $$\mathring{\kappa} = \frac{ \mathring{\mu} C_p}{Pr} ,$$ $$\mathring{\mu} = \mu_0 \left( \frac{{\mathring{T}}}{T_0}\right)^n ,$$ $$\mathring{f}^{(\rho)} = - 3 {\mathring{\rho}}\frac{\dot{L}}{L} ,$$ $$\mathring{f}^{(u)}_i = -4 {\mathring{\rho}}{\mathring{u}''}_i \frac{\dot{L}}{L} ,$$ $$\mathring{f}^{(e)} = - 3 {\mathring{\rho}}\mathring{E}_t \frac{ \dot{L} }{L} - {\mathring{\rho}}{\mathring{u}''}_i {\mathring{u}''}_i \frac{\dot{L}}{L} - 3 {\mathring{P}}\frac{\dot{L}}{L}.$$ Re-scaling ---------- An additional transformation can be performed so that, as the simulation advances in time, division by very small values of $L$ is avoided. The analogue of this re-scaling for the zero-Mach limit is detailed in [@davidovits2016] and in the Appendix of [@davidovits2016b]. The new re-scaled flow variables are $\hat{\rho} = \hat{\rho}(\hat{t},{\mathring{\bold{x}}})$, $\hat{u}''_i = \hat{u}''_i(\hat{t},{\mathring{\bold{x}}})$, $\hat{P} = \hat{P}(\hat{t},{\mathring{\bold{x}}})$, and $\hat{T} = \hat{T}(\hat{t},{\mathring{\bold{x}}})$. Their relation to the original variables is $$\begin{aligned} {\mathring{\rho}}&= \hat{\rho}(g,{\mathring{\bold{x}}}) L^{-1} \nonumber \\ {\mathring{u}''}_i &= \hat{u}''_i(g,{\mathring{\bold{x}}}) \nonumber \\ {\mathring{P}}&= \hat{P}(g,{\mathring{\bold{x}}}) L^{-1} \nonumber \\ {\mathring{T}}&= \hat{T}(g,{\mathring{\bold{x}}}),\end{aligned}$$ where $g = g(t)$ is defined by $\frac{dg}{dt} = L^{-1}$. 
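For the linear compression law $L = 1 + \dot{L}t$ introduced above, $g$ is available in closed form, $g = \ln(L)/\dot{L}$; with $\dot{L} = -2V_b$ this is the $-\ln(L)/(2V_b)$ expression used below. The following minimal sympy check (not part of the original derivation) verifies that this candidate satisfies $dg/dt = L^{-1}$ with $g(0)=0$:

```python
import sympy as sp

t, Ldot = sp.symbols('t Ldot', nonzero=True)
L = 1 + Ldot * t                 # linear compression law, constant L-dot
g = sp.log(L) / Ldot             # candidate closed-form solution
assert sp.simplify(sp.diff(g, t) - 1 / L) == 0   # satisfies dg/dt = 1/L
assert g.subs(t, 0) == 0                         # initial condition g(0) = 0
```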
We define $\hat{\mu} = \hat{\mu}(\hat{t},{\mathring{\bold{x}}})$ and $\hat{\kappa} = \hat{\kappa}(\hat{t},{\mathring{\bold{x}}})$ as $$\hat{\mu} = \mu_0 \left ( \frac{\hat{T}}{T_0} \right)^n$$ and $$\hat{\kappa} = \frac{\hat{\mu} C_p}{Pr}$$ so that $\mathring{\mu} = \hat{\mu}(g,{\mathring{\bold{x}}})$ and $\mathring{\kappa} = \hat{\kappa}(g,{\mathring{\bold{x}}})$. The re-scaled stress tensor $\hat{\sigma} = \hat{\sigma}(\hat{t},{\mathring{\bold{x}}})$ is defined as $$\hat{\sigma}_{ij} = -\hat{P} \delta_{ij} + 2\hat{\mu} \left [ \frac{1}{2} \left ( \frac{\partial \hat{u}''_i}{\partial {\mathring{x}}_j} + \frac{\partial \hat{u}''_j}{\partial {\mathring{x}}_i} \right ) - \frac{1}{3} \frac{\partial \hat{u}''_k}{\partial {\mathring{x}}_k} \delta_{ij} \right ] ,$$ so that $\mathring{\sigma}_{ij} = \hat{\sigma}_{ij}(g,{\mathring{x}}) L^{-1}$. Using the re-scaled variables above, the continuity equation transforms as follows $$\left (\frac{\partial \hat{\rho}}{\partial \hat{t}} \right)_{\hat{t} = g} \frac{1}{L^2} - ( \hat{\rho} )_{\hat{t} = g} \frac{ \dot{L}}{L^2} + \left ( \frac{\partial \hat{\rho} \hat{u}''_i}{\partial {\mathring{x}}_i} \right)_{\hat{t} = g} \frac{1}{L^2} = -3 ( \hat{\rho} )_{\hat{t} = g} \frac{\dot{L}}{L^2}.$$ The conservation of momentum equation becomes $$\left (\frac{\partial \hat{\rho} \hat{u}''_i}{\partial \hat{t}} \right )_{\hat{t} = g}\frac{1}{L^2} - ( \hat{\rho} \hat{u}''_i)_{\hat{t} = g} \frac{\dot{L}}{L^2} + \left ( \frac{\partial \hat{\rho} \hat{u}''_i \hat{u}''_j}{\partial {\mathring{x}}_j} \right)_{\hat{t} = g} \frac{1}{L^2} = \left ( \frac{\partial \hat{\sigma}_{ij}}{\partial {\mathring{x}}_j} \right)_{\hat{t} = g} \frac{1}{L^2} -4 ( \hat{\rho} \hat{u}''_i)_{\hat{t} = g} \frac{\dot{L}}{L^2}.$$ The total energy $\hat{E}_t = \hat{E}_t(\hat{t},{\mathring{\bold{x}}})$ is defined as $\hat{E}_t = \hat{U} + \hat{K}_t$, where $\hat{U} = C_v \hat{T}$ and $\hat{K}_t = \frac{1}{2} \hat{u}''_i \hat{u}''_i$, so that $\mathring{E}_t = 
\hat{E}_t(g,{\mathring{\bold{x}}})$. The energy equation can thus be expressed as $$\begin{gathered} \left ( \frac{ \partial \hat{\rho} \hat{E}_t }{\partial \hat{t}} \right)_{\hat{t} = g} \frac{1}{L^2} - ( \hat{\rho} \hat{E}_t )_{\hat{t} = g} \frac{\dot{L}}{L^2} + \left ( \frac{\partial \hat{\rho} \hat{E}_t \hat{u}''_i}{\partial {\mathring{x}}_i} \right)_{\hat{t} = g} \frac{1}{L^2} = \\ \left ( \frac{\partial \hat{u}''_i \hat{\sigma}_{ij} }{\partial {\mathring{x}}_j} \right )_{\hat{t} = g} \frac{1}{L^2} + \left [ \frac{\partial}{\partial {\mathring{x}}_j} \left ( \hat{\kappa} \frac{\partial \hat{T}}{\partial {\mathring{x}}_j} \right ) \right ]_{\hat{t} = g} \frac{1}{L^2} - 3 (\hat{\rho} \hat{E}_t )_{\hat{t} = g} \frac{\dot{L}}{L^2} - (\hat{\rho} \hat{u}''_i \hat{u}''_i)_{\hat{t} = g} \frac{\dot{L}}{L^2} - 3 (\hat{P})_{\hat{t} = g} \frac{\dot{L}}{L^2}.\end{gathered}$$ The $1/L^2$ factors can now be eliminated. Evaluating the equations at $t = g^{-1}(\hat{t})$ we finally obtain $$\frac{\partial \hat{\rho}}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{u}''_i}{\partial {\mathring{x}}_i} = -2 \hat{\rho} \left ( \dot{L} \right )_{t = g^{-1}(\hat{t})} ,$$ $$\frac{\partial \hat{\rho} \hat{u}''_i}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{u}''_i \hat{u}''_j}{\partial {\mathring{x}}_j} = \frac{\partial \hat{\sigma}_{ij}}{\partial {\mathring{x}}_j} - 3 \hat{\rho} \hat{u}''_i \left ( \dot{L} \right )_{t = g^{-1}(\hat{t})} ,$$ $$\frac{\partial \hat{\rho} \hat{E}_t}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{E}_t \hat{u}''_i}{\partial {\mathring{x}}_i} = \frac{\partial \hat{u}''_i \hat{\sigma}_{ij}}{\partial {\mathring{x}}_j} + \frac{\partial}{\partial {\mathring{x}}_j} \left ( \hat{\kappa} \frac{ \partial \hat{T}}{\partial {\mathring{x}}_j} \right ) - \left ( 2 \hat{\rho} \hat{E}_t + \hat{\rho} \hat{u}''_i \hat{u}''_i + 3 \hat{P}\right) \left ( \dot{L} \right )_{t = g^{-1}(\hat{t})} .$$ To summarize, we write the governing equations as $$\label{eq:rho_modframe} \frac{\partial 
\hat{\rho}}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{u}''_i}{\partial {\mathring{x}}_i} = \hat{f}^{(\rho)} ,$$ $$\label{eq:rhou_modframe} \frac{\partial \hat{\rho} \hat{u}''_i}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{u}''_i \hat{u}''_j}{\partial {\mathring{x}}_j} = \frac{\partial \hat{\sigma}_{ij}}{\partial {\mathring{x}}_j} + \hat{f}_i^{(u)} ,$$ $$\label{eq:rhoE_modframe} \frac{\partial \hat{\rho} \hat{E}_t}{\partial \hat{t}} + \frac{\partial \hat{\rho} \hat{E}_t \hat{u}''_i}{\partial {\mathring{x}}_i} = \frac{\partial \hat{u}''_i \hat{\sigma}_{ij}}{\partial {\mathring{x}}_j} + \frac{\partial}{\partial {\mathring{x}}_j} \left ( \hat{\kappa} \frac{ \partial \hat{T}}{\partial {\mathring{x}}_j} \right ) + \hat{f}^{(e)} .$$ Closure of the above is achieved with $$\label{eq:sigma_modframe} \hat{\sigma}_{ij} = -\hat{P} \delta_{ij} + 2\hat{\mu} \left [ \frac{1}{2} \left ( \frac{\partial \hat{u}''_i}{\partial {\mathring{x}}_j} + \frac{\partial \hat{u}''_j}{\partial {\mathring{x}}_i} \right ) - \frac{1}{3} \frac{\partial \hat{u}''_k}{\partial {\mathring{x}}_k} \delta_{ij} \right ] ,$$ $$\hat{E}_t = \hat{U}+ \hat{K}_t ,$$ $$\hat{U} = C_v \hat{T} \qquad \hat{K}_t = \frac{1}{2} \hat{u}''_i \hat{u}''_i ,$$ $$\hat{P} = \hat{\rho} R \hat{T} ,$$ $$\hat{\kappa} = \frac{ \hat{\mu} C_p}{Pr} ,$$ $$\hat{\mu} = \mu_0 \left(\frac{ \hat{T}}{T_0}\right)^{n} ,$$ $$\label{eq:frho_modframe} \hat{f}^{(\rho)} = -2 \dot{L} \hat{\rho} ,$$ $$\label{eq:frhou_modframe} \hat{f}^{(u)}_i = -3 \dot{L} \hat{\rho} \hat{u}''_i ,$$ $$\label{eq:fe_modframe} \hat{f}^{(e)} = - \left [2 \hat{\rho} \hat{E}_t + \hat{\rho} \hat{u}''_i \hat{u}''_i + 3 \hat{P} \right ] \dot{L} .$$ The last issue to be addressed is the time $\hat{t}$ that corresponds to $L=0$. Solving $\frac{dg}{dt} = L^{-1}$ leads to $$g = -\frac{1}{2V_b} \ln( L).$$ Since we evaluated the equations at time $t = g^{-1}(\hat{t})$, we have $$\hat{t} = -\frac{1}{2V_b} \ln( L).$$ Thus, $L=0$ corresponds to $\hat{t} \to \infty$. 
However, the simulation is not expected to proceed to infinity; rather, the viscous instability is expected to set in before this limit is reached. Derivation of evolution equations for solenoidal and dilatational TKE {#sec:sol_dil_evolution} ===================================================================== To derive the evolution equations for the solenoidal and dilatational TKEs given homogeneous turbulence with some applied mean-flow deformation, we follow the procedure of [@kida1990]. Thus, we introduce the variable $v_i = \sqrt{\rho} u''_i$. The derivation of the evolution equation for $v_i$ begins with $$\begin{gathered} \frac{\partial v_i}{\partial t} = \frac{\partial}{\partial t} \left ( \frac{\rho u''_i}{\sqrt{\rho}} \right) = \frac{1}{\sqrt{\rho}} \left ( \frac{\partial \rho u''_i}{\partial t} - \frac{u''_i}{2} \frac{\partial \rho}{\partial t} \right ) = \\ \frac{1}{\sqrt{\rho}} \left ( -\frac{\partial \rho u''_i}{\partial x_j} G_{jk}x_k - \frac{\partial \rho u''_i u''_j}{\partial x_j} + \frac{\partial \sigma_{ij}}{\partial x_j} + f_i^{(u)} + \frac{u''_i}{2} \frac{\partial \rho}{\partial x_j} G_{jk}x_k + \frac{u''_i}{2} \frac{\partial \rho u''_j}{\partial x_j} - \frac{u''_i}{2} f^{(\rho)} \right ),\end{gathered}$$ where we have used \[eq:rho\_f\_G,eq:rhou\_f\_G\]. 
One can easily derive the relations $$-\frac{\partial \rho u''_i}{\partial x_j} G_{jk} x_k + \frac{u''_i}{2} \frac{\partial \rho}{\partial x_j} G_{jk} x_k = -\sqrt{\rho} \frac{\partial v_i}{\partial x_j} G_{jk} x_k,$$ and $$- \frac{\partial \rho u''_i u''_j}{\partial x_j} + \frac{u''_i}{2} \frac{\partial \rho u''_j}{\partial x_j} = -\sqrt{\rho} \left ( \frac{\partial v_i u''_j}{\partial x_j} - \frac{ v_i}{2} \frac{\partial u''_j}{\partial x_j} \right),$$ to thus obtain $$\frac{\partial v_i}{\partial t} = -\frac{\partial v_i}{\partial x_j} G_{jk} x_k - \frac{\partial v_i u''_j}{\partial x_j} + \frac{v_i}{2} \frac{ \partial u''_j}{\partial x_j} + \frac{1}{\sqrt{\rho}} \frac{\partial \sigma_{ij}}{\partial x_j} + \frac{1}{\sqrt{\rho}} f_i^{(u)} - \frac{1}{\sqrt{\rho}} \frac{u''_i}{2} f^{(\rho)}.$$ The evolution equation for $v_i$ can now be multiplied by $v_i^{(\alpha)} = \sqrt{\rho} u''^{(\alpha)}_i$, where $\alpha = s,d$. This is then followed by the application of the averaging operator. Given the Helmholtz decomposition of any two fluctuating variables $F''_i = F''^{(s)}_i + F''^{(d)}_i$ and $G''_i = G''^{(s)}_i + G''^{(d)}_i$, the orthogonality relation $\langle F''^{(\alpha)}_i G''^{(\beta)}_i \rangle = 0$ for $\alpha,\beta = s,d$ and $\alpha \neq \beta$ is used to obtain $$\begin{gathered} \frac{1}{2} \frac{d}{d t} {\left <}v_i^{(\alpha)} v_i^{(\alpha)} {\right >}= -\frac{1}{2} \frac{ \partial}{\partial x_j} {\left <}v_i^{(\alpha)} v_i^{(\alpha)} {\right >}G_{jk} x_k \\ - {\left <}\frac{\partial v_i u''_j}{\partial x_j} v_i^{(\alpha)} {\right >}+ {\left <}\frac{v_i}{2} \frac{ \partial u''_j}{\partial x_j} v_i^{(\alpha)} {\right >}+ {\left <}\frac{1}{\sqrt{\rho}} \frac{\partial \sigma_{ij}}{\partial x_j} v_i^{(\alpha)} {\right >}+ {\left <}\frac{1}{\sqrt{\rho}} f_i^{(u)} v_i^{(\alpha)} {\right >}- {\left <}\frac{1}{\sqrt{\rho}} \frac{u''_i}{2} f^{(\rho)} v_i^{(\alpha)} {\right >}.\end{gathered}$$ The first term on the right-hand side above vanishes due to 
homogeneity. Plugging in for $v_i$ and using \[eq:frho,eq:frhou\], the above becomes $$\begin{gathered} \frac{1}{2} \frac{d}{d t} {\left <}\rho u''^{(\alpha)}_i u''^{(\alpha)}_i {\right >}= - {\left <}\frac{\partial \sqrt{\rho} u''_i u''_j}{\partial x_j} \sqrt{\rho} u''^{(\alpha)}_i {\right >}+ {\left <}\frac{\rho u''_i u''^{(\alpha)}_i}{2} \frac{ \partial u''_j}{\partial x_j} {\right >}\\ + {\left <}\frac{\partial \sigma_{ij}}{\partial x_j} u''^{(\alpha)}_i {\right >}- {\left <}\rho u''^{(\alpha)}_i u''^{(\alpha)}_j {\right >}G_{ij} - \frac{1}{2} {\left <}\rho u''^{(\alpha)}_i u''^{(\alpha)}_i {\right >}G_{jj} .\end{gathered}$$ Using either $\alpha = s$ or $d$, and assuming $G_{ij}$ is isotropic, we obtain the evolution equations for the solenoidal and dilatational kinetic energies, which we express as follows $$\begin{aligned} \frac{d \langle \rho \rangle k^{(s)}}{d t} +{\left <}\rho {\right >}k^{(s)} G_{jj} &= IA^{(s)} +P^{(s)} - \langle \rho \rangle \epsilon^{(s)}, \\ \frac{d \langle \rho \rangle k^{(d)}}{d t} + {\left <}\rho {\right >}k^{(d)} G_{jj} &= IA^{(d)} +P^{(d)} - \langle \rho \rangle \epsilon^{(d)} + PD.\end{aligned}$$ In the above, $k^{(s)}$, $IA^{(s)}$, $P^{(s)}$, and $\epsilon^{(s)}$ are the solenoidal components of the turbulent kinetic energy, intermode advection, production, and dissipation, respectively. $k^{(d)}$, $IA^{(d)}$, $P^{(d)}$, and $\epsilon^{(d)}$ are the dilatational components of the turbulent kinetic energy, intermode advection, production, and dissipation, respectively. $PD$ is the pressure dilatation. 
These quantities are defined as $$\begin{aligned} k^{(\alpha)} &= \frac{1}{2} \widetilde{u''^{(\alpha)}_i u''^{(\alpha)}_i} ,\\ IA^{(\alpha)} & = - \left < \frac{\partial \sqrt{\rho} u''_i u''_j}{\partial x_j} \sqrt{\rho} u''^{(\alpha)}_i \right > + \left < \frac{\rho u''_i u''^{(\alpha)}_i}{2} d \right > ,\\ P^{(\alpha)} & = - {\left <}\rho u''^{(\alpha)}_i u''^{(\alpha)}_j {\right >}G_{ij} ,\\ \langle \rho \rangle \epsilon^{(s)} &= \left < \mu w_i w_i \right > ,\\ \langle \rho \rangle \epsilon^{(d)} &= \frac{4}{3} \left < \mu d^2 \right > ,\\ PD &= \left < P d \right > .\end{aligned}$$ We note that $$IA^{(s)} + IA^{(d)} = - \frac{\partial}{\partial x_j} \left ( \frac{1}{2} \left < \rho u''_i u''_i u''_j \right > \right ) = 0 ,$$ and thus, the advection terms represent an intermode transfer of energy between the solenoidal and dilatational components. Using the Favre-averaged conservation of mass equation, we finally express the evolution equation for the turbulent kinetic energies as $$\begin{aligned} {\left <}\rho {\right >}\frac{d k^{(s)}}{d t} &= IA^{(s)} +P^{(s)} - \langle \rho \rangle \epsilon^{(s)} ,\\ {\left <}\rho {\right >}\frac{d k^{(d)}}{d t} &= IA^{(d)} +P^{(d)} - \langle \rho \rangle \epsilon^{(d)} + PD.\end{aligned}$$ Proof of time invariance for the energy ratio $\left ( \widetilde{U} + k \right )/ \widetilde{U}^{(a)}$ {#sec:cons_energy_ratio} ======================================================================================================= The chain rule applied to the time derivative of the energy ratio gives $$\label{eq:ratio_1} \frac{d}{dt} \left ( \frac{ \widetilde{U} + k }{ \widetilde{U}^{(a)}} \right ) = \frac{d}{dt} \left ( \widetilde{U} + k \right ) \frac{1}{\widetilde{U}^{(a)}} + \left ( \widetilde{U} + k \right ) \frac{d}{dt} \left ( \frac{1}{\widetilde{U}^{(a)}} \right ).$$ Given the definition of the adiabatic internal energy $\widetilde{U}^{(a)} = 
\widetilde{U}_0 L^{-2}$, we have $$\label{eq:ratio_2} \frac{d}{dt} \left ( \frac{1}{\widetilde{U}^{(a)}} \right ) = \frac{2 L \dot{L}}{\widetilde{U}_0} .$$ Using \[eq:ks\_evolution,eq:kd\_evolution,eq:ie\_evolution\], one obtains $$\frac{d}{dt} \left ( \widetilde{U} + k \right ) = \frac{MW + P}{{\left <}\rho {\right >}} ,$$ where $P$ is the total production $P^{(s)} + P^{(d)}$. Given the deformation tensor $G_{ij}$ used for isotropic compressions, $MW = -3 {\left <}P {\right >}\dot{L}/L$ and $P = -2 {\left <}\rho {\right >}k \dot{L} / L$. Using the equation of state ${\left <}P {\right >}= {\left <}\rho {\right >}R \widetilde{T}$, the definition of the internal energy $\widetilde{U} = C_v \widetilde{T}$, and the specific heat ratio $\gamma = 5/3$ (so that $R = (\gamma - 1) C_v$ and $3 R \widetilde{T} = 2 \widetilde{U}$), we have $$\label{eq:ratio_3} \frac{d}{dt} \left ( \widetilde{U} + k \right ) = -2 \widetilde{U} \frac{\dot{L} }{L} - 2 k \frac{\dot{L}}{L} .$$ Thus, using \[eq:ratio\_2,eq:ratio\_3\] in \[eq:ratio\_1\], we can show that $$\frac{d}{dt} \left ( \frac{ \widetilde{U} + k }{ \widetilde{U}^{(a)}} \right ) = -2 \widetilde{U} \frac{\dot{L} L}{\widetilde{U}_0} - 2k \frac{\dot{L} L}{\widetilde{U}_0} + 2 \widetilde{U} \frac{ L \dot{L}}{\widetilde{U}_0} + 2 k \frac{ L \dot{L}}{\widetilde{U}_0} = 0 .$$
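The invariance just proved can also be checked symbolically. The following minimal sympy sketch (not part of the original derivation; $\dot{L}>0$ is taken for definiteness, the sign being immaterial to the cancellation) integrates $d(\widetilde{U}+k)/dt = -2(\widetilde{U}+k)\dot{L}/L$ and confirms that the ratio to $\widetilde{U}^{(a)} = \widetilde{U}_0 L^{-2}$ is constant in time:

```python
import sympy as sp

t = sp.symbols('t')
U0, k0, Ldot = sp.symbols('U0 k0 Ldot', positive=True)
L = 1 + Ldot * t                     # linear compression law
E = sp.Function('E')                 # E = U-tilde + k

# d(U + k)/dt = -2 (U + k) Ldot / L, as derived above
sol = sp.dsolve(sp.Eq(E(t).diff(t), -2 * E(t) * sp.diff(L, t) / L),
                E(t), ics={E(0): U0 + k0})

Ua = U0 / L**2                       # adiabatic internal energy U^(a) = U0 L^-2
ratio = sp.simplify(sol.rhs / Ua)
assert sp.diff(ratio, t) == 0                          # time invariant
assert sp.simplify(ratio - (U0 + k0) / U0) == 0        # fixed by initial data
```

The ratio reduces to $(\widetilde{U}_0 + k_0)/\widetilde{U}_0$, i.e. it is set entirely by the initial conditions, consistent with the dependence on the initial fluctuating Mach number only.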
--- abstract: 'A definition of the thermodynamic entropy based on the time-dependent probability distribution of the macroscopic variables is developed. When a constraint in a composite system is released, the probability distribution for the new equilibrium values goes to a narrow peak. Defining the entropy by the logarithm of the probability distribution automatically makes it a maximum at the equilibrium values, so it satisfies the Second Law. It also satisfies the postulates of thermodynamics. Objections to this definition by Dieks and Peters are discussed and resolved.' author: - 'Robert H. Swendsen' bibliography: - 'Swendsen\_ENTROPY.bib' title: The definition of the thermodynamic entropy in statistical mechanics --- Introduction ============ Thermodynamics is an extremely successful phenomenological theory of macroscopic experiments. The entropy plays a central role in this theory because it is a unique function for each system that determines all thermodynamic information. The form of the entropy must be calculated from the microscopic description given by statistical mechanics. In this paper, I present a simple derivation of the entropy using reasonable assumptions about the probability distributions of macroscopic variables and approximations based on the large number of particles in macroscopic systems. The basic task of thermodynamics is the prediction of the values of the macroscopic variables after the release of one or more constraints and the subsequent relaxation to a new equilibrium. This appears in the key thermodynamic postulate that is a particular form of the second law[@Tisza; @Callen; @RHS_book]. > The values assumed by the extensive parameters of an isolated composite system in the absence of an internal constraint are those that maximize the entropy over the set of all constrained macroscopic states[@RHS_book]. 
I will show that the solution to this problem in statistical mechanics leads to a function that satisfies this postulate, as well as satisfying the rest of the postulates of thermodynamics. Since these postulates are sufficient to generate all of thermodynamics, and since the thermodynamic entropy is unique[@Lieb_Yngvason], this function can be identified as the entropy. I have presented other derivations in the past that are equivalent, though perhaps not as direct[@RHS_1; @RHS_4; @RHS_5; @RHS_8; @RHS_9; @RHS_unnormalized]. They have been criticized by Dieks[@Dieks_unique_entropy; @Dieks_logic_of_identity_2014] and Peters[@Peters_2010; @Peters_2014], whose arguments will be discussed in Sections \[section: Dieks\] and \[section: Peters\]. The prediction of equilibrium values from statistical mechanics {#section: statistical mechanics} =============================================================== Thermodynamics is a description of the properties of systems containing many particles (macroscopic systems), for which the fluctuations can be ignored because they are smaller than the experimental resolution. The basic problem of thermodynamics is to predict the equilibrium values of the extensive variables after the release of a constraint in a composite system. I will first consider this as a problem in statistical mechanics, without using any thermodynamic concepts. Consider a composite system of $M \ge 2$ subsystems, with a total energy $E_T$, volume $V_T$, and particle number $N_T$[@footnote_1_particle_types]. Denote the total phase space for this composite system (in three dimensions) by $\{p,q\}$, where $p$ is the $3N_T$-dimensional momentum space, and $q$ is the $3N_T$-dimensional configuration space. Define the probability distribution in the phase space of the composite system as $\phi_T \left( \{p, q\} , t \right)$, where $t$ is the time. 
I will assume that the composite system is initially in equilibrium at time $t=0$, and that the initial conditions are given by setting $\phi_T$ equal to a constant, subject to all information available about the system at that time. Assume that interactions between subsystems are sufficiently short-ranged that they may be neglected[@RHS_continuous]. Then, we can write the total Hamiltonian as a sum of contributions from each subsystem. $$\label{H total 1} H_T = \sum_{j=1}^M H_j (E_j, V_j, N_j)$$ The energy, volume, and particle number of subsystem $j$ are denoted as $E_j$, $V_j$, and $N_j$, subject to the conditions on the sums. $$\sum_{j=1}^{M} E_j = E_T ; \, \sum_{j=1}^{M} V_j = V_T ; \, \sum_{j=1}^{M} N_j = N_T$$ In keeping with the idea that we are describing macroscopic experiments, assume that no measurements are made that might identify individual particles, whether or not they are formally indistinguishable[@RHS_distinguishability]. This means that there are $N_T!/ \left( \prod_{j=1}^{M} N_j!\right)$ different permutations for assigning particles to subsystems, and all permutations may be regarded as equally probable. The probability distribution in the phase space of the composite system is given by $$\begin{aligned} \label{phi t=0} \phi_T \left( \{p, q\} , t =0 \right) &=& \frac{1}{\Omega_T} \left( \frac{N_T! }{ \prod_{j=1}^{M} N_j! } \right) \nonumber \\ && \times \prod_{k=1}^{M} \delta \left( E_k - H_k( \{ p_k, q_k \} ) \right) ,\end{aligned}$$ where $\{ p_k, q_k \}$ is the phase space for the particles in subsystem $k$, and $\Omega_T$ is a normalization factor. The constraint that the $N_k$ particles in subsystem $k$ are restricted to a volume $V_k$ is left implicit in Eq. (\[phi t=0\]). The probability distribution for the macroscopic observables can then be written as $$\begin{aligned} \label{W 1} W \left( \{ E_j, V_j, N_j \} \right) &=& \frac{N_T!}{\Omega_T} \left( \frac{ 1 }{ \prod_j N_j! 
} \right) \nonumber \\ && \times \int dp \int dq \prod_{j=1}^M \delta( E_j - H_j ) ,\end{aligned}$$ or $$\label{W 2} W( \{E_j, V_j, N_j \} ) = \frac{ \prod_{j=1}^M \Omega_j ( E_j, V_j, N_j ) }{ \Omega_T / N_T! h^{3N_T} } ,$$ where $$\label{Omega j 1} \Omega_j = \frac{1}{h^{3N_j} N_j!} \int_{-\infty}^{\infty} dp_j \int_{V_j} dq_j \, \delta( E_j - H_j ) .$$ The factor of $1/h^{3N_j}$, where $h$ is Planck’s constant, is not necessary for classical mechanics. It has been included to ensure that the final answer agrees with the classical limit from quantum statistical mechanics[@RHS_book]. There is no requirement that the Hamiltonians $H_j$ are the same, so there is also no requirement that the individual $\Omega_j$’s have the same functional form. Long-range interactions *within a system* are allowed. If one or more constraints are now released, the probability $\phi_T \left( \{p, q\} , t \right)$ will become time dependent. After sufficient time has passed, the probability distribution will have spread throughout the available phase space, although it will still be non-uniform on the finest scale due to Liouville’s theorem. The probability distribution for the macroscopic variables will again be given by $W( \{E_j, V_j, N_j \})$, but now without the constraints on the variables that have been released[@RHS_6]. The functional dependence of $W$ on the variables $\{E_j, V_j, N_j \}$ does not change when a constraint is released. An important advantage of working with the probability distributions for macroscopic observables is that they converge to the equilibrium probability distributions at the end of an irreversible process[@RHS_6]. Although it is not necessary, the introduction of coarse graining[@Penrose_book] or the modification of the microscopic probability distribution by invoking typicality[@Goldstein_Lebowitz; @Lebowitz_symmetry] leads to the same results. Usually, $W$ is a very narrow function of the released variables. 
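The narrowness of $W$ can be illustrated with a minimal numerical sketch. It assumes ideal-gas-like phase-space volumes, $\Omega_j \propto E_j^{3N_j/2}$ (a standard result, not derived in the text), for two subsystems exchanging energy at fixed $E_T = 1$:

```python
import numpy as np

def peak_stats(N1, N2, npts=200001):
    """Mode, mean, and standard deviation of W(E1), where
    W(E1) ∝ E1^(3 N1 / 2) * (E_T - E1)^(3 N2 / 2) with E_T = 1."""
    x = np.linspace(0.0, 1.0, npts)[1:-1]      # open interval (log at ends)
    logW = 1.5 * N1 * np.log(x) + 1.5 * N2 * np.log(1.0 - x)
    w = np.exp(logW - logW.max())              # shift to avoid overflow
    w /= w.sum()
    mean = np.sum(w * x)
    std = np.sqrt(np.sum(w * (x - mean) ** 2))
    return x[np.argmax(w)], mean, std

for N in (100, 10000):
    mode, mean, std = peak_stats(N, N)
    print(f"N = {N:6d}: mode = {mode:.4f}, mean = {mean:.4f}, std = {std:.2e}")
```

For $N_1 = N_2 = N$ the peak sits at $E_1 = 1/2$, and the standard deviation falls from roughly $3\times10^{-2}$ at $N = 100$ to roughly $3\times10^{-3}$ at $N = 10^4$, i.e. it shrinks like $N^{-1/2}$; at truly macroscopic particle numbers the relative width is far below any experimental resolution.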
The main exception is the case of a first-order phase transition, in which it can be a very broad function of the relevant variable[@RHS_continuous]. This situation is discussed in Ref. [@RHS_continuous], and I will ignore it for the present discussion. The location of the narrow peak in $W$ as a function of the variable describing a released constraint gives the final equilibrium value of that variable at the end of the irreversible process. For example, if subsystems $1$ and $2$ are brought in thermal contact so that energy transfer is possible, the final value of $E_1$ would be given by the location of the maximum of $W$ to within thermal fluctuations. This characterizes the equilibrium values as the mode of the probability distribution, not the mean. The difference between the mean and the mode is of the order of $1/N$, which is very small and far less than the assumed experimental accuracy. Indeed, it is not even measurable for macroscopic systems[@SW_2015_PR_E_R]. When subsystems are separated, the probability $W$ remains unchanged. The constraint is restored, and the variable that was being exchanged keeps its value, which is known to within the very small fluctuations. The normalization constant, $\Omega_T$, is dependent on exactly which constraints might be released, but the other factors are not. Since the only property of the function $W( \{E_j, V_j, N_j \})$ that is needed is that it has a very narrow peak at the equilibrium value(s) after the release of constraint(s), the value of $\Omega_T$ does not affect the argument. Now that the probability distribution for the equilibrium variables has been determined, we can turn to the definition of entropy. The definition of the thermodynamic entropy {#section: entropy} =========================================== Following Boltzmann[@Boltzmann; @Boltzmann_translation; @RHS_4], the thermodynamic entropy may be identified as the logarithm of the probability distribution $W$, plus an arbitrary constant. 
$$\label{S = ln W 1} S_T ( \{E_j, V_j, N_j\}) = k_B \ln W + X$$ Since the probability is a maximum at equilibrium, so is the entropy under this definition. Although Boltzmann considered a dimensionless entropy and never used the “Boltzmann constant,” $k_B$, which was introduced by Planck[@Planck_1901; @Planck_book], I have included a factor of $k_B$ to be consistent with physical units. Combining Eqs. (\[W 2\]), (\[Omega j 1\]) and (\[S = ln W 1\]), the total entropy can be written as a sum of $M$ terms, each of which depends only on the properties of a single subsystem, plus a constant. $$\label{S = sum Omega j 1} S_T = \sum_{j=1}^M S_j ( E_j, V_j, N_j ) - k_B \ln \left[ \frac{ \Omega_T }{ N_T! h^{3N_T} } \right] + X$$ The entropy of the $j$-th subsystem in Eq. (\[S = sum Omega j 1\]) is given by $$\label{S j = kB ln Omega j} S_j ( E_j, V_j, N_j ) = k_B \ln \Omega_j ( E_j, V_j, N_j ) ,$$ or $$\label{S j = kB ln Omega j explicit} S_j % ( E_j, V_j, N_j ) = k_B \ln \left[ \frac{1}{h^{3N_j} N_j!} \int_{-\infty}^{\infty} dp_j \int_{V_j} dq_j \, \delta( E_j - H_j ) \right] .$$ The entropy of subsystem $j$ contains the factor $1/N_j!$, which arises from the multinomial factor in Eq. (\[phi t=0\]). It would be possible to add an arbitrary constant $X_j$ to $S_j$ in Eq. (\[S j = kB ln Omega j explicit\]), but I have chosen to set $X_j=0$ for all $j$, which is the usual convention[@RHS_9]. $S_j$ only depends on the properties of system $j$, which means that the total entropy is separable. This is just the usual thermodynamic property of additivity, but viewed from the perspective of dividing up a composite system, rather than assembling one. Since $\Omega_T$ has been defined to be a normalization constant, if all chosen constraints are released, the value of $S_T$ after the composite system has returned to equilibrium is given entirely by the additive constant (neglecting terms of the order of the logarithm of the particle numbers). 
$$\label{S = X in equilibrium} S_T (\text{after release}) \rightarrow X$$ This will be true regardless of which constraints have been chosen to determine $\Omega_T$, as long as all of those constraints are released. A convenient choice of $X$ is $k_B \ln \left[ \Omega_T / N_T! h^{3N_T} \right]$. Then the total entropy of the composite system is just given by the sum of the subsystem entropies. But this choice is not required. The application of the entropy equations ======================================== Eqs. (\[S = sum Omega j 1\]), (\[S j = kB ln Omega j\]), and (\[S j = kB ln Omega j explicit\]) are intended to be applied to the set of all systems in the world that can be regarded as classical. That includes not only systems in a particular laboratory, but also those in a different city or continent. Most systems will not interact with each other because of physical separation, and the constraints of their not exchanging energy, volume, or particles are expected to remain indefinitely. The entropy of a single system is given by Eq. (\[S j = kB ln Omega j explicit\]). For experiments involving only a local group of systems (or subsystems of the overall composite system), the existence of many other (sub)systems can be safely ignored, because their properties do not affect the local thermodynamic variables. Similarly, the value of the additive constants in Eq. (\[S = sum Omega j 1\]) will not affect the predictions of any experiment. Eqs. (\[S = sum Omega j 1\]) and (\[S j = kB ln Omega j explicit\]) allow us to find the non-negative change in total entropy ($\Delta S_T \ge 0$) during any irreversible process between equilibrium states that occurs after the release of a constraint, as well as the final equilibrium values of thermodynamic observables. Dieks has criticized this derivation of the entropy. I discuss his views in the next section. 
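To make the peaked character of $W$ concrete, consider two classical monatomic ideal gases exchanging energy, for which $\Omega_j \propto E_j^{3N_j/2}$ up to constants. A short numerical check (the particle numbers and units below are illustrative choices, not tied to any result above):

```python
import numpy as np

# Two monatomic ideal gases exchanging energy: Omega_j ~ E_j**(3*N_j/2),
# so log W(E_1) = (3*N_1/2)*ln(E_1) + (3*N_2/2)*ln(E_T - E_1) + const.
N1, N2 = 40, 60            # illustrative particle numbers
E_T = 1.0                  # total energy, arbitrary units
E1 = np.linspace(1e-4, E_T - 1e-4, 200_001)
logW = 1.5 * N1 * np.log(E1) + 1.5 * N2 * np.log(E_T - E1)

# The maximum of W sits at equal energy per particle, i.e. at equal
# temperatures: E_1* = E_T * N1 / (N1 + N2) = 0.4 here.
E1_star = E1[np.argmax(logW)]
print(E1_star)
```

Increasing $N_1$ and $N_2$ at a fixed ratio narrows the peak, in line with relative fluctuations of order $1/\sqrt{N}$.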
Dieks’ objection {#section: Dieks} ================ Dieks’ criticism rests on the claim that the choice of additive constant, $X$ in Eq. (\[S = sum Omega j 1\]), is essential for obtaining my results for the entropy[@Dieks_unique_entropy]. This claim is untenable, since I have derived the entropy of an arbitrary subsystem \[Eq. (\[S j = kB ln Omega j explicit\])\] without fixing the value of $X$, and the value of $X$ has no physical consequences. Looking further, we can see that Dieks means something different. He is interested in the value of the entropy of the entire composite system of $M$ subsystems for the case in which all constraints have been released. As shown above in Eq. (\[S = X in equilibrium\]), the release of all constraints leads to a constant $S_T %(\text{after release}) \rightarrow X$, where $X$ is arbitrary. Dieks is concerned about the determination of a particular form of this constant. Since there are no physical consequences for any value of $X$, I fail to see the importance of the issue. Dieks explicitly recognizes that this issue is without importance. Writing $N$ for what I have called $N_T$, he says in a footnote: > A more detailed discussion should also take into account that the division by $N!$ is without significance anyway as long as $N$ is constant[@Dieks_unique_entropy]. However, he still uses the value of this constant to frame his objection to my definition. The reason for this contradiction might lie in his incorrect description of my definition of entropy, which he claims amounts simply to dividing the traditional expression by $N!$. I will consider his argument in detail. Two simple subsystems --------------------- Dieks considered an isolated composite system consisting of only two ideal gases ($M=2$), and simplified his analysis by ignoring the energy dependence. 
In discussing his argument, I will depart from Dieks’ notation[@footnote_on_Dieks_notation] by using $N_T=N_1+N_2$ as the constant total number of particles to be consistent with the notation I used in previous sections. For clarity, I will also retain an arbitrary value of the additive constant $X$ (see Eq. (\[S = ln W 1\]), above) until the end of the discussion, although Dieks makes the specific choice of $X=k_B \ln \left( V_T^{N_T}/N_T! \right)$, “for reasons of convenience,” early in his argument[@Dieks_unique_entropy]. For Dieks’ two subsystems of classical ideal gases, my Eq. (\[S = sum Omega j 1\]) becomes his Eq. (2), $$\begin{aligned} S_T (N_1,V_1;N_2,V_2) =& k_B \ln \left( \frac{N_T!}{N_1! N_2!} \frac{ V_1^{N_1}V_2^{N_2} }{ V_T^{N_T}} \right) +X \nonumber \\ =& k_B \ln \left( \frac{V_1^{N_1}}{N_1! } \right) + k_B \ln \left( \frac{V_2^{N_2}}{ N_2!} \right) \nonumber \\ & - k_B \ln \left( \frac{ V_T^{N_T} }{ N_T!} \right) +X , \label{Dieks 2}\end{aligned}$$ where $N_T=N_1+N_2$ and $V_T=V_1+V_2$ are constants. Note that Dieks’ choice for the value of the constant $X$ means that the last two terms in Eq. (\[Dieks 2\]) cancel in his Eq. (2). Since Eq. (\[Dieks 2\]) is valid for all values of $N_1$, $N_2$, $V_1$, and $V_2$, we immediately have the (partial) entropies, $$\label{partial S j} S_j (V_j,N_j ) = k_B \ln \left( \frac{V_j^{N_j}}{ N_j!} \right) ,$$ where $j=1$ or $2$. I claim that this is a proper derivation of the factors $1/N_j!$. Dieks made the following comment on his Eq. (2) (writing $N$ for what I have called $N_T$). > Indeed, the dependence of the total entropy in Eq. (2) on $N_1$ and $N_2$ is unrelated to how $N$ occurs in this formula (and to the choice of the zero of the total entropy)[@Dieks_unique_entropy]. His comment confirms the validity of my derivation of the factors $1/N_1!$ and $1/N_2!$ in the entropies of subsystems $1$ and $2$, as well as the irrelevance of the value of the additive constant $X$. 
Dieks then calculates the entropy *after* the release of the constraint on the particle number and return to equilibrium. He gets the result $X=k_B \ln \left( V_T^{N_T}/N_T! \right)$. Dieks claims that this was the way I had obtained a $-k_B \ln N_T!$ dependence of the total entropy. I did not fix the value of $X$, so I did not derive an expression for the entropy after the release of constraints. Actually, the form of the $k_B \ln \left( V_T^{N_T}/N_T! \right)$ term in the joint entropy does not come from choosing the constant $X$ to make $S_T = \sum_{j=1}^M S_j $, but rather from the simplicity of the example used. If the properties of the subsystems are generalized, a different result is obtained. Two less simple subsystems -------------------------- Consider the entropy, $$\label{S_with_energy_shift} S_j = k_B N_j \left[ \frac{3}{2} \ln \left( \frac{E_j - N_j a_j }{ N_j } \right) + \ln \left( \frac{V_j }{ N_j } \right) + Y_j' \right] ,$$ where I have used Stirling’s approximation. The total entropy before allowing the systems to interact is $S_T = S_1+S_2$. The energy dependence is now given explicitly, and an energy shift per particle, $a_j$, is given to each subsystem. Assume that $a_1=0$ and $a_2>0$. Let subsystems $1$ and $2$ come into thermal contact and exchange energy and particles. The temperature dependence of the energy in the $j$-th subsystem is given by $$\label{U_j 1} E_j = \frac{3}{2} k_B N_j T_j + N_j a_j ,$$ so the condition of equilibrium with respect to energy exchange is $$\frac{ E'_1 }{ N_1 } = \frac{ E'_2 }{ N_2} - a_2 ,$$ where I have indicated the new values of the energies by $E'_1$ and $E'_2$. Now let the two subsystems exchange particles. 
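For completeness, the condition quoted next follows from equating the chemical potentials. Differentiating Eq. (\[S\_with\_energy\_shift\]) at fixed $E_j$ and $V_j$ gives (a sketch; I take $Y'_1=Y'_2$ so that these constants drop out):

```latex
\frac{\partial S_j}{\partial N_j}
  = k_B \left[ \frac{3}{2} \ln \left( \frac{E_j - N_j a_j}{N_j} \right)
             + \ln \left( \frac{V_j}{N_j} \right) + Y'_j
             - \frac{3}{2} \, \frac{N_j a_j}{E_j - N_j a_j}
             - \frac{5}{2} \right] .
```

Setting $\partial S_1/\partial N_1 = \partial S_2/\partial N_2$ with $a_1=0$, the $\frac{3}{2}\ln$ terms cancel at the common temperature, as do the constants $-\frac{5}{2}$ and $Y'_j$, and the identity $N_2 a_2/(E_2 - N_2 a_2) = 1/\left[ E_2/(N_2 a_2) - 1 \right]$ yields the condition below.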
From the condition of equilibrium with respect to particle number, it is straightforward to derive $$\label{n1_n2_T} \ln \left( \frac{ V_1 }{ N''_1 } \right) = \ln \left( \frac{ V_2 }{ N''_2 } \right) - \frac{3}{2} \left[ \frac{1}{E''_2/\left( N''_2 a_2 \right) - 1} \right] ,$$ where I have indicated the new values of the energies and particle numbers by double primes, i.e. $E''_j$ and $N''_j$. Since $E''_1/N''_1 \ne E''_2/N''_2$ and $V_1/N''_1 \ne V_2/N''_2$, the total entropy cannot be written as a function of $(E''_1+E''_2)$, $(V_1+V_2)$, and $(N''_1+N''_2)$. There is no term in $S''_T = S''_1(E''_1,V_1,N''_1) + S''_2(E''_2 ,V_2,N''_2)$ of the form $k_B \ln \left( V_T^{N_T}/N_T! \right)$. For the next example it will be sufficient to again consider ideal gases and ignore the energy dependence. Three simple subsystems ----------------------- Dieks’ analysis does not recognize that the thermodynamic variables in subsystems $1$ and $2$ remain $N_1$ and $N_2$, even after the systems come to equilibrium. They are not replaced by a single variable. This can be seen most easily by considering $M \ge 3$ subsystems. To avoid confusion, denote the number of particles in subsystems $1$ and $2$ by $N_{1,2}=N_1+N_2$, because it is no longer constant. Now consider how subsystems $1$ and $2$ interact with a third subsystem. Let subsystems $1$ and $2$ first come to equilibrium and then be separated again, denoting the new particle numbers by $N'_1$ and $N'_2$. Let subsystem $3$ originally have a high number density, $N_3/V_3 > N'_1/V_1= N'_2/V_2$. Now let subsystem $2$ exchange particles with system $3$, so that $N_2$ increases ($N''_2>N'_2$). 
Subsystems $2$ and $3$ come to a new equilibrium, for which $$\frac{N'_1}{V_1} < \frac{N''_2}{V_2} = \frac{N''_3}{V_3} .$$ The entropy of subsystems $1$ and $2$ is (with Stirling’s approximation), $$S'_1+S''_2 \approx k_B N'_1 \ln \left( \frac{V_1}{ N'_1} \right) + k_B N''_2 \ln \left( \frac{V_2}{ N''_2} \right) .$$ Since the number density is different in subsystems $1$ and $2$, it is clear that $S'_1+S''_2$ is not given by $k_B N_{1,2} \ln \left( V_{1,2} / N_{1,2} \right)$. An arbitrary number of subsystems --------------------------------- When Dieks discusses the case of many systems, he writes that I require a “consistency” condition, > that the entropy formula should be such that there will be no change in entropy when a partition is removed[@Dieks_unique_entropy]. I do not require it, and it is not a consistency condition. It is the condition that systems separated by a partition are in equilibrium, which is not generally true in the presence of a constraint. To summarize, I have calculated the dependence of the entropy on the variables $\{E_j, V_j, N_j \,\vert\, j=1, \dots,M\}$ in the presence or absence of arbitrary constraints. My definition enables the calculation of the equilibrium conditions and entropy changes. The additive constant, $X$, may be determined by convention. Peters’ objection {#section: Peters} ================= A prominent question in the literature is whether entropy should be defined in one step or two. The two-step approach can be described as hybrid because it starts with a definition of entropy, notes that the definition fails in some respect, and then corrects it to agree more closely with the thermodynamic properties of entropy. The historical reason for this peculiar question lies in the effort to maintain a definition of entropy in the form of the logarithm of a volume in phase space by modifying it to correct the dependence on particle number[@Dieks_unique_entropy; @Dieks_logic_of_identity_2014; @Peters_2010; @DV; @VD; @Cheng]. 
Since this process usually involves the inclusion of a negative term, $-k_B \ln N!$, the result is often called a “reduced entropy.” Peters has introduced an interesting hybrid definition of the entropy[@Peters_2010; @Peters_2014]. In doing so, he explicitly rejected the derivation of entropy given in Section \[section: statistical mechanics\], although his only criticism turns out to be something we agree on. We both recognized that macroscopic experiments do not identify individual particles, so we can never know which particles are in which system. However, Peters claimed that my version was “imprecise” because it did not include the condition he denoted as being “harmonic,” defined as follows. > Systems for which all possible particle compositions are equiprobable will be called harmonic[@Peters_2010]. For comparison, I had written that, > when a system of distinguishable particles is allowed to exchange particles with the rest of the world, we must include the permutations of all possible combinations of particles that might enter or leave the system[@RHS_4]. It is clear that we have made essentially the same assumption. Peters takes a hybrid approach in that he chooses to define a form of the Shannon entropy, and then “reduces” it to arrive at the final form[@Shannon; @Peters_2010]. $$\begin{aligned} R_P &=& -k_B \sum_{i=1}^M \int d^{3N_i}p_i \, \int d^{3N_i}q_i \nonumber \\ && \times \rho_i(p_i,q_i) \ln \left( \rho_i(p_i,q_i) h^{3N_i} \right) \nonumber \\ && -k_B \ln N!\end{aligned}$$ This form does have the correct $N$-dependence, and for the correct reason. However, $R_P$ fails to satisfy the second law of thermodynamics. In Section 4.3.3.2 of Ref. [@Peters_2010], Peters discusses an irreversible process initiated by the release of constraints to allow exchange of energy and particles between two subsystems. 
He assumes that “both before and after the exchange” the two subsystems “are in microcanonical equilibrium.” The problem is that this assumption is contradicted by Liouville’s theorem, which requires the total time derivative of the probability distribution in the phase space of the complete composite system to vanish. This means $R_P$ does not increase during an irreversible process, so it does not satisfy the second law of thermodynamics. Peters explicitly acknowledges the difficulty posed by Liouville’s theorem in his Section 5.6.5, writing that, “the Liouville equation is entropy conserving and therefore cannot describe irreversible processes.” He does not comment on the contradiction between his Sections 4.3.3.2 and 5.6.5. In contrast, the Liouville equation does not conserve the entropy as defined in this paper, and the Second Law is satisfied. Summary {#section: summary} ======= I’ve argued for a definition of the thermodynamic entropy based on the probability distribution of the macroscopic variables in a composite system. The entropy defined this way satisfies the postulates for thermodynamics[@Tisza; @Callen; @RHS_book]. I’ve addressed the objections by Dieks[@Dieks_unique_entropy; @Dieks_logic_of_identity_2014] and Peters[@Peters_2010; @Peters_2014] to this derivation of the entropy from statistical mechanics and shown that they are not valid. Since the thermodynamic entropy is known to be unique apart from constants chosen by convention[@Lieb_Yngvason], any other valid definition of the entropy must be equivalent to the one presented here. Acknowledgement {#acknowledgement .unnumbered} =============== I would like to thank Roberta Klatzky for many helpful discussions. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. 
--- author: - | [^1]\ Institute of Basic Science, Sungkyunkwan University\ 2066 Seobu-ro, Suwon, 440-746, Korea\ E-mail: - | Attila Mészáros\ Charles University in Prague, Faculty of Mathematics and Physics, Astronomical Institute\ V Holešovičkách 2, CZ 180 00 Prague 8, Czech Republic\ E-mail: title: 'What is the Astrophysical Meaning of the Intermediate Subgroup of GRBs?' --- ![Hardness ratio $H_{32}$ vs. $T_{90}$ duration of GRBs detected by CGRO-BATSE with identified groups of short (crosses), intermediate (full circles), and long (opened circles) bursts as published in [@hor06].[]{data-label="fig:batse_H_T90"}](batse_H_T90.eps "fig:"){width="70.00000%"}\ Two - Three Different Groups of GRBs ==================================== Gamma-Ray Bursts (GRBs) are fascinating cosmological objects, but they are not all of the same kind. There are at least two different groups, ‘short/hard’ and ‘long/soft’ [@maz81; @kou93; @nor01; @bal03; @bor04; @me06b; @zha09; @brom13]. The possibility of the existence of further groups has been intensively studied using various statistical techniques [@ver10; @rip12; @hor98; @muk98; @bal01; @hor02; @hor08; @hor09; @huj09; @rip09; @hor10; @hor06; @chat07; @ugar11]. It has been postulated that there might be a third group of GRBs with intermediate durations. However, statistical tests applied to different datasets obtained from different satellites assign varying significance to this result. The astrophysical origin of this subgroup also remains unclear. The three groups of GRBs found by BATSE, an instrument on board the Compton Gamma-Ray Observatory (CGRO), are shown in Figure \[fig:batse\_H\_T90\]. The figure compares the durations $T_{90}$ and hardnesses $H_{32}$, i.e. the ratios of the received energy per unit area in the range $100-300$ keV over the same quantity in the range $50-100$ keV. The figure shows 1956 bursts observed by this instrument over the years 1991-2000. 
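The mixture-model machinery behind the classification tests discussed below can be sketched in a few lines: fit Gaussian mixtures to $\log T_{90}$ by expectation-maximization and compare fits with different numbers of components. The sample here is synthetic, with illustrative means, widths, and weights, not BATSE data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic log10(T90) sample: a "short" and a "long" population
# (illustrative parameters only).
x = np.concatenate([rng.normal(-0.3, 0.45, 500),
                    rng.normal(1.5, 0.45, 1500)])

def fit_gmm(x, K, iters=300):
    """Plain EM for a one-dimensional K-component Gaussian mixture."""
    mu = np.quantile(x, (np.arange(K) + 0.5) / K)   # deterministic init
    var = np.full(K, x.var())
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities, with log-sum-exp for stability
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (x[:, None] - mu) ** 2 / var)
        m = logp.max(axis=1, keepdims=True)
        r = np.exp(logp - m)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted weights, means, and variances
        Nk = r.sum(axis=0)
        w = Nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-3
    # Log-likelihood of the final parameters
    logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (x[:, None] - mu) ** 2 / var)
    m = logp.max(axis=1)
    loglik = (m + np.log(np.exp(logp - m[:, None]).sum(axis=1))).sum()
    return w, mu, var, loglik

_, mu1, _, ll1 = fit_gmm(x, K=1)
_, mu2, _, ll2 = fit_gmm(x, K=2)
print(np.sort(mu2), ll2 > ll1)   # component means near the generating values
```

The F-test, maximum-likelihood ratio, and BIC comparisons cited in the table all amount to comparing such fits with $K$ and $K+1$ components, with different penalties for the extra parameters.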
The short/hard and the long/soft groups are clearly separated around $T_{90} \simeq 2$s. It is now generally accepted [@bal03] that they are distinct astrophysical phenomena. The long ones are believed to be coupled with supernovae of Type Ic. The physics of the short bursts remains unclear, although a merger of two compact objects such as neutron stars has been suggested [@gehl12]. ![Hardness ratio $H$ vs. duration $T_{90}$ of GRBs detected by RHESSI with identified groups of short (crosses), intermediate (full circles), and long (triangles) bursts as published in [@rip12].[]{data-label="fig:rhessi_H_T90"}](rhessi_H_T90.eps "fig:"){width="70.00000%"}\ Several statistical analyses show that the existence of an intermediate subclass cannot be excluded. Three distinct groups have been found - not only in the BATSE[^2] database, but also in the RHESSI[^3] (Figure \[fig:rhessi\_H\_T90\]) and Swift-BAT[^4] (Figure \[fig:swift\_H\_T90\]) databases (see [@rip12] and references therein). Hence, from a statistical perspective, the existence of three subgroups is likely. However, it does not immediately follow that the three different subgroups arise from three astrophysically different progenitors. There are several selection and instrumental biases [@hak00] which can cause these separations instead. The Physics of the Intermediate GRBs ==================================== A key step in understanding the physics of the intermediate subgroup was made in [@ver10]. It was shown that for the Swift database the intermediate subclass was related to X-Ray Flashes (XRFs). Since XRFs are related to the standard long/soft type GRBs [@kip03; @sak05], at least in the Swift database the intermediate subgroup could simply be the tail of the long GRB distribution. On the other hand, the GRBs of the RHESSI database’s intermediate subgroup are not as soft as the long ones and they do not appear to constitute a tail of the long GRBs (see Figure \[fig:rhessi\_H\_T90\]). 
In fact quite conversely, they are more similar to the short bursts. A detailed statistical analysis of the RHESSI database has shown that the intermediate group in this database was similar to the short one [@rip12]. ![Hardness ratio vs. duration $T_{90}$ of GRBs detected by Swift-BAT with identified groups of short (red pluses), intermediate (blue stars), and long (green $\times$) bursts as published in [@hor10].[]{data-label="fig:swift_H_T90"}](swift_H_T90.eps "fig:"){width="64.00000%"}\

  Method                  CGRO-BATSE                                         Swift-BAT                    RHESSI                                 BeppoSAX
  ----------------------- -------------------------------------------------- ---------------------------- -------------------------------------- ---------------
  F-test {$T_{90}$}       $< 0.01$% [@hor98]                                 3.6% [@huj09]                6.9% [@rip09]
  ML r. {$T_{90}$}        $0.5$% [@hor02]                                    0.5% [@hor08]                0.04% [@rip09]                         3.7% [@hor09]
  ML r. {$T_{90}$, $H$}   $\lesssim 10^{-8}$% [@hor06]                       $\approx10^{-6}$% [@hor10]   0.1% [@rip09], 0.3% [@rip12]
  BIC                     $< 0.01$% [@muk98]; 3 groups [@bal01], [@chat07]   3 groups [@ver10]            2 groups [@rip12], 3 groups [@rip12]
  ----------------------- -------------------------------------------------- ---------------------------- -------------------------------------- ---------------

  : Summary of published results concerning the GRB subgroups. Mentioned are the significances of the third group found by different methods and using data from different instruments. The F-test compares the best $\chi^2$ fits (two and three Gaussian distributions) of the $\log T_{90}$ duration. ’ML r.’ is the Maximum Likelihood ratio test applied either on the $\log T_{90}$ durations or on the {$\log T_{90}$, $\log H$} {duration, hardness ratio} pairs. ’BIC’ is the test based on the difference of the Bayesian Information Criterion values of the best fitted multivariate Gaussian components. BIC was also applied on the peak count rates $F$.

For the BATSE database the physics of the intermediate GRBs remains an open question. In addition, a further interesting property exists here. The expected angular distribution of GRBs should be isotropic - this follows from the cosmological principle. 
For the long GRBs this expectation can be fulfilled, but not for the short ones (for details see work [@vav08] and references therein). Also the intermediate subgroup is not distributed isotropically (see Figure \[fig:batse\_inter\_anisot\]) on the sky [@mesz00]. ![Anisotropic distribution of 92 dim intermediate GRBs in equatorial coordinates from the BATSE database as published in [@mesz00].[]{data-label="fig:batse_inter_anisot"}](batse_inter_anisot.eps "fig:"){width="64.00000%"}\ Conclusion ========== The separation of GRBs to the short/hard and long/soft groups and the connection of the long/soft group to supernovae is widely accepted. On the other hand, both the physics of the short/hard GRBs and the existence of intermediate GRBs remain open questions. For the Swift database the intermediate GRBs can be related to XRFs and hence to the long bursts, but this relationship does not follow in the RHESSI database. For the BATSE dataset the relation between XRFs and the intermediate subgroup is also unclear. We conclude that instrumental effects are important and the identification of the intermediate subgroup with XRFs remains to be proven. [99]{} E. P. Mazets, S. V. Golenetskii, V. N. Ilinskii, et al., *Catalog of cosmic gamma-ray bursts from the KONUS experiment data I.*, *Astrophysics and Space Science* [**80**]{} (1981) 3 C. Kouveliotou, C. A. Meegan, G. J. Fishman, et al., *Identification of two classes of gamma-ray bursts*, *ApJ Letters* [**413**]{} (1993) L101 J. P. Norris, J. D. Scargle and J. T. Bonnell, *Short gamma-ray bursts are different*, *Gamma-Ray Bursts in the Afterglow Era*, Proceedings of the International Workshop Held in Rome, Italy, 17-20 October 2000, Edited by E. Costa, F. Frontera and J. Hjorth, ESO Astrophysics Symposia, Springer-Verlag (2001) 40 L. G. Balázs, Z. Bagoly, I. Horváth, A. Mészáros and P. Mészáros, *On the difference between the short and long gamma-ray bursts*, *A&A* [**401**]{} (2003) 129 L. 
Borgonovo, *Bimodal distribution of the autocorrelation function in gamma-ray bursts*, *A&A* [**418**]{} (2004) 487 A. M[é]{}sz[á]{}ros, Z. Bagoly, L. G. Bal[á]{}zs and I. Horv[á]{}th, *Redshift distribution of gamma-ray bursts and star formation rate*, *A&A* [**455**]{} (2006) 785 B. Zhang, B.-B. Zhang, F. J. Virgili, et al., *Discerning the physical origins of cosmological gamma-ray bursts based on multiple observational criteria: the cases of z = 6.7 GRB 080913, z = 8.2 GRB 090423, and some short/hard GRBs*, *ApJ* [**703**]{} (2009) 1696 O. Bromberg, E. Nakar, T. Piran and R. Sari, *Short versus long and collapsars versus non-collapsars: a quantitative classification of gamma-ray bursts*, *ApJ* [**764**]{} (2013) 179 P. Veres, Z. Bagoly, I. Horváth, A. Mészáros and L. G. Balázs, *A distinct peak-flux distribution of the third class of gamma-ray bursts: a possible signature of X-ray flashes?*, *ApJ* [**725**]{} (2010) 1955 J. Řípa, A. Mészáros, P. Veres and I. H. Park, *On the spectral lags and peak counts of the gamma-ray bursts detected by the RHESSI satellite*, *ApJ* [**756**]{} (2012) 44 I. Horv[á]{}th, *A third class of gamma-ray bursts?*, *ApJ* [**508**]{} (1998) 757 S. Mukherjee, E. D. Feigelson, G. Jogesh Babu, et al., *Three types of gamma-ray bursts*, *ApJ* [**508**]{} (1998) 314 A. Balastegui, P. Ruiz-Lapuente and R. Canal, *Reclassification of gamma-ray bursts*, *MNRAS* [**328**]{} (2001) 283 I. Horv[á]{}th, *A further study of the BATSE Gamma-ray burst duration distribution*, *A&A* [**392**]{} (2002) 791 I. Horv[á]{}th, L. G. Bal[á]{}zs, Z. Bagoly and P. Veres, *Classification of Swift’s gamma-ray bursts*, *A&A* [**489**]{} (2008) L1 I. Horv[á]{}th, *Classification of BeppoSAX’s gamma-ray bursts*, *Astrophysics and Space Science* [**323**]{} (2009) 83 D. Huja, A. M[é]{}sz[á]{}ros and J. [Ř]{}[í]{}pa, *A comparison of the gamma-ray bursts detected by BATSE and Swift*, *A&A* [**504**]{} (2009) 67 J. [Ř]{}[í]{}pa, A. M[é]{}sz[á]{}ros, C. 
Wigger, et al., *Search for gamma-ray burst classes with the RHESSI satellite*, *A&A* [**498**]{} (2009) 399 I. Horv[á]{}th, Z. Bagoly, L. G. Bal[á]{}zs, et al., *Detailed classification of Swift ’s gamma-ray bursts*, *ApJ* [**713**]{} (2010) 552 I. Horv[á]{}th, L. G. Bal[á]{}zs, Z. Bagoly, F. Ryde and A. M[é]{}sz[á]{}ros, *A new definition of the intermediate group of gamma-ray bursts*, *A&A* [**447**]{} (2006) 23 T. Chattopadhyay, R. Misra, A. K. Chattopadhyay and M. Naskar, *Statistical evidence for three classes of gamma-ray bursts*, *ApJ* [**667**]{} (2007) 1017 A. de Ugarte Postigo, I. Horv[á]{}th, P. Veres, et al., *Searching for differences in Swift’s intermediate GRBs*, *A&A* [**525**]{} (2011) A109 N. Gehrels and P. M[é]{}sz[á]{}ros, *Gamma-ray bursts*, *Science* [**337**]{} (2012) 932 J. Hakkila, D. J. Haglin, G. N. Pendleton, et al., *Gamma-ray burst class properties*, *ApJ* [**538**]{} (2000) 165 R. M. Kippen, P. M. Woods, J. Heise, et al., *Spectral characteristics of X-ray flashes compared to gamma-ray bursts*, in proceedings of *Gamma-Ray Burst and Afterglow Astronomy 2001: A Workshop Celebrating the First Year of the HETE Mission*, Woods Hole, Massachusetts, USA, 5-9 November 2001, Edited by G. R. Ricker and R. K. Vanderspek, *AIPC* [**662**]{} (2003) 244 T. Sakamoto, D. Q. Lamb, N. Kawai, et al., *Global characteristics of X-ray flashes and X-ray-rich gamma-ray bursts observed by HETE-2*, *ApJ* [**629**]{} (2005) 311 R. Vavrek, L. G. Balázs, A. Mészáros, I. Horváth and Z. Bagoly, *Testing the randomness in the sky-distribution of gamma-ray bursts*, *MNRAS* [**391**]{} (2008) 1741 A. Mészáros, Z. Bagoly, I. Horváth, L. G. Balázs and R. Vavrek, *A remarkable angular distribution of the intermediate subclass of gamma-ray bursts*, *ApJ* [**539**]{} (2000) 98 [^1]: We gratefully appreciate useful comments from Stephen Appleby. This study was supported by the Grant Agency of the Czech Republic - Grant No. 
P209/10/0734, and by the Creative Research Initiatives Program (RCMST) of MSIP/NRF in Korea. [^2]: http://gammaray.msfc.nasa.gov/batse/ [^3]: http://hesperia.gsfc.nasa.gov/hessi/index.html [^4]: http://swift.gsfc.nasa.gov/docs/swift/swiftsc.html
--- abstract: 'Probabilistic graphical models (PGMs) provide a compact representation of knowledge that can be queried in a flexible way: after learning the parameters of a graphical model, new probabilistic queries can be answered at test time without retraining. However, learning undirected graphical models is notoriously hard due to the intractability of the partition function. For directed models, a popular approach is to use variational autoencoders, but there is no systematic way to choose the encoder architecture given the PGM, and the encoder only amortizes inference for a single probabilistic query (i.e., new queries require separate training). We introduce Query Training (QT), a systematic method to turn any PGM structure (directed or not, with or without hidden variables) into a trainable inference network. This single network can approximate any inference query. We demonstrate experimentally that QT can be used to learn a challenging 8-connected grid Markov random field with hidden variables and that it consistently outperforms the state-of-the-art AdVIL when tested on three undirected models across multiple datasets.' author: - | Miguel Lázaro-Gredilla[^2], Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou,\ **Antoine Dedieu, Dileep George**\ Vicarious AI bibliography: - 'arxiv.bib' title: | Query Training: Learning and inference\ for directed and undirected graphical models --- Introduction {#sec:intro} ============ In machine learning we are interested in discovering regularities in data that allow us to perform inference about novel data points. A paradigmatic example is the neural network (NN). In most practical cases, a NN is a parametric function, structured in layers, that deterministically produces an output given the input. 
By minimizing an appropriate loss function on input-output pairs, we expect it to learn a mapping that generalizes to new cases. Recent years have seen great success in the training of fairly complex and deep NNs using stochastic gradient descent (SGD). Tools like PyTorch [@paszke2019pytorch] and TensorFlow [@tensorflow2015-whitepaper] make this process straightforward. When a probabilistic graphical model (PGM) is used to model a dataset, there is no notion of input or output. Instead, these can be arbitrarily chosen at test time by conditioning on available data. Such a PGM captures more information about the data than a NN, for which the inputs and outputs are fixed in advance. Ideally, the PGM captures the full statistical description, up to model mismatches. After learning a PGM, we can query it to provide an estimation of any subset of variables given the rest (with varying degrees of uncertainty). NNs are limited to answering a single query, the one that they were trained for. Flexible querying is a requirement in all but the simplest artificial intelligence agents, which need to deal with uncertainty and cannot afford separate NNs for each possible query. Thus, PGMs are more suited as a compact knowledge representation that admits flexible querying. Despite this advantage, PGMs are not without difficulties in practice. Namely, learning (optimizing the parameters of the PGM to best match observed data) and inference (the aforementioned flexible querying) are intractable for most models of interest, even fairly simple ones. Learning the parameters of an undirected PGM with hidden variables using maximum likelihood (ML) is a notoriously difficult problem [@welling2005learning; @kuleshov2017neural; @advil]. When all variables are observed, the range of applicable techniques is broadened [@sutton2005piecewise; @sutton2006local; @sutton2007piecewise; @bradley2013learning], but the problem remains intractable in general. 
When hidden variables are present, the intractability is twofold: (a) integrating out the hidden variables (also a challenge in directed models) and (b) computing the partition function. The second problem is generally considered harder [@welling2005learning]. Unfortunately, for images and other data types, undirected PGMs offer the most compact representation, so using only directed PGMs is not a general enough solution. We will thus focus on *undirected* PGMs in this work. Often PGMs and NNs work together. NNs (shallow or deep) can be used either as a link function in the definition of PGMs or as an amortized inference network[^3] to accelerate learning. This is the case in variational autoencoders (VAEs, [@kingma2013auto]), pixel convolutional neural networks (PixelCNNs, [@van2016pixel]), etc. But these architectures inherit the shortcomings of NNs: despite the presence of an underlying PGM, flexible querying without retraining is not supported. A VAE’s encoder can only compute the posterior over the hidden units, and fails to do so if some of the inputs are missing. Similarly, PixelCNNs are trained in a specific (forward) order, and can answer forward inpainting queries, but cannot handle backward queries, or queries with uncertain inputs. **Contribution** The aim of this work is to provide a systematic framework to turn any PGM into an inference network that enjoys benefits from both PGMs and NNs. Our approach (a) supports flexible *marginal* querying without retraining; (b) provides a simple mechanism for training and inference, with no need to estimate the partition function; and (c) handles both directed and undirected PGMs. We will focus our experiments on the harder case of undirected PGMs. Query training (QT) {#sec:qt} =================== Our approach takes an (untrained) PGM and unrolls it into a single NN to answer arbitrary queries. It then trains the weights of the NN using different types of queries. 
The resulting NN can be used at test time for flexible querying, as if it was a PGM. We call this approach query training (QT). Queries that need to be answered -------------------------------- Given a knowledge representation of some type (e.g. PGM) for the set of variables $\bm{x} = \{x_1, \ldots,x_N\}$, we want to be able to compute conditional marginal probabilistic queries of the form $$\label{eq:querytype} p(x_\text{output} | \{x_i\}_{i:\text{input}})~~~, \forall~ \text{output} \in \{1, \ldots, N\} , ~\forall~\text{input} \subset \{1, \ldots, N\}$$ where $x_\text{output}$ is a single variable, and “input” is a *subset* of the remaining variables. Any variables that correspond to neither the input nor the output are marginalized out in the above query. Queries which do not fit Eq.  directly (e.g., the joint distribution of two output variables) can be decomposed as a combination of conditional marginal queries by using the chain rule of probability. So a system that is able to answer queries like Eq.  provides enough information to resolve any probabilistic query, with the number of queries being linear in the number of output variables[^4]. A brute force solution would be to treat each of these queries as its own regression problem and train separate NNs. However, the number of different queries of this type is exponential in the number of variables, so this would be infeasible. Also, we know that those regressions are not independent, so we would be losing statistical power. Ideally, we would like to train a single NN that can be reconfigured to address different queries. To this end, we turn to approximate inference in PGMs. Approximate inference in PGMs ----------------------------- Probabilistic queries in PGMs are in most cases intractable, so approximations such as loopy belief propagation (BP, [@murphy2013loopy]) or variational inference (VI, [@blei2017variational]) are used. 
These approximations are invariant to scale, so the computation of the partition function is *not needed*. Loopy BP and VI can be cast as optimization problems. As such, they rarely have a closed-form solution and are instead solved iteratively, which is computationally intensive. To address this problem, amortized inference can be used. A prime example of this is the VAE [@kingma2013auto]: a learned function (typically an NN) is combined with the reparameterization trick [@rezende2014stochastic; @titsias2014doubly] to compute the posterior over the hidden variables given the visible ones. Although a VAE performs inference faster than iterative VI optimization, it can only solve a single predefined query. In contrast, BP and VI answer arbitrary queries (albeit with more compute). Also note that standard VAEs can only handle directed PGMs, and a more sophisticated variational apparatus with multiple NNs [@advil] is required for undirected PGMs. BP and VI are closely connected, but behave differently. For instance, for tree PGMs, parallel BP converges to the exact solution in a number of steps equal to the diameter of the tree [@murphy2013loopy], whereas VI will in general take much longer to converge. In general, parallel BP tends to produce higher quality marginals in fewer iterations, so we will choose it over VI in this work. An intuitive description of query training ------------------------------------------ BP gives a recipe to compute any marginal query of the form of Eq. , while sharing the same parameters for all queries. We can then consider loopy BP, unrolled over a fixed number of iterations, as our inference NN. This NN takes an additional input specifying the desired query. 
Instead of starting with a trained PGM and generating the inference NN (which is certainly possible), we could generate a “blank” inference NN from an untrained PGM and then learn its (single set of) parameters by minimizing the cross-entropy (CE) of its predictions over both data points and queries. This would hopefully generalize to new data points and new queries never seen at training time. The intuition behind the existence of a single NN parameterization that approximately satisfies all the queries comes from the good results of running BP on a correct PGM, which uses a single set of parameters. Note that the same set of parameters (weights) is shared across all inference steps (layers). Minimizing the CE between the training data and the query predictions with respect to the model parameters is *not* equivalent to maximum likelihood learning. However, we have derived consistency results (see Supplementary Material) for general exponential family models, analogous to those of pseudolikelihood [@hyvarinen2006consistency], showing that our CE loss is reasonable. The computation of the partition function is sidestepped, so undirected PGMs can be used just as easily: normalization is only required for the output variable of each query, i.e., in one dimension. Loopy BP is in general only approximate, so one cannot expect predictions to be exact or even necessarily consistent (e.g., the query product $p(x=0|y=1)p(y=1)$ is not guaranteed to be identical to the query product $p(y=1|x=0)p(x=0)$, although the cost function will tend to make both similar). On the plus side, the training is free to select the parameters that produce the most precise inference over the training queries, as opposed to the parameters that are closest to the original PGM, so the training procedure can compensate for shortcomings in the BP approximation. Training the inference network ------------------------------ The starting point is an unnormalized PGM parameterized by $\theta$. 
Its probability density can be expressed as $p({\bm x}; \theta) = p({\bm v}, {\bm h}; \theta) \propto \exp(\phi({\bm v}, {\bm h}; \theta))$, where ${\bm v}$ are the visible variables available in our data and $\bm{h}$ are the hidden variables. A query is a binary vector $\bm{q}$ of the same dimension as $\bm{v}$ that partitions the visible variables into two subsets: one for which (possibly soft) evidence is available (inputs) and another whose marginal probability we want to estimate (outputs). Note that we want to compute the marginal of each selected output given all the inputs, independently, that is, for $\mathcal{S} = \{i: q_i = 1\}$, we want to compute all the marginal queries $p(x_i | \{ x_j \}_{j \in \mathcal{S}}), ~ \forall i \notin \mathcal{S}$. ![image](qt2.pdf){width="0.9\linewidth"} The query-trained neural network (QT-NN) follows from specifying a graphical model $\phi({\bm v}, {\bm h}; \theta)$, a temperature $T$ and a number of inference timesteps $N$ over which to run parallel BP. The general equations of the QT-NN are given next in Section \[sec:bptonn\]. In Fig. \[fig:qt\], a QT-NN takes as input a sample $\bm {v}$ from the dataset and a random query mask $\bm {q}$. The query $\bm{q}$ blocks the network from accessing the “output” variables, and instead only offers access to the “input” variables. The variables assigned to inputs and outputs change with each query $\bm{q}$ drawn. The QT-NN produces as output an estimate of the marginal probabilities $\hat{\bm {v}}$ for the whole input sample. Obviously, we only care about how well the network estimates the variables that it did not see at the input. So we measure how well $\hat{\bm {v}}$ matches the correct $\bm {v}$ in terms of cross-entropy (CE), but only for the variables that $\bm{q}$ regards as “output”. 
Taking expectation wrt $\bm {v}$ and $\bm {q}$, we get $L(\theta, T) = \mathbb{E}_{\bm{v},\bm{q}}[\operatorname{CE}_{\bm{q}}(\bm{v}, \hat{\bm{v}})]$, the loss function that we use to train the QT-NN, where the estimated visible units are given by $\hat{\bm{v}} = \operatorname{QT-NN}(\bm{v}, \bm{q}; \theta, T)$. We minimize this loss wrt $\theta, T$ via SGD, sampling from the training data and some query distribution. The query distribution can be uniform, or a function of the problem structure. The number of QT-NN layers $N$ is fixed a priori. We term this approach query training (QT). One can think of the QT-NN as a more flexible version of the encoder in a VAE: instead of hardcoding inference for a single query (normally, hidden variables given visible variables), the QT-NN also takes as input a mask $\bm {q}$ specifying which variables are observed, and provides inference results for unobserved ones. Note that $\bm{h}$ is never observed, and instead approximately marginalized over by BP. Unrolling BP into a QT-NN {#sec:bptonn} ------------------------- For a given set of graphical model parameters $\theta$ and temperature $T$ we derive a feed-forward function that approximately resolves arbitrary inference queries by unrolling the parallel BP equations for $N$ iterations. First, we combine the available evidence $\bm{v}$ and the query $\bm{q}$ into a set of unary factors. Unary factors specify a probability density function over a variable. Therefore, for each dimension inside $\bm{v}$ that $\bm{q}$ labels as “input”, we provide a (Dirac or Kronecker) delta centered at the value of that dimension. For the “output” dimensions and hidden variables $\bm{h}$ we set the unary factor to an uninformative, uniform density. Finally, soft evidence, if present, can be incorporated through the appropriate density function. 
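For binary variables this construction can be sketched in a few lines, using a logit encoding of the unary factors (a minimal illustration; the function name and the use of a large finite logit to approximate the delta are our assumptions, not the authors' implementation):

```python
import numpy as np

def build_unary_factors(v, q, n_hidden, big=20.0):
    """Encode evidence v and query mask q as logit-space unary factors.

    v: (n_visible,) array of binary values {0, 1}.
    q: (n_visible,) mask; 1 = "input" (observed), 0 = "output" (to predict).
    Hidden variables never receive evidence.
    Returns u of length n_visible + n_hidden: +/-big approximates a delta
    on the observed value, 0 encodes the uninformative (uniform) factor.
    """
    u_vis = np.where(q == 1, big * (2.0 * v - 1.0), 0.0)
    u_hid = np.zeros(n_hidden)      # hidden units are always uninformative
    return np.concatenate([u_vis, u_hid])

# observe the first two visible variables, query the third; two hidden units
u = build_unary_factors(np.array([1, 0, 1]), np.array([1, 1, 0]), n_hidden=2)
# u = [20., -20., 0., 0., 0.]
```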
The result of this process is a unary vector of factors $\bm{u}$ that only contains informative densities about the inputs and whose dimensionality is the sum of the dimensionalities of $\bm{v}$ and $\bm{h}$. Each dimension of $\bm{u}$ will be a real number for binary variables (which we encode in the logit space), and a full distribution in the general case. Once $\bm{v}$ and the query $\bm{q}$ are encoded in $\bm{u}$, we can write down the equations of parallel BP over iterations as an NN with $N$ layers, i.e., the QT-NN. To simplify notation, let us consider a PGM that contains only pairwise factors. By mapping the messages to the log-space, the predictions of the QT-NN and the messages from each layer to the next can be written as $$\begin{aligned} \hat{\bm{v}}_i &= \operatorname{softmax}\Big(\theta_i + u_i + \sum_k m_{ki}^{(N)}\Big)&~~~~~ m_{ij}^{(n)} &= f_{\theta_{ij}}\Big(\theta_i + u_i + \sum_{k\neq j} m_{ki}^{(n-1)} ; T\Big)&~~~~~ m_{ij}^{(0)} &= 0&\\ \hat{\bm{v}} &= g_\theta(\bm{m}^{(N)}, \bm{u})&~~~~~ \bm{m}^{(n)} &= f_\theta(\bm{m}^{(n-1)}, \bm{u}; T)&~~~~~ \bm{m}^{(0)} &= 0,\end{aligned}$$ where the second row expresses the same equations as the first row, but in vectorized format. Here $\bm{m}^{(n)}$ collects all the messages[^5] that exit layer $n-1$ and enter layer $n$. Messages have direction, so $m_{ij}^{(n)}$ is different from $m_{ji}^{(n)}$[^6]. Observe how the input term $\bm{u}$ is re-fed at every layer. The output of the network is a belief $\hat{\bm{v}}_i$ for each variable $i$, which is obtained by a softmax in the last layer. All these equations follow from unrolling BP over iterations, with its messages encoded in log-space. The portion of the parameters $\theta$ relevant to the factor between variables $i$ and $j$ is represented by $\theta_{ij} = \theta_{ji}$, and the portion that only affects variable $i$ is contained in $\theta_i$. Observe that all layers share the same parameters. 
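These updates can be made concrete with a small toy instantiation (ours, not the authors' code): a pairwise binary MRF on a chain, sum-product messages in log-odds space, $T=1$, evidence injected through the unary logits $u$, and the CE evaluated only on the queried outputs. Because the graph is a tree, the unrolled network reproduces the exact conditional marginals once the number of layers reaches the graph diameter, which we verify by brute-force enumeration:

```python
import numpy as np
from itertools import product

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def qtnn_forward(theta, J, u, n_layers):
    """Unrolled parallel sum-product BP with log-odds messages (T = 1).

    theta: (n,) unary parameters; J: (n, n) symmetric couplings (0 = no edge);
    u: (n,) evidence logits encoding the query (0 = uninformative).
    Returns per-variable beliefs p(x_i = 1 | evidence).
    """
    n = len(theta)
    m = np.zeros((n, n))                       # m[i, j]: message i -> j
    for _ in range(n_layers):
        new_m = np.zeros((n, n))
        for i, j in product(range(n), repeat=2):
            if i == j or J[i, j] == 0.0:
                continue
            # evidence plus incoming messages at i, excluding j's message
            pre = theta[i] + u[i] + m[:, i].sum() - m[j, i]
            new_m[i, j] = np.log1p(np.exp(pre + J[i, j])) - np.log1p(np.exp(pre))
        m = new_m                              # all layers share theta and J
    return sigmoid(theta + u + m.sum(axis=0))

# Toy chain x0 - x1 - x2; observe x0 = 1 (logit ~ +30), predict x1 and x2.
theta = np.array([0.3, -0.2, 0.1])
J = np.zeros((3, 3))
J[0, 1] = J[1, 0] = 1.2
J[1, 2] = J[2, 1] = -0.7
u = np.array([30.0, 0.0, 0.0])                 # query: x0 input, x1/x2 output
beliefs = qtnn_forward(theta, J, u, n_layers=5)

# brute-force conditional marginals for comparison (chain => BP is exact)
states = [s for s in product([0, 1], repeat=3) if s[0] == 1]
w = np.array([np.exp(np.dot(theta, s) + 0.5 * np.array(s) @ J @ np.array(s))
              for s in states])
exact = np.array([sum(w[k] for k, s in enumerate(states) if s[i] == 1)
                  for i in range(3)]) / w.sum()

# the QT loss: cross-entropy restricted to the "output" variables
v_true = np.array([1.0, 1.0, 0.0])
out = np.array([False, True, True])
ce = -(v_true * np.log(beliefs) + (1 - v_true) * np.log(1 - beliefs))[out].mean()
```

In the full method the same messages would flow through the functions $f_{\theta_{ij}}$ of whichever model is being trained, with gradients of the CE taken through all (shared-parameter) layers.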
The functions $f_{\theta_{ij}}(\cdot)$ are directly derived from $\phi(\bm{x};\theta)$ using the BP equations, and therefore inherit its parameters. Finally, parameter $T$ is the “temperature” of the message passing, and can be set to $T=1$ to retrieve the standard sum-product belief propagation or to $T=0$ to recover max-product belief revision. Values in-between interpolate between sum-product and max-product and increase the flexibility of the NN. See the Supplementary Material for the precise equations obtained when applied to three popular undirected models used in our experiments. Connection with pseudo-likelihood --------------------------------- If the distribution over queries only contains queries with a single variable assigned as output (and the rest as input), and there are no hidden variables, the above cost function reduces to pseudo-likelihood training [@besag1975statistical]. Query training is superior to pseudo-likelihood (PL) in two ways. Firstly, it provides an explicit mechanism for handling hidden variables. Secondly, and more importantly, it preserves learning in the face of high correlations in the input data, which cause catastrophic failure when using PL. If two variables $a$ and $b$ are highly correlated, PL will fail to learn the weaker correlation between $a$ and a third variable $z$, since $b$ will always be available during training to predict $a$, rendering any correlation with $z$ useless at training time. If at test time we want to predict $a$ from $z$ because $b$ is not available, the prediction will fail. In contrast, query training removes multiple variables from the input, driving the model to better leverage all available sources of information. Experiments {#sec:experiments} =========== Early works in learning undirected PGMs relied on contrastive energies [@hinton2002training; @welling2005learning]. More recent approaches are NVIL [@kuleshov2017neural] and AdVIL [@advil], with the latter being regarded as superior. 
We will use it as our main benchmark. In this section we first test QT as an approach for learning and querying from undirected PGMs, comparing it with AdVIL on 3 different types of undirected PGMs (using both discrete and continuous variables) and a total of 10 datasets. These models and datasets are exactly the ones used in AdVIL’s paper [@advil]. Then we try QT on a challenging grid Markov random field. Comparison with AdVIL --------------------- All models are trained using a uniform random query distribution, i.e., in each SGD step, each variable is independently assigned as input or output with 0.5 probability. We report the test normalized cross-entropy (NCE) in bits to measure generalization to new data *and new queries*. We normalize by the number of predicted variables, so the NCE is the average CE per-variable. The query distribution at test time is the same as the training one, but the actual queries are with high probability unseen in training. Experiments are run on a single Tesla V100 GPU. The QT-NN equations for the models in this section (which follow from unfolding BP on each PGM) and code to reproduce our results can be found in the Supplementary Material. ### Restricted Boltzmann machine (RBM) {#sec:rbm} We first test QT on what is arguably the simplest undirected model, the RBM. The log-probability of an RBM[^7] is proportional to $\phi({\bm v}, {\bm h}; \theta) = 2{\bm h}^\top W{\bm v} + {\bm h}^\top (\bm{c}_H-W\bm{1}_V) + {\bm v}^\top (\bm{c}_V-W^\top\bm{1}_H)$. We use exactly the same datasets and preprocessing used in the AdVIL paper, with the same hidden layer sizes; see [@advil] for further details. Since all the variables are binary, a trivial uniform undirected model would result in an NCE of 1 bit. We also include results from persistent contrastive divergence (PCD), which is known to be very competitive for RBM training [@tieleman2008training; @marlin2010inductive]. In fact, AdVIL does not do much better than PCD on this model. 
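One way to read the RBM parameterization above (an observation we add for illustration, not a claim from the paper): up to an additive constant $-\tfrac{1}{2}\bm{1}_H^\top W \bm{1}_V$, it is the symmetric $\pm 1$-spin form of the RBM energy, which is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
nh, nv = 4, 6
W = rng.normal(size=(nh, nv))
cH = rng.normal(size=nh)
cV = rng.normal(size=nv)

def phi(v, h):
    """phi(v,h) = 2 h^T W v + h^T (c_H - W 1_V) + v^T (c_V - W^T 1_H)."""
    return (2.0 * h @ W @ v + h @ (cH - W.sum(axis=1))
            + v @ (cV - W.sum(axis=0)))

def phi_spin(v, h):
    """Same energy in +/-1 spins s = 2x - 1, dropping additive constants."""
    sv, sh = 2.0 * v - 1.0, 2.0 * h - 1.0
    return 0.5 * sh @ W @ sv + h @ cH + v @ cV

# phi - phi_spin is the same constant for every binary configuration
diffs = []
for _ in range(10):
    v = rng.integers(0, 2, nv)
    h = rng.integers(0, 2, nh)
    diffs.append(phi(v, h) - phi_spin(v, h))
# every entry equals -0.5 * W.sum()
```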
Computing the test NCE for QT is equivalent to running the trained QT-NN on test data, since that is its loss function. PCD and AdVIL, however, do not provide any mechanism to solve arbitrary inference queries, so we resort to Gibbs sampling in the learned model, which is much slower. Alternatively, we also tried copying the RBM weights learned by PCD and AdVIL into the QT-NN with $T=1$ and report those results as PCD-BP and AdVIL-BP. For AdVIL we use the code provided by the authors. For PCD and QT the validation set is used to choose the learning rate and for early stopping, separately for each dataset. The learning rates considered are $\{0.03, 0.1, 0.3, 1, 3\}$ for PCD and $\{0.001, 0.003, 0.01, 0.03\}$ for QT. For PCD we use [scikit-learn]{} [@scikit-learn]. For QT we unfold BP in $N=10$ layers and use ADAM [@paszke2019pytorch; @kingma2014adam] with minibatches of size 500. The $T$ parameter is learned during training. The results are shown in Table \[tab:rbmdbm\_results\]. Results are averaged over 5 independent runs. QT produces significantly better results for most datasets (marked in boldface), showing that it has learned to generalize to new probabilistic queries on unseen data. ### Deep Boltzmann machine (DBM) For our second experiment, we use a slightly more sophisticated undirected PGM, a DBM with two hidden layers. The log-probability of a DBM[^8] is proportional to $\phi({\bm v}, {\bm h}; \theta) = 2{\bm h_2}^\top W_{H_2 H_1}{\bm h_1} + 2{\bm h_1}^\top W_{H_1 V}{\bm v} + {\bm h_2}^\top (\bm{c}_{H_2}-W_{H_2 H_1}\bm{1}_{H_1}) + {\bm v}^\top (\bm{c}_V-W_{H_1 V}^\top\bm{1}_{H_1}) + {\bm h_1}^\top (\bm{c}_{H_1}-W_{H_2 H_1}^\top\bm{1}_{H_2} - W_{H_1 V}\bm{1}_V)$. The datasets are the same as for the RBM. The DBM structure (number of units in each hidden layer) for each dataset is identical to that of [@advil], and we reuse exactly the same process and parameters as in the previous experiment. The results are summarized in Table \[tab:rbmdbm\_results\]. 
QT produces the best performance (marked in boldface) on 8 out of the 9 tested datasets. It is interesting to note that, although inference becomes more challenging for DBM than for RBM (as demonstrated by the generally higher NCEs on most datasets for AdVIL-BP and AdVIL-Gibbs), the performance of QT remains essentially unchanged for DBM as compared with RBM. This suggests that QT not only learns to generalize to new probabilistic queries on unseen data, but also remains highly effective for models where inference becomes more challenging. ### Gaussian Restricted Boltzmann machine (GRBM) {#sec:grbm} Finally, we compare QT with AdVIL on learning a GRBM on the Frey faces dataset. The log-probability of a GRBM is proportional to $\phi({\bm v}, {\bm h}; \theta) = -\frac{1}{2 \sigma^2} \|{\bm v}- {\bm b}\|^2 + {\bm c}^\top {\bm h} + \frac{1}{\sigma} {\bm h}^\top W {\bm v}$. The hidden units are binary and the visible units are continuous. We follow [@advil], and fix $\sigma=1$. The dimensions of ${\bm h}$ and ${\bm v}$ are respectively 200 and 560 (corresponding to a $28\times 20$ image). We train AdVIL using the authors’ provided code. For QT we unfold BP in $N = 50$ layers and use ADAM with learning rate $5\times10^{-3}$. In the GRBM, BP will send continuous messages to the visible units, and we follow the standard expectation propagation practice [@minka2001expectation] of characterizing those messages as Gaussians. This results in deterministic and differentiable message updates. For further details, see the Supplementary Material. Since no quantitative results are provided in [@advil], we create our own by using the preexisting train-test split of the dataset (1572 and 393 images, respectively). Since there is no validation split, for QT we simply stop training after 50 epochs (training performance has plateaued and we see no signs of overfitting). For AdVIL, we train for 100,000 iterations and save a checkpoint every 100 iterations. 
We observe overfitting, so we decided to give the competing method an additional advantage and report the results of the checkpoint with the best test set performance. As in the previous experiments, the results for AdVIL are obtained both by using Gibbs sampling and by transferring its learned weights to a QT-NN in which we run $N = 50$ BP iterations. The NCEs for the models are AdVIL-Gibbs: $1.545$, AdVIL-BP: $1.542$, and QT: $\mathbf{1.503}$. The NCE of a trivial independent Gaussian model using the empirical mean and variance of the training pixels is $1.909$. Again, QT produces better results than AdVIL, showing that it can be applied to continuous PGMs. [**Testing on significantly different queries.**]{} In addition, we assess the ability of QT to answer inference queries whose statistics are significantly different from those seen at training time. In all the models in this section, we train by assigning each variable as input or output with 0.5 probability. We will now, at test time, make queries on images to complete contiguous patches of $5 \times 5$ pixels given the rest of the image. The NCEs for the models are: AdVIL-Gibbs: $1.525$, AdVIL-BP: $1.530$, and QT: $\mathbf{1.493}$. The NCE of the trivial model is $1.889$. Qualitatively, we find that QT is able to produce image completions that are almost indistinguishable from the original images (shown in Fig. \[fig:nips\_figures\]a). ![(a) QT can accurately complete masked regions of an image even though it was trained for significantly different queries. (b) An 8-connected cloned Markov random field. Identical factors are shown using the same color. Actual size is $30\times 30$. 
(c) Two examples of training pairs: a noisy input digit and its corresponding ground truth segmentation. (d) Test data: noisy input digits and their inferred segmentation by the QT-NN (which is obtained by unrolling the GMRF model).[]{data-label="fig:nips_figures"}](nips_figures.pdf){width="\textwidth"} This shows the PGM-like flexibility of QT, answering queries on blocks that it was never trained for. Of course, large departures from the statistics of the training queries can result in poor predictions. Grid Markov random field (GMRF) ------------------------------- We consider the challenging problem of using QT to learn an 8-connected, grid-arranged Markov random field (MRF) *with hidden variables*, as shown in Fig. \[fig:nips\_figures\]b. Although the models explored so far also had hidden variables, they did not have direct connections—as is the case now—which make the model more loopy and learning more challenging. Grid MRFs are often used in image processing applications [@li2009markov], but the MRF variables are always observed when learning the factor parameters. In our case, the MRF variables are hidden, and they emit the pixel labels through a noisy channel, with multiple hidden states mapping to the same pixel labels. To the best of our knowledge, QT is the first method that can learn the full parameterization of an 8-connected grid MRF without access to the MRF variables. Although irrelevant for our purpose of learning a challenging undirected PGM, the proposed model is a simple incarnation of visual neuroscience principles for foreground–background segmentation. Further details about the model and its neuroscience motivation are presented in the Supplementary Material. ### The border ownership dataset, model, and task The border ownership dataset is provided with the Supplementary Material and is derived from the MNIST dataset[^9] [@lecun2010mnist]. It is structured as pairs of noisy contour images and `CONTOUR`-`IN`-`OUT` labels. 
Two examples are displayed in Fig. \[fig:nips\_figures\]c. The contours are missing with probability 0.2, whereas each image incorporates 8 spurious random edges of length 3 pixels. Each image is of size $30 \times 30$ pixels. The images have one-to-one correspondence with the MNIST dataset, so 60,000 images are available for training and 10,000 images are used for testing. The structure of the MRF is shown in Fig. \[fig:nips\_figures\]b. Each node is a categorical variable with 66 hidden states: 64 correspond to the label `CONTOUR`, 1 to `IN`, and 1 to `OUT`. The vertical connections are a noisy channel. There are only 4 distinct pairwise factors (different colors in Fig. \[fig:nips\_figures\]b). The task is to recover the hidden labels from the noisy image. First, observe that the incomplete contours and the spurious edges make the task of foreground–background segmentation non-trivial. Second, observe that the labels do *not* provide the hidden states. In particular, which of the 64 clones of `CONTOUR` is appropriate for each pixel is unknown, and the use of multiple clones is required to properly solve the task, since the potentials are only local pairwise connections and long-range information is needed. ### Results The model is trained using QT. The input and output variables in this case are not randomized, but fixed throughout training and testing: the evidence is always the noisy binary image and the target is always the noiseless ternary segmentation. We unroll BP for $N=15$ layers, use ADAM with a learning rate of $10^{-2}$ with minibatches of 50 images and run learning for 10 epochs on a single Tesla V100 GPU. The temperature parameter is fixed at $T=1$ throughout learning. The results of segmentation from noisy test data are shown in Fig. \[fig:nips\_figures\]d for several example digits. Pixels decoded as `CONTOUR`, `IN`, `OUT` are respectively in red, pale blue, and pale yellow. Qualitatively, the recovery looks almost perfect. 
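The decoding just described, which collapses the 64 `CONTOUR` clones into a single label before taking the per-pixel argmax, can be sketched as follows (our illustrative reconstruction; function names and array shapes are assumptions, not the authors' code):

```python
import numpy as np

N_CLONES = 64                        # hidden states 0..63 are CONTOUR clones
# state 64 is IN, state 65 is OUT

def decode_labels(beliefs):
    """Collapse per-pixel beliefs over 66 states into 3 segmentation labels.

    beliefs: (H, W, 66) array of per-pixel marginals.
    Returns an (H, W) integer map: 0 = CONTOUR, 1 = IN, 2 = OUT.
    """
    p_contour = beliefs[..., :N_CLONES].sum(axis=-1)   # sum over the clones
    p_in = beliefs[..., N_CLONES]
    p_out = beliefs[..., N_CLONES + 1]
    return np.argmax(np.stack([p_contour, p_in, p_out], axis=-1), axis=-1)

# tiny example: first pixel's mass is spread over the clones, second is IN
b = np.zeros((1, 2, 66))
b[0, 0, :64] = 0.9 / 64
b[0, 0, 64], b[0, 0, 65] = 0.06, 0.04
b[0, 1, 64] = 1.0
labels = decode_labels(b)            # [[0, 1]] -> [CONTOUR, IN]
```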
Quantitatively, Table \[tab:CMRF\] presents the intersection over union (IoU) between the estimated and real foreground[^10], and the NCE of a GMRF trained with QT and of a random baseline. QT achieves a nearly perfect digit recovery. We have tried to train the GMRF using other alternatives without success. In particular, AdVIL requires designing three new encoder networks and one decoder network for this model. Our network designs have failed to produce any meaningful results in a reasonable amount of time. Discussion and future work {#sec:future} ========================== Query training is a general approach to learn to infer when the inference target is unknown at training time. It offers the following advantages: (a) a systematic approach to convert any PGM, even with hidden variables, into an inference network that supports flexible querying (see Section \[sec:grbm\]); (b) no need to estimate the partition function (the “sleep” phase), making it particularly applicable to undirected PGMs; (c) a simple learning and inference mechanism built in (e.g., AdVIL needed Gibbs sampling to answer our queries); (d) learning can correct for BP inaccuracies or too few inference steps, potentially making the QT-NN faster and/or more accurate than BP itself. Why would QT generalize to new queries or scale well? The worry is that only a small fraction of the exponential number of potential queries is seen during training. The *existence* of a single inference network that works well for many different queries follows from the existence of a single PGM in which BP can approximate inference. The *discoverability* of such a network from limited training data is not guaranteed. However, there is hope for it, since the amount of training data required to adjust the model parameters should scale with the number of parameters, and not with the number of potential queries. 
Just like training data should come from the same distribution as test data, the training queries must come from the same distribution as the test queries to avoid “query overfitting”. Interesting avenues for exploration are model sampling and other inference mechanisms, such as VI. Broader Impact {#broader-impact .unnumbered} ============== Developing efficient learning mechanisms for undirected graphical models results in a) a larger breadth of PGMs being available to model a given phenomenon or machine learning problem, which in turn produces b) superior results when undirected graphical models are more suitable. This can result in superior performance of AI systems, reduced training times, reduced computing costs, and reduced environmental pollution. As with any technology, negative consequences are possible but difficult to predict at this time. This is not a deployed system with immediate failure consequences, nor one that can by itself leverage harmful biases, although these are possible when integrated into more complex systems. Hidden activations learned by the 8-connected GMRF ================================================== ![Best viewed on screen with zoom. Each color corresponds to a different inferred hidden contour clone. The model has learned *with no supervision* to capture the local orientation of the pixel (which also reveals on which side the foreground is) as the best way to solve the denoising task.[]{data-label="fig:bomnist_results"}](clone_interpretation.pdf){width="84.00000%"} Visual neuroscience and model details for the GMRF ================================================== The purpose of this model is to perform some rudimentary foreground–background segmentation from noisy cues of the edges of the foreground, see Fig. \[fig:bomnist\]. 
There is abundant literature supporting border ownership as a mechanism for foreground–background segmentation [@o2009short; @o2013remapping; @zhaoping2005border; @zhou2000coding], and the use of “clones” for representing higher-order dependencies [@hawkins2009sequence; @xu2016representing; @cormack1987data]. A simplified model based on these principles is presented in Fig. \[fig:gmrf\]. It has two types of variables: pixel labels (white, hidden, categorical with 66 categories) and pixel intensities (gray, observed, binary). The pixel labels categorize a pixel as belonging to the `OUT` (one category), `IN` (one category), or `CONTOUR` (64 distinct categories) of the foreground surface. In the noiseless case, the emission factors (vertical) are deterministic: the `IN` and `OUT` pixel labels produce pixel intensity 0, whereas the 64 remaining categories (`CONTOUR`) produce pixel intensity 1. The point of having 64 apparently identical categories, all of which generate a contour (these are the “clones”), is that those hidden labels can specialize and learn higher-order properties of the contour, such as its orientation and the side of the foreground on which they are located. Thus, a pixel that is part of a horizontal contour will have a different pixel label than another pixel that is part of a vertical contour. Now, a `CONTOUR` pixel whose hidden label is “bottom horizontal” is likely to turn on other “bottom horizontal” pixels on its left and right and `IN` pixels on top of it. This allows the model to represent the long-range interactions required for foreground–background segmentation using only pairwise factors. Note that these labels are just one possible specialization that training can discover; they are never provided to the system. The hidden variables are entirely unsupervised. Due to its arrangement on a grid, we call this model the grid MRF (GMRF).
![Two examples of training pairs: a noisy input digit and its corresponding ground truth segmentation[]{data-label="fig:bomnist"}](digit.pdf){width=".5\textwidth"}

![An 8-connected grid Markov random field. Identical factors are shown using the same color. Actual size, $30\times 30$.[]{data-label="fig:gmrf"}](cmrf_model3.pdf){height="2in"}

#### Model details:

First, let us consider the lateral connections between pixel labels in Fig. \[fig:gmrf\]. The model is fully convolutional, meaning that there are only 4 different types of potentials: up–down (green), left–right (yellow), principal diagonal (orange), and secondary diagonal (purple). These are color-coded in Fig. \[fig:gmrf\] and extend in every direction to accommodate any GMRF size. This means that the model will perform the same segmentation regardless of where in the image the surface is presented (barring edge effects). These factors connect pairs of categorical variables with 66 values, are fully parametric, and are learned by QT. The vertical connections (emission factors) are either known a priori or easy to estimate, so they are fixed during QT learning. In the noiseless case, they deterministically map to 0 or 1 as described above. In the noisy case, we have 3 noise parameters that determine the probability of generating a 1 conditioned on the label type being `CONTOUR`, `IN` or `OUT`. If we have access to ground truth segmentations and noisy images (as is the case in our training data), those 3 parameters can be estimated in closed form. To handle segmentation from noisy images, it is useful to think of this model as having both the noisy emission factors (producing the noisy image) and noiseless emission factors that emit the segmentation categories: `CONTOUR`, `IN` and `OUT`. We will condition on the first and target the second.
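As an illustration of this closed-form estimate, the following sketch recovers the three noise parameters by simple counting from synthetic (label map, noisy image) pairs; the label coding and the particular noise values are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth label type per pixel: 0 = OUT, 1 = IN, 2 = CONTOUR.
labels = rng.integers(0, 3, size=(100, 30, 30))

# True P(pixel intensity = 1 | label type): the noisy emission model.
p_true = np.array([0.05, 0.10, 0.90])   # OUT, IN, CONTOUR

# Generate noisy images from the emission model.
images = (rng.random(labels.shape) < p_true[labels]).astype(int)

# Closed-form (maximum-likelihood) estimate: the empirical frequency of
# intensity 1 within each label type.
p_hat = np.array([images[labels == t].mean() for t in range(3)])
```

With enough pixels per label type, `p_hat` converges to the generating probabilities, which is why these factors can be fixed before QT learning of the lateral potentials.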
Details of experimental results
===============================

Restricted Boltzmann machine (RBM) {#restricted-boltzmann-machine-rbm}
----------------------------------

Deep Boltzmann machine (DBM)
----------------------------

Gaussian restricted Boltzmann machine (GRBM) {#gaussian-restricted-boltzmann-machine-grbm}
--------------------------------------------

  Method        Exp. 1   Exp. 2
  ------------- -------- --------
  AdVIL-Gibbs            
  AdVIL-BP               
  QT (Ours)              

Grid Markov random field (GMRF)
-------------------------------

  Method                                                    IOU                    NCE
  --------------------------------------------------------- ---------------------- --------
  <span style="font-variant:small-caps;">QT (Ours)</span>   $0.28$                 $0.34$
  <span style="font-variant:small-caps;">Random</span>      $9.0 \times 10^{-3}$   $0$

  : Standard deviation of the mean for GMRF results in the main paper. The <span style="font-variant:small-caps;">Random</span> approach is fully deterministic, so there is no difference in results between independent runs.[]{data-label="tab:GMRF"}

QT-NN architectures, derived for each PGM class
===============================================

Here we derive efficient and numerically robust forms of the BP update equations for each of our models. These form the QT-NNs used and trained in the main paper.

Restricted Boltzmann machine (RBM) {#sec:rbmcase}
----------------------------------

![image](transfer.pdf){width="\linewidth"}

#### RBM potential:

We consider the case where the underlying PGM is a binary RBM with $H$ hidden units and $V$ visible units. We will use a slightly different parameterization (a linear transformation of the standard one) to simplify the form of the obtained transfer function.
Thus, we set $$\label{eqn:rbm} \phi({\bm v}, {\bm h}; \theta) = 2{\bm h}^\top W{\bm v} + {\bm h}^\top (\bm{c}_H-W\bm{1}_V) + {\bm v}^\top (\bm{c}_V-W^\top\bm{1}_H)$$ Note that the above potential can be expressed as: $$\begin{aligned} \begin{split} \phi({\bm v}, {\bm h}; \theta) &= \frac{1}{2} (2{\bm h} - \bm{1}_H)^\top W (2 {\bm v} - \bm{1}_V) + \frac{1}{2} (2{\bm h} - \bm{1}_H)^\top \bm{c}_H + \frac{1}{2} (2{\bm v} - \bm{1}_V)^\top \bm{c}_V \\ & ~~~~ - \frac{1}{2} \bm{1}_H^\top W\bm{1}_V +\frac{1}{2} \bm{c}_H^\top \bm{1}_H +\frac{1}{2} \bm{c}_V^\top \bm{1}_V. \end{split}\end{aligned}$$ We can therefore define $\tilde{\bm h} = 2{\bm h} - \bm{1}_H~;~\tilde{\bm v} = 2{\bm v} - \bm{1}_V$ and consider the binary RBM where the hidden and visible variables take their values in $\{-1, 1\}$ and which has the potential: $$\tilde{\phi}(\tilde{\bm v}, \tilde{\bm h}; \theta) = \frac{1}{2} \tilde{\bm h}^\top W \tilde{\bm v} + \frac{1}{2} \tilde{\bm h}^\top {\bm c}_H + \frac{1}{2} \tilde{\bm v}^\top {\bm c}_V.$$ $\phi$ and $\tilde{\phi}$ only differ by a constant that depends on the model parameters alone, which is cancelled out by the partition function. The two RBM models are therefore equivalent. Three kinds of potentials are involved in $\tilde{\phi}$:

$\bullet$ potentials between hidden and visible variables: $\psi_{ij}(\tilde{h}_i, \tilde{v}_j) = \exp \left( \frac{1}{2} ~ w_{ij} ~ \tilde{h}_i ~ \tilde{v}_j \right), ~ \forall i,j$.

$\bullet$ potentials associated with hidden variables: $\kappa_{i}(\tilde{h}_i) = \exp \left( \frac{1}{2} ~ c_{H, i} ~ \tilde{h}_i \right), ~ \forall i$.

$\bullet$ potentials associated with visible variables: $\theta_{j}(\tilde{v}_j) = \exp \left( \frac{1}{2} ~ c_{V, j} ~ \tilde{v}_j \right), ~ \forall j$.

#### BP updates:

We denote by $N_{HV} \in \mathbb{R}_+^{H\times V \times 2}$ (resp. $N_{VH} \in \mathbb{R}_+^{V\times H \times 2}$) the messages going from the visible variables to the hidden variables (resp. from the hidden variables to the visible variables).
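The parameterization equivalence above is easy to check numerically: for random binary inputs, $\phi(\bm v, \bm h)$ and $\tilde{\phi}(2\bm v - \bm{1}_V, 2\bm h - \bm{1}_H)$ differ by the same constant $-\frac{1}{2}\bm{1}_H^\top W\bm{1}_V + \frac{1}{2}\bm{c}_H^\top \bm{1}_H + \frac{1}{2}\bm{c}_V^\top \bm{1}_V$ regardless of $(\bm v, \bm h)$. A sketch (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H, V = 3, 4
W = rng.normal(size=(H, V))
cH = rng.normal(size=H)
cV = rng.normal(size=V)

def phi(v, h):
    """Potential in the {0, 1} parameterization."""
    return (2 * h @ W @ v + h @ (cH - W @ np.ones(V))
            + v @ (cV - W.T @ np.ones(H)))

def phi_tilde(vt, ht):
    """Potential in the {-1, 1} parameterization."""
    return 0.5 * (ht @ W @ vt + ht @ cH + vt @ cV)

diffs = []
for _ in range(20):
    v = rng.integers(0, 2, V).astype(float)
    h = rng.integers(0, 2, H).astype(float)
    diffs.append(phi(v, h) - phi_tilde(2 * v - 1, 2 * h - 1))
# All differences are equal: the two potentials differ by a constant only.
```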
For an observed vector $\mathbf{x} \in \{-1, 1\}^V$ and a query vector $\mathbf{q} \in \{0, 1\}^V$, we denote by $\phi(v_i, x_i)$ the bottom-up message to the $i$th visible variable. We have $\phi(-1, x_i)=\phi(1, x_i)=0.5$ if $x_i$ is being queried ($q_i = 0$); and $\phi(v_i, x_i) = 1$ if $v_i=x_i$ and $0$ otherwise if $x_i$ is being observed ($q_i=1$). It therefore holds: $\phi(-1, x_i) + \phi(1, x_i) =1$. All hidden variables are considered as being queried. For the sake of simplicity, we first derive the BP updates for the sum-product BP algorithm with a parallel schedule, before generalizing to a general temperature $T$. The message going from the $j$th visible variable to the $i$th hidden variable can be expressed as $$\label{BP-real-space} n^{ij}_{HV}(\tilde{h}_i) ~~\propto \sum_{\tilde{v}_j \in \{-1, 1\} } \psi_{ij}(\tilde{h}_i, \tilde{v}_j) \phi(\tilde{v}_j, x_j) \theta_j(\tilde{v}_j) \prod_{k \ne i} n^{jk}_{VH}(\tilde{v}_j), ~~ \forall \tilde{h}_i \in \{-1, 1\},$$ which is equivalent to $$\begin{aligned} \label{BP-log-space} \begin{split} n^{ij}_{HV}(\tilde{h}_i) ~~\propto ~~ & \psi_{ij}(\tilde{h}_i, -1) \phi(-1, x_j) \theta_j(-1) \prod_{k \ne i} n^{jk}_{VH}(-1) \\ &+ \psi_{ij}(\tilde{h}_i, 1) \phi(1, x_j) \theta_j(1) \prod_{k \ne i} n^{jk}_{VH}(1), ~~~ \forall \tilde{h}_i \in \{-1, 1\}. \end{split}\end{aligned}$$ The BP updates for messages going from the hidden variables to the visible ones can be expressed in a similar fashion. Numerical stability can be encouraged with the following two steps. First, we normalize messages: $$n^{ij}_{HV}(1) + n^{ij}_{HV}(-1) = 1.$$ Second, we map the messages to the logit space. Each message can then be represented by a single float. We denote $M_{HV} \in \mathbb{R}^{H\times V}$ and $M_{VH} \in \mathbb{R}^{V\times H}$ the messages in the logit space.
The sum-product BP updates in Equation are equivalent to: $$\begin{aligned} m^{ij}_{HV} &= \text{logit}\left( n^{ij}_{HV}(1) \right) = \log\left( n^{ij}_{HV}(1) \right) - \log\left(1 - n^{ij}_{HV}(1) \right) \\ &= \log\left( n^{ij}_{HV}(1) \right) - \log\left(n^{ij}_{HV}(-1) \right) \\ &= \log \left( \psi_{ij}(1, -1) \phi(-1, x_j) \theta_j(-1) \prod_{k \ne i} n^{jk}_{VH}(-1) + \psi_{ij}(1, 1) \phi(1, x_j) \theta_j(1) \prod_{k \ne i} n^{jk}_{VH}(1) \right)\\ &~~~ - \log \left( \psi_{ij}(-1, -1) \phi(-1, x_j) \theta_j(-1) \prod_{k \ne i} n^{jk}_{VH}(-1) + \psi_{ij}(-1, 1) \phi(1, x_j) \theta_j(1) \prod_{k \ne i} n^{jk}_{VH}(1) \right)\\ &= \log \left( \psi_{ij}(1, -1) \phi(-1, x_j) \theta_j(-1) + \psi_{ij}(1, 1) \phi(1, x_j) \theta_j(1) \prod_{k \ne i} \exp \left(m^{jk}_{VH} \right)\right)\\ &~~~ - \log \left( \psi_{ij}(-1, -1) \phi(-1, x_j) \theta_j(-1) + \psi_{ij}(-1, 1) \phi(1, x_j) \theta_j(1) \prod_{k \ne i} \exp \left(m^{jk}_{VH} \right) \right) \\ &= \log \left( 1 + \frac{\psi_{ij}(1, 1) \phi(1, x_j) \theta_j(1)} {\psi_{ij}(1, -1) \phi(-1, x_j) \theta_j(-1)} \prod_{k \ne i} \exp \left(m^{jk}_{VH} \right)\right) + \log \left(\psi_{ij}(1, -1) \phi(-1, x_j) \theta_j(-1) \right) \\ &~~~ - \log \left( 1 + \frac{\psi_{ij}(-1, 1) \phi(1, x_j) \theta_j(1)}{\psi_{ij}(-1, -1) \phi(-1, x_j) \theta_j(-1)} \prod_{k \ne i} \exp \left(m^{jk}_{VH} \right)\right) - \log \left(\psi_{ij}(-1, -1) \phi(-1, x_j) \theta_j(-1) \right)\\ &= \log \left( 1 + \exp \left\{ w_{ij} + u_j + c_{V, j} + \sum_{k \ne i} m^{jk}_{VH} \right\}\right) - \log \left( 1 + \exp \left\{ - w_{ij} + u_j + c_{V, j} + \sum_{k \ne i} m^{jk}_{VH} \right\}\right) \\ &~~~ - w_{ij}\\ &=f_{w_{ij}} \left( u_j + c_{V, j} + \sum_{k \ne i} m^{jk}_{VH} \right),\end{aligned}$$ where we have defined $f_w(x) = \log(1 + e^{x + w}) - \log(1 + e^{x - w}) -w$ and $u_j = \text{logit} ~ \phi(1, x_j)$[^11]. Note that we can simply express $u_j = \text{logit} \left( \frac{1 + x_j}{2} \right) q_j$.
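For $w > 0$, the transfer function $f_w$ behaves like a smooth clipping of its input to $[-w, w]$: it vanishes at $x = 0$, is monotonically increasing, and saturates at $\pm w$ for large $|x|$. A quick numerical sanity check of the definition above (moderate arguments are used to keep the naive form from overflowing):

```python
import numpy as np

def f(w, x):
    # f_w(x) = log(1 + e^{x+w}) - log(1 + e^{x-w}) - w
    return np.log1p(np.exp(x + w)) - np.log1p(np.exp(x - w)) - w

w = 1.5
xs = np.linspace(-20.0, 20.0, 401)
ys = f(w, xs)   # S-shaped curve saturating at -w and +w
```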
For numerical stability of the message updates, we use the property: $$\begin{aligned} f_w(x) &= \log(1 + e^{x + w}) - \log(1 + e^{x - w}) -w = \log(1 + e^{x + w}) - \log(e^w + e^{x})\\ &= \operatorname{sign}(w) x|_{-|w|}^{|w|} + \log(1+e^{-|x + w|}) - \log(1+e^{-|x - w|}).\end{aligned}$$ After running the BP updates for $N$ iterations, we compute the beliefs (in the logit space) for the $j$th visible variable $$\begin{aligned} b_{V, j} &= \log \left( \phi(1, x_j) \theta_j(1) \prod_{k=1}^H n^{jk}_{VH}(1) \right) - \log \left( \phi(-1, x_j) \theta_j(-1) \prod_{k=1}^H n^{jk}_{VH}(-1) \right)\\ &= u_j + c_{V, j} + \sum_{k=1}^H m^{jk}_{VH},\end{aligned}$$ and map the beliefs back to the real space by considering the sigmoid of this quantity.

#### BP summary:

We can summarize the architecture of the QT-NN for a general temperature $T$ by the following equations (which correspond to the unrolling of parallel BP over time using messages in logit space presented above): $$\begin{aligned} \bm{u}_V &= \operatorname{logit}\left( \frac{\bm{x} + 1}{2} \right)\circ\bm{q} &\nonumber\text{(unary term for visible units)}\\ \bm{u}_H &= \bm{0}_H &\nonumber\text{(unary term for hidden units)}\\ M_{HV}^{(0)} &= \bm{0}_{HV} &\nonumber\text{(init messages to 0)}\\ M_{VH}^{(0)} &= \bm{0}_{VH} &\nonumber\text{(init messages to 0)}\\ M_{HV}^{(n)} &= f_{W^\top}\left( \bm{u}_V \bm{1}_H^T + \bm{c}_V \bm{1}_H ^T + M_{VH}^{(n-1)} \bm{1}_{HH} - M_{VH}^{(n-1)} \right)^{\top}&\nonumber\text{(interlayer connection)}\\ M_{VH}^{(n)} &= f_{W} \left(\bm{u}_H \bm{1}_V^T + \bm{c}_H \bm{1}_V^T + M_{HV}^{(n-1)} \bm{1}_{VV} - M_{HV}^{(n-1)} \right)^\top &\nonumber\text{(interlayer connection)}\\ \hat{\bm{v}} &= \sigma \left( \bm{u}_V + \bm{c}_V + M_{VH}^{(N)}\bm{1}_H \right) &\nonumber\text{(output layer for visible)}\\ \hat{\bm{h}} &= \sigma\left(\bm{u}_H + \bm{c}_H + M_{HV}^{(N)}\bm{1}_V \right) &\nonumber\text{(output layer for hidden)},\end{aligned}$$ where $$\begin{aligned} \sigma(x) &= 1/(1+e^{-x}) &
\nonumber \\ \operatorname{logit}(x) &= \sigma^{-1}(x) = \log(x) - \log(1-x) &\nonumber\end{aligned}$$ $$\begin{aligned} f_w^{\text{MP}}(x) &= \operatorname{sign}(w) x|_{-|w|}^{|w|} &\nonumber\text{($a|_{b}^{c}$ truncates $a$ between $b$ and $c$, Fig.~\ref{fig:transfer} left)}\\ f_w(x) &= f_w^{\text{MP}}(x) + \operatorname{sp}(-|x + w|, T) - \operatorname{sp}(-|x - w|, T) &\nonumber\text{(Fig.~\ref{fig:transfer} right) }\\ \operatorname{sp}(x, T) &= T \log(1+e^{x/T})&\nonumber\text{(a.k.a. the softplus function)}.\end{aligned}$$ We have used the following notations:

- $\bm{0}_{HV}$ represents a matrix of zeros of size $H \times V$. Similarly, $\bm{1}_{HH}$ represents a matrix of ones of size $H \times H$, and $\bm{1}_{V}$ represents a matrix of ones of size $V \times 1$.

- When any of the above defined scalar functions is used with matrix arguments, the function is applied elementwise.

Some observations can be made:

- The Hadamard product with $\bm{q}$ effectively removes the information from the elements of $\bm{v}$ not present in the query mask, replacing them with 0, which corresponds to a uniform binary distribution in logit space.

- The output of the network is $\hat{\bm{v}}$ and $\hat{\bm{h}}$, the inferred probability of 1 for both the visible and hidden units. The output $\hat{\bm{h}}$ is inferred but not actually used during training.

- The computation of $f_w(x)$ as specified above is designed to be numerically robust. It starts by computing $f_w^{\text{MP}}(x)$, which would be the value of $f_w(x)$ for a temperature $T=0$, i.e., max-product message passing, and then performs a correction on top for positive temperatures.

Deep Boltzmann machine (DBM) {#sec:dbmcase}
----------------------------

In this section we consider the slightly more complicated case in which the underlying PGM is a binary DBM with $V$ visible units and two hidden layers with $H_1$ and $H_2$ units.
As in Section \[sec:rbmcase\], we use the slightly different parametrization $$\begin{aligned} \label{eqn:dbm} \begin{split} &\phi({\bm v}, {\bm h}; \theta)\\ = &2{\bm h_2}^\top W_{H_2 H_1}{\bm h_1} + 2{\bm h_1}^\top W_{H_1 V}{\bm v} + {\bm h_2}^\top (\bm{c}_{H_2}-W_{H_2 H_1}\bm{1}_{H_1}) + {\bm v}^\top (\bm{c}_V-W_{H_1 V}^\top\bm{1}_{H_1})\\ + & {\bm h_1}^\top (\bm{c}_{H_1}-W_{H_2 H_1}^\top\bm{1}_{H_2} - W_{H_1 V}\bm{1}_V) \end{split}\end{aligned}$$ We note that, because we are considering a DBM with just two hidden layers, we can reuse the equations derived in Section \[sec:rbmcase\] through some simple transformations. More concretely, define $$\bm{\tilde{h}} = \bm{h_1}, \bm{\tilde{v}} = \begin{bmatrix}\bm{v}\\ \bm{h_2}\end{bmatrix}, \tilde{W} = \begin{bmatrix}W_{H_1 V} & W_{H_2 H_1}^\top\end{bmatrix}, \bm{\tilde{c}}_H = \bm{c}_{H_1}, \bm{\tilde{c}}_{V} = \begin{bmatrix}\bm{c}_V\\ \bm{c}_{H_2}\end{bmatrix}$$ It is easy to see that plugging $\tilde{\bm{h}}, \tilde{\bm{v}}, \tilde{W}, \tilde{\bm{c}}_H, \tilde{\bm{c}}_V$ into Equation \[eqn:rbm\] recovers Equation \[eqn:dbm\]. As a result, the QT-NN equations derived in Section \[sec:rbmcase\] for the RBM apply equally well to the DBM with two hidden layers. The only changes needed are in the incoming messages and the queries. We use $\bm{v}, \bm{q}$ to denote the original observed values for the visible units and the query of interest.
The corresponding unary term for visible units for the DBM is defined as $\bm{u}_V = \operatorname{logit}(\bm{\tilde{v}})\circ\bm{\tilde{q}}$, where $$\bm{\tilde{v}} = \begin{bmatrix}\bm{v} \\ 0.5\bm{1}_{H_2}\end{bmatrix}\text{ and }\bm{\tilde{q}} = \begin{bmatrix}\bm{q} \\ \bm{1}_{H_2}\end{bmatrix}$$

Gaussian restricted Boltzmann machine (GRBM) {#sec:grbmcase}
--------------------------------------------

The log-probability of a GRBM is, up to an additive constant: $$\phi({\bm v}, {\bm h}; \theta) = -\frac{1}{2 \sigma^2} \|{\bm v}- {\bm b}\|^2 + {\bm c}^\top {\bm h} + \frac{1}{\sigma} {\bm h}^\top W {\bm v}$$ where $\sigma$ is fixed to $1$. To embed the passing of continuous messages in the QT-NN, we approximate continuous messages with Gaussian distributions. In the following equations, for every message to a visible unit, we store the two natural parameters of the Gaussian distribution corresponding to that message ($\epsilon$ below is a small constant: the variance assigned to an observed visible unit). $$\begin{aligned} \bm{u}_H &= \bm{0}_H &\nonumber\text{(unary term for hidden units)}\\ \bm{u}_{V\theta_1} &= \bm{v} \circ \bm{q} + \bm{b} \circ(\bm{1}_V - \bm{q}) &\nonumber\text{(param 1 for visible unary term)}\\ \bm{u}_{V\theta_2} &= -\frac{1}{2\epsilon} \cdot \bm{q} -\frac{1}{2} \cdot (\bm{1}_V - \bm{q}) &\nonumber\text{(param 2 of visible unary term)}\\ M_{HV}^{(0)} &= \bm{0}_{HV} &\nonumber\text{(init message from visible to hidden)}\\ M_{VH\theta_1}^{(0)} &= \bm{0}_{VH} &\nonumber\text{(init message from hidden to visible)}\\ M_{VH\theta_2}^{(0)} &= -\frac{1}{2} \cdot \bm{1}_{VH} &\nonumber\text{(init message from hidden to visible)}\\ C_{VH\theta_1}^{(n-1)} & =\bm{u}_{V\theta_1} + M_{VH\theta_1}^{(n-1)} \bm{1}_H - M_{VH\theta_1}^{(n-1)} &\nonumber\text{(param 1 of cavity)}\\ C_{VH\theta_2}^{(n-1)} & =\bm{u}_{V\theta_2} + M_{VH\theta_2}^{(n-1)} \bm{1}_H - M_{VH\theta_2}^{(n-1)} &\nonumber\text{(param 2 of cavity)}\\ M_{HV}^{(n)} &= f_{W^\top} (C_{VH\theta_1}^{(n-1)}, C_{VH\theta_2}^{(n-1)})^\top &\nonumber\text{(interlayer connection)}\\ C_{HV}^{(n-1)} &=
\bm{u}_{H} + \bm{c}_H + M_{HV}^{(n-1)} \bm{1}_V - M_{HV}^{(n-1)} &\nonumber\text{(cavity)}\\ M_{VH\theta_1}^{(n)},M_{VH\theta_2}^{(n)} &= g_{W} (C_{HV}^{(n-1)}, C_{VH\theta_1}^{(n-1)}, C_{VH\theta_2}^{(n-1)}) &\nonumber\text{(interlayer connection)}\\ \hat{\bm{v}}_{\theta_1} &= \bm{u}_{V\theta_1} + M_{VH\theta_1}^{(N)} \bm{1}_H &\nonumber\text{(output layer for visible)}\\ \hat{\bm{v}}_{\theta_2} &= \bm{u}_{V\theta_2} + M_{VH\theta_2}^{(N)} \bm{1}_H &\nonumber\text{(output layer for visible)}\\ \hat{\bm{h}} &= \sigma(\bm{u}_H + \bm{c}_H + M_{HV}^{(N)}\bm{1}_V) &\nonumber\text{(output layer for hidden)},\end{aligned}$$ where $$\begin{aligned} f_w (x, y) &= - \frac{2xw + w^2}{4y} &\nonumber \\ g_w (x, \theta_1, \theta_2) &= (b_1(w, x, \theta_1, \theta_2) - \theta_1, b_2(w, x, \theta_1, \theta_2) - \theta_2)& \nonumber\\ \sigma(x) &= 1/(1+e^{-x}) & \nonumber\end{aligned}$$ Notation clarifications:

- $\theta_1$ and $\theta_2$ denote the two natural parameters of a Gaussian distribution. In the case of $M_{VH\theta_1}$ and $M_{VH\theta_2}$, these are the parameters of the Gaussian distribution approximating the messages from hidden units to visible units.

- The functions $b_1$ and $b_2$ output the two natural parameters of a Gaussian distribution approximating the belief at a visible unit, as is described below.
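The functions $b_1$ and $b_2$ (given in closed form below) moment-match the two-component mixture belief $p(v) \propto \mathcal{N}(v; \mu, \sigma^2)\,(1 + e^{x + w v})$, whose components correspond to $h=0$ and $h=1$. The following sketch checks those closed forms against direct numerical integration on a grid (the parameter values are arbitrary):

```python
import numpy as np

def belief_moments(w, x, mu, var):
    """Closed-form mean/variance of p(v) ∝ N(v; mu, var) * (1 + exp(x + w v))."""
    E = np.exp(x + w * mu + 0.5 * var * w ** 2)   # relative weight of the h = 1 component
    m1 = (E * (mu + var * w) + mu) / (E + 1)
    m2 = (E * ((mu + var * w) ** 2 + var) + mu ** 2 + var) / (E + 1)
    return m1, m2 - m1 ** 2

# Direct integration of the same unnormalized density on a fine grid.
w, x, mu, var = 0.7, -0.3, 0.2, 1.5
v = np.linspace(-15.0, 15.0, 200001)
dv = v[1] - v[0]
p = np.exp(-(v - mu) ** 2 / (2.0 * var)) * (1.0 + np.exp(x + w * v))
p /= p.sum() * dv
mu_num = (v * p).sum() * dv
var_num = (v ** 2 * p).sum() * dv - mu_num ** 2
```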
Given $w$ (the weight of the connection between a hidden unit and a visible unit), $x$ (the cavity at the hidden unit), and $\theta_1$, $\theta_2$ (the natural parameters of the cavity at the visible unit), we approximate the belief at the visible unit as follows: $$\begin{aligned} \mu &= \frac{-\theta_1}{2 \theta_2}\nonumber \\ \sigma^2 &= \frac{-1}{2 \theta_2}\nonumber \\ \mu_B &= \frac{e^{x+w\mu + \frac{1}{2} \sigma^2 w^2} (\mu + \sigma^2 w) + \mu}{e^{x+w\mu + \frac{1}{2} \sigma^2 w^2} + 1}\nonumber\\ \sigma^2_B &= \frac{e^{x+w\mu + \frac{1}{2} \sigma^2 w^2} ((\mu + \sigma^2 w)^2 + \sigma^2) + \mu^2 + \sigma^2}{e^{x+w\mu + \frac{1}{2} \sigma^2 w^2} + 1} - \mu_B^2 \nonumber\end{aligned}$$ The approximation of the belief is a Gaussian with mean $\mu_B$ and variance $\sigma^2_B$, and the functions $b_1$ and $b_2$ return the two natural parameters of that Gaussian, $\frac{\mu_B}{\sigma^2_B}$ and $-\frac{1}{2\sigma^2_B}$ respectively.

Statistical consistency
=======================

We discuss herein the appealing statistical properties of the query training estimator. In particular, we prove its local consistency for exponential family models. Let $d, n, m, K, L$ be integers. For a vector $\bm{\theta}^* \in \mathbb{R}^{d}$, we consider an exponential family model with natural parameter $\bm{\theta}^*$ and corresponding probability distribution function: $$\label{bm} p(\bm{x}, \bm{z} | \bm{\theta}^*) = \frac{1}{Z (\bm{\theta}^*)} \exp\left( (\bm{\theta}^*)^T \bm{T}(\bm{x}, \bm{z}) \right)=\frac{1}{Z (\bm{\theta}^*)} \exp\left( \sum_{j=1}^d \theta^*_j T_j(\bm{x}, \bm{z}) \right),$$ where $\bm{x} \in \{1, \ldots, K\}^{n}$ is a vector of discrete observations, $\bm{z} \in \{1, \ldots, L \}^{m}$ is a vector of discrete latent (unobserved) variables, and $\{ T_j(\bm{x}, \bm{z}) \}_{j=1}^d \in \mathbb{R}^{d}$ are sufficient statistics, that is, known functions of the data.
$Z(\bm{\theta}^*)$ is a normalising constant (the partition function), given by the sum: $$Z(\bm{\theta}^*) = \sum_{\bm{x} \in \{1, \ldots, K \}^n, ~~ \bm{z} \in \{1, \ldots, L \}^m} \exp\left( \sum_{j=1}^d \theta^*_j T_j(\bm{x}, \bm{z}) \right),$$ whose number of terms is exponential in the dimensions $n$ and $m$. We use query training (QT) to estimate $\bm{\theta}^*$ from $\bm{x}$; and we consider a uniform distribution over queries. For a natural parameter estimate $\bm{\theta} \in \mathbb{R}^d$, the query associated with a non-empty subset $S \subset \{ 1, \ldots, n\}$ is $$Q_S(\bm{\theta}) = \prod_{i \in S} p(x_i | \bm{x}_{-S}, \bm{\theta}),$$ where $\bm{x}_{-S}$ is the subset of observations with indexes outside $S$, and $p(. |\bm{\theta} )$ is defined as in Equation . $Q_S$ is a random variable with respect to $\bm{x}$ and $S$. We consider the case of an infinite number of samples. A query training estimator is defined by $$\hat{\bm{\theta}} \in \operatorname*{argmax}_{\bm{\theta} \in \mathbb{R}^d } \bar{J} (\bm{\theta}),$$ where the QT objective value is the expectation over $\bm{x}$ and $S$, with respect to the exponential family pdf with natural parameter $\bm{\theta}^*$ (cf. Equation ), of the normalized logarithm of $Q_S$: $$\label{objval} \bar{J}(\bm{\theta}) = \mathbb{E}_{\bm{\theta}^*} \left\{ \frac{1}{|S|} \log Q_S(\bm{\theta})\right\}= \frac{1}{2^n -1}\sum_{ \substack{S \subset \{ 1, \ldots, n\} \\ S \ne \emptyset} } \frac{1}{|S|} \sum_{i \in S} \mathbb{E}_{\bm{\theta}^*} \{ \log p( x_i | \bm{x}_{-S}, \bm{\theta}) \}.$$ If we only consider singleton queries, we obtain the pseudo-likelihood estimator, which is known to be consistent for fully visible Boltzmann machines [@hyvarinen2006consistency]. Here we generalize this result and show the local consistency of the query training estimator for exponential family models. The main steps are derived in Theorems \[th:gradient\] and \[th:hessian\].
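The zero-gradient property of $\bar{J}$ at $\bm{\theta}^*$ can be checked numerically on a tiny fully visible model, where $\bar{J}$ is computable by exhaustive enumeration. The sketch below uses a two-variable binary model with sufficient statistics $(x_1, x_2, x_1 x_2)$ (a toy instance, chosen only because its $2^n - 1 = 3$ queries can be enumerated) and confirms by finite differences that $\nabla_{\bm{\theta}} \bar{J}(\bm{\theta}^*) \approx \bm{0}$:

```python
import numpy as np
from itertools import product

# Tiny fully visible model on x in {0,1}^2 with T(x) = (x1, x2, x1*x2).
def T(x):
    return np.array([x[0], x[1], x[0] * x[1]], dtype=float)

states = list(product([0, 1], repeat=2))

def p_joint(theta):
    w = np.array([np.exp(theta @ T(x)) for x in states])
    return w / w.sum()

def J_bar(theta, theta_star):
    """Exact QT objective: expectation under theta_star over data and a
    uniformly random non-empty query subset S of {1, 2}."""
    ps = p_joint(theta_star)
    pt = p_joint(theta)
    total = 0.0
    for S in [(0,), (1,), (0, 1)]:
        for x, px in zip(states, ps):
            for i in S:
                # p(x_i | x_{-S}, theta): marginalize the queried variables.
                num = sum(q for y, q in zip(states, pt)
                          if all(y[j] == x[j] for j in range(2) if j not in S)
                          and y[i] == x[i])
                den = sum(q for y, q in zip(states, pt)
                          if all(y[j] == x[j] for j in range(2) if j not in S))
                total += px * np.log(num / den) / len(S)
    return total / 3.0  # uniform over the 3 non-empty subsets

theta_star = np.array([0.3, -0.5, 0.8])
eps = 1e-5
grad = np.array([
    (J_bar(theta_star + eps * e, theta_star)
     - J_bar(theta_star - eps * e, theta_star)) / (2 * eps)
    for e in np.eye(3)])
# grad is numerically zero, and perturbing theta away from theta_star
# decreases the objective, as the consistency theorems predict.
```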
\[th:gradient\] The gradient of the QT objective value (defined in Equation ) evaluated at the ground truth natural parameter is equal to $\bm{0}$. That is, $\nabla_{\bm{\theta}} \bar{J} (\bm{\theta}^*) = \bm{0}$. We present the proof in Section \[sec:proof\]. In addition, we prove in Section \[sec:proof-hessian\] the following property: \[th:hessian\] The Hessian of the QT objective value evaluated at the ground truth natural parameter is negative semidefinite. That is, $\nabla^2_{\bm{\theta}} \bar{J} (\bm{\theta}^*) \preccurlyeq 0$. We make the mild assumption that this Hessian is negative definite and conclude that, in the case of an infinite amount of data, the QT estimator is equal to the ground truth $\bm{\theta^*}$ in a ball around $\bm{\theta^*}$. \[th:conistency\] Assume $\nabla^2_{\bm{\theta}} \bar{J} (\bm{\theta}^*) \prec 0$. The query training estimator is locally consistent for the exponential family model defined in Equation .

#### Proof of Theorem \[th:conistency\]:

By continuity, the Hessian of $\bar{J}$ is negative definite in a compact ball around $\bm{\theta^*}$, and $\bar{J}$ is strictly concave in this ball. In addition, $\bm{\theta^*}$ is a point of zero gradient. It is then the unique local maximizer of the objective value, and the QT estimator is locally consistent. $\square$

Proof of Theorem \[th:gradient\] {#sec:proof}
---------------------------------

For a natural parameter estimate $\bm{\theta} \in \mathbb{R}^d$, the query associated with a non-empty subset $S \subset \{ 1, \ldots, n\}$ is $$Q_S(\bm{\theta}) = \prod_{i \in S} Q^i_S(\bm{\theta}) = \prod_{i \in S} p(x_i | \bm{x}_{-S}, \bm{\theta}),$$ where we have written $Q^i_S(\bm{\theta})=p(x_i | \bm{x}_{-S}, \bm{\theta})$.
In addition, for any index $i \in S$ it holds: $$\begin{aligned} \label{Q_S^i} \begin{split} &Q^{i}_S(\bm{\theta}) = \frac{p(x_{i}, \bm{x}_{-S} | \bm{\theta})}{p(\bm{x}_{-S} | \bm{\theta})} = \frac{p \left(\bm{x}_{ - \left(S - \{i\} \right) } | \bm{\theta} \right) }{p(\bm{x}_{-S} | \bm{\theta})} = \frac{D_{S - \{i\}} (\bm{\theta}) }{D_{S} (\bm{\theta}) }, \end{split}\end{aligned}$$ where for a set $S$ we define $D_S=p(\bm{x}_{-S} | \bm{\theta}) Z(\bm{\theta})$. We then have: $$\begin{aligned} \begin{split} \log Q^{i}_S(\bm{\theta}) = \log D_{S - \{ i \}} (\bm{\theta}) - \log D_S (\bm{\theta}). \end{split}\end{aligned}$$ We now evaluate the gradient of $\log Q^{i}_S$ at the ground truth natural parameter in the case of an infinite amount of data. To this end, we fix an index $j$ and consider the following lemma: \[lemma\_grad\] For a set $S$ and an index $j$, the partial derivative of $\log D_S$ with respect to $\theta_j$ evaluated at $\bm{\theta} \in \mathbb{R}^d$ is: $$\frac{\partial \log D_S}{\partial \theta_{j}} (\bm{\theta}) = \mathbb{E}_{\bm{\theta}} (T_j (\bm{x}, \bm{z}) | \bm{x}_{-S})$$ where $\mathbb{E}_{\bm{\theta}}$ corresponds to the expectation with respect to the exponential family pdf with parameter $\bm{\theta}$. Lemma \[lemma\_grad\] implies that the partial derivative of the logarithm of the restricted partition function is equal to the conditional expectation of the corresponding sufficient statistics. The proof is presented in Section \[sec:proof-lemma\]. 
As a consequence of Lemma \[lemma\_grad\], the partial derivative of $\log Q^{i}_S(\bm{\theta})$ with respect to $\theta_{j}$ can be expressed as: $$\begin{aligned} \begin{split} \frac{\partial \log Q_S^{i} }{\partial \theta_{j }} (\bm{\theta}) &= \frac{\partial \log D_{S -\{ i \}} }{\partial \theta_{j }} (\bm{\theta}) - \frac{\partial \log D_S}{\partial \theta_{j }} (\bm{\theta}) = \mathbb{E}_{\bm{\theta}} \left( T_j (\bm{x}, \bm{z}) | \bm{x}_{- \left(S - \{ i \} \right) } \right) - \mathbb{E}_{\bm{\theta}} ( T_j (\bm{x}, \bm{z}) | \bm{x}_{-S} ). \end{split}\end{aligned}$$ This partial derivative evaluated for the ground truth parameter is then: $$\frac{\partial \log Q_S^{i} }{\partial \theta_{j }}(\bm{\theta}^*) = \mathbb{E}_{\bm{\theta}^* } \left( T_j (\bm{x}, \bm{z}) | \bm{x}_{- \left(S - \{ i \} \right) } \right) - \mathbb{E}_{\bm{\theta}^* } ( T_j (\bm{x}, \bm{z}) | \bm{x}_{-S} ).$$ In the case of an infinite amount of data from the model defined in Equation , we switch the expectation of the gradient with the gradient of the expectation to conclude with the law of total expectation: $$\begin{aligned} \begin{split} \frac{\partial \mathbb{E}_{\bm{\theta}^*} \log Q_S^{i} }{\partial \theta_{j}} (\bm{\theta}^*) = \mathbb{E}_{\bm{\theta}^*} \left\{ \frac{\partial \log Q_S^{i} }{\partial \theta_{j }} (\bm{\theta}^*) \right\} &= \mathbb{E}_{\bm{\theta}^*} \left\{ \mathbb{E}_{\bm{\theta}^* } \left( T_j (\bm{x}, \bm{z}) | \bm{x}_{- \left(S - \{ i \} \right) } \right) - \mathbb{E}_{\bm{\theta}^* } ( T_j (\bm{x}, \bm{z}) | \bm{x}_{-S} )\right\}\\ &= \mathbb{E} _{\bm{\theta}^*}( T_j (\bm{x}, \bm{z}) ) - \mathbb{E} _{\bm{\theta}^*}( T_j (\bm{x}, \bm{z}) )\\ &=0. 
\end{split}\end{aligned}$$ Consequently, we have: $$\nabla_{\bm{\theta}} \mathbb{E}_{\bm{\theta}^*} \log Q_S^{i} (\bm{\theta}^*) = \bm{0}.$$ Equation defines the query training objective value as: $$\bar{J}(\bm{\theta}) = \frac{1}{2^n -1}\sum_{ \substack{S \subset \{ 1, \ldots, n\} \\ S \ne \emptyset} } \frac{1}{|S|} \sum_{i \in S} \mathbb{E}_{\bm{\theta}^*} \left\{ \log Q^{i}_S (\bm{\theta} ) \right\}.$$ We then immediately conclude: $$\nabla_{\bm{\theta}} \bar{J} (\bm{\theta}^*) = \bm{0}.$$ Proof of Theorem \[th:hessian\] {#sec:proof-hessian} ------------------------------- The following Lemma is an extension of the previous Lemma \[lemma\_grad\] and is also proved in Section \[sec:proof-lemma\]. \[lemma\_hessian\] For a set $S$ and two indexes $j$ and $\ell$, the second order partial derivative of $\log D_S$ with respect to $\theta_{j}$ and $\theta_{\ell}$, evaluated at $\bm{\theta} \in \mathbb{R}^d$ is: $$\frac{\partial^2 \log D_S}{\partial \theta_{j} \partial \theta_{\ell} } (\bm{\theta}) = \mathbb{E}_{\bm{\theta}} ( T_j (\bm{x}, \bm{z} ) T_{\ell} (\bm{x}, \bm{z} ) | \bm{x}_{-S}) - \mathbb{E}_{\bm{\theta}} (T_j (\bm{x}, \bm{z} ) | \bm{x}_{-S}) \mathbb{E}_{\bm{\theta}} ( T_{\ell} (\bm{x}, \bm{z} ) | \bm{x}_{-S}).$$ We can consequently express the Hessian of $\log D_S$ as a conditional covariance matrix: $$\nabla^2_{\bm{\theta}} \log D_S (\bm{\theta}) = \mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} ) \bm{T} (\bm{x}, \bm{z} )^T | \bm{x}_{-S} \right\} - \mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\} \mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\}^T.$$ Similarly, for an index $i \in S$, the Hessian $\nabla^2_{\bm{\theta}} \log D_{S - \{ i\} } (\bm{\theta})$ is equal to: $$\mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} ) \bm{T} (\bm{x}, \bm{z} )^T | \bm{x}_{- (S - \{ i\})} \right\} - \mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\} 
\mathbb{E}_{\bm{\theta}} \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\}^T.$$ We evaluate the Hessian of $\log Q_S^{i} $ at the ground truth parameters $\bm{\theta}^*$ in the case of an infinite amount of data: $$\begin{aligned} \begin{split} &\nabla^2_{\bm{\theta}} \mathbb{E}_{\bm{\theta}^* } \{ \log Q_S^{i} (\bm{\theta}^* ) \} \\ &= \mathbb{E}_{\bm{\theta}^* } \left\{ \nabla^2_{\bm{\theta}} \log Q_S^{i} (\bm{\theta}^* )\right\}\\ &=\mathbb{E}_{\bm{\theta}^* } \left\{ \nabla^2_{\bm{\theta}} \log D_{S - \{ i\} } (\bm{\theta}^* ) - \nabla^2_{\bm{\theta} } \log D_S (\bm{\theta}^* ) \right\}\\ &= \mathbb{E}_{\bm{\theta}^* } \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} ) \bm{T} (\bm{x}, \bm{z} )^T | \bm{x}_{- (S - \{ i\})} \right\} - \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\} \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\}^T \right\}\\ &~~~ - \mathbb{E}_{\bm{\theta}^* } \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} ) \bm{T} (\bm{x}, \bm{z} )^T | \bm{x}_{-S} \right\} - \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\} \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\}^T \right\} \\ &= \mathbb{E}_{\bm{\theta}^* } \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\} \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{-S} \right\}^T \right\}\\ &~~~ - \mathbb{E}_{\bm{\theta}^* } \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\} \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- (S - \{ i\})} \right\}^T\right\} \text{ with the law of total expectation.} \end{split}\end{aligned}$$ For $\bm{u} \in \mathbb{R}^d$, we consequently have: $$\begin{aligned} \begin{split} \bm{u}^T\nabla^2_{\bm{\theta}} \mathbb{E}_{\bm{\theta}^* } \{ 
\log Q_S^{i} (\bm{\theta}^* )\} \bm{u} &= \mathbb{E}_{\bm{\theta}^* } \left\{ \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{u}^T \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- S} \right\} \right)^2 - \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{u}^T \bm{T} (\bm{x}, \bm{z} )| \bm{x}_{- S \cup \{ i\}} \right\} \right)^2 \right\} \end{split}\end{aligned}$$ The first term only depends upon $\bm{x}_{-S}$ while the second also depends upon $x_i$. We can then use properties of the conditional expectation and Jensen’s inequality to derive: $$\begin{aligned} \begin{split} \mathbb{E}_{\bm{\theta}^*} \left\{ \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{u}^T \bm{T} (\bm{x}, \bm{z} ) | \bm{x}_{-S} \right\} \right)^2 \right\} &= \mathbb{E}_{\bm{\theta}^*} \left\{ \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{\bm{u}^T \bm{T} (\bm{x}, \bm{z} ) | \bm{x}_{- S \cup \{ i\} } \right\} \big \rvert \bm{x}_{- S } \right\} \right)^2 \right\}\\ &\le \mathbb{E}_{\bm{\theta}^*} \left\{ \mathbb{E}_{\bm{\theta}^* } \left\{ \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{u}^T \bm{T} (\bm{x}, \bm{z} ) | \bm{x}_{- S \cup \{ i\} } \right\} \right)^2 \big \rvert \bm{x}_{- S } \right\} \right\} \\ &=\mathbb{E}_{\bm{\theta}^*} \left\{ \left( \mathbb{E}_{\bm{\theta}^* } \left\{ \bm{u}^T \bm{T} (\bm{x}, \bm{z} ) | \bm{x}_{- S \cup \{ i\} }\right\} \right)^2 \right\}. \end{split}\end{aligned}$$ We then have: $$\bm{u}^T\nabla^2_{\bm{\theta}} \mathbb{E}_{\bm{\theta}^* } \{ \log Q_S^{i} (\bm{\theta}^* ) \} \bm{u} \le 0, \forall \bm{u} \in \mathbb{R}^d.$$ Hence, the symmetric Hessian $\nabla^2_{\bm{\theta}} \mathbb{E}_{\bm{\theta}^* } \{ \log Q_S^{i} (\bm{\theta}^* ) \}$ is negative semidefinite, and so is $\nabla^2_{\bm{\theta}} \bar{J} (\bm{\theta}^*)$. Proof of Lemmas \[lemma\_grad\] and \[lemma\_hessian\] {#sec:proof-lemma} ------------------------------------------------------ We prove the two Lemmas \[lemma\_grad\] and \[lemma\_hessian\] simultaneously. 
We consider a non-empty subset $S \subset \{ 1, \ldots, n\}$, an index $i \in S$ and an index $j$. We assume without loss of generality that $S = \{1, \ldots, |S| \}$. For an exponential family model with natural parameter $\bm{\theta} \in \mathbb{R}^d$, we have defined: $$\begin{aligned} \begin{split} D_S (\bm{\theta} ) &= p(\bm{x}_{-S} | \bm{\theta} ) Z(\bm{\theta}) = \sum_{ \substack{\bm{\xi}_S \in \{1, \ldots, K\}^{|S|}, \\ \bm{z} \in \{1, \ldots, L \}^m} } p( \bm{\xi}_S, \bm{x}_{-S}, \bm{z} | \bm{\theta} ) Z(\bm{\theta}) = \sum_{\bm{\xi}_S, \bm{z} } ~ \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right), \end{split} \end{aligned}$$ where we have used the model definition in Equation . For each term, exactly one factor depends upon $\theta_{j}$. We consequently evaluate the partial derivative of $\log D_S$ with respect to $\theta_{j}$: $$\begin{aligned} \begin{split} \frac{\partial \log D_S }{\partial \theta_{j}} (\bm{\theta}) &= \frac{ \sum_{ \bm{\xi}_S, \bm{z} } T_j(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }{ \sum_{ \bm{\xi}_S, \bm{z} } \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }\\ &= \sum_{ \bm{\xi}_S, \bm{z} } T_j(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \frac{ p( \bm{\xi}_S, \bm{x}_{-S}, \bm{z} | \bm{\theta} ) }{ p(\bm{x}_{-S} | \bm{\theta} ) }. \end{split} \end{aligned}$$ We then derive Lemma \[lemma\_grad\]: $$\frac{\partial \log D_S }{\partial \theta_{j }} (\bm{\theta}) = \mathbb{E}_{\bm{\theta}}\left( T_j(\bm{x}, \bm{z} ) | \bm{x}_{-S} \right).$$ Let $\ell$ be another index. 
A similar computation leads to: $$\begin{aligned} \begin{split} \frac{\partial^2 \log D_S }{\partial \theta_{j} \partial \theta_{\ell} } (\bm{\theta}) &= \frac{ \sum_{ \bm{\xi}_S, \bm{z} } T_j(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) T_{\ell}(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }{ \sum_{ \bm{\xi}_S, \bm{z} } \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }\\ &~~~ - \frac{ \sum_{ \bm{\xi}_S, \bm{z} } T_j(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }{ \sum_{ \bm{\xi}_S, \bm{z} } \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) } \\ &~~~~~~ \times \frac{ \sum_{ \bm{\xi}_S, \bm{z} } T_{\ell}(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }{ \sum_{ \bm{\xi}_S, \bm{z} } \prod_{k=1}^d \exp\left( \theta_k T_k(\bm{\xi}_S, \bm{x}_{-S}, \bm{z} ) \right) }. \end{split} \end{aligned}$$ Hence, we derive Lemma \[lemma\_hessian\]: $$\frac{\partial^2 \log D_S }{\partial \theta_{j} \partial \theta_{\ell} } (\bm{\theta}) = \mathbb{E}_{\bm{\theta}}\left( T_j(\bm{x}, \bm{z} ) T_{\ell}(\bm{x}, \bm{z} ) | \bm{x}_{-S} \right) - \mathbb{E}_{\bm{\theta}}\left( T_j(\bm{x}, \bm{z} ) | \bm{x}_{-S} \right) \mathbb{E}_{\bm{\theta}}\left( T_{\ell}(\bm{x}, \bm{z} ) | \bm{x}_{-S} \right).$$ [^1]: Use footnote for providing further information about author (webpage, alternative address)—*not* for acknowledging funding agencies. [^2]: Email: `{miguel,wolfgang,nishad,stannis,antoine,dileep}@vicarious.com` [^3]: Note that the inference network is typically designed independently of the generative model, despite both being tightly coupled. [^4]: If the answers to these queries is approximate (as it will be the case with QT), different factorizations of a joint density will result in different approximations. 
[^5]: For a fully connected graph, the number of messages is quadratic in the number of variables, showing the advantage of a sparse connectivity pattern, which can be easily encoded in the PGM architecture. [^6]: In the general case, $m_{ij}^{(n)}$ and $\theta_i$ are vectors. We only set matrices in bold, to represent aggregate messages. [^7]: This parameterization is in one-to-one correspondence with the standard one and results in simpler QT-NN equations as shown in the Supplementary Material. Most readers can ignore this detail. [^8]: See previous footnote. [^9]: The MNIST dataset can be found at <http://yann.lecun.com/exdb/mnist/>. [^10]: For the purpose of this metric, we consider foreground those pixels labeled (or estimated) as either `IN` or `CONTOUR`. Higher is better. [^11]: In practice we clip the messages so that $u_i = -1000, ~0, ~1000$ for the values $\phi(1, x_j) = 0, ~0.5, ~1$.
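As a numerical sanity check of Lemma \[lemma\_grad\], the sketch below builds a tiny hypothetical exponential family (two observed binary variables, one latent binary variable, three hand-picked sufficient statistics; none of these choices come from the paper) and compares a central finite difference of $\log D_S$ against the brute-force conditional expectation $\mathbb{E}_{\bm{\theta}}(T_j(\bm{x},\bm{z})\,|\,\bm{x}_{-S})$:

```python
import numpy as np

# Toy exponential family: observed binaries x = (x1, x2), latent binary z,
# sufficient statistics T(x, z) = [x1*z, x2*z, z], p(x, z | theta) ∝ exp(theta·T).
# With S = {1}, D_S(theta) sums the unnormalized density over the masked
# variable x1 and the latent z, while x_{-S} = x2 stays fixed.
def T(x1, x2, z):
    return np.array([x1 * z, x2 * z, z], dtype=float)

def D_S(theta, x2):
    return sum(np.exp(theta @ T(x1, x2, z)) for x1 in (0, 1) for z in (0, 1))

def cond_expect_T(theta, x2):
    # E_theta[T(x, z) | x_{-S}] by brute-force enumeration.
    num = sum(np.exp(theta @ T(x1, x2, z)) * T(x1, x2, z)
              for x1 in (0, 1) for z in (0, 1))
    return num / D_S(theta, x2)

theta, x2, eps = np.array([0.3, -0.7, 0.5]), 1, 1e-6
for j in range(3):
    e = np.zeros(3); e[j] = eps
    fd = (np.log(D_S(theta + e, x2)) - np.log(D_S(theta - e, x2))) / (2 * eps)
    assert abs(fd - cond_expect_T(theta, x2)[j]) < 1e-7
print("Lemma gradient identity verified")
```

The same enumeration strategy extends to the Hessian identity of Lemma \[lemma\_hessian\], with the finite difference replaced by second-order differences and the right-hand side by the conditional covariance.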
--- abstract: 'We give an exceptionally short derivation of Schroedinger’s equation by replacing the idealization of a point particle by a density distribution.' author: - 'C. Baumgarten' title: 'A Two-Page “Derivation” of Schroedinger’s Equation' --- Introduction: Point Particles ============================= Quantum mechanics (QM) is often distinguished from classical mechanics (CM) by claims stating that the former is weird and counter-intuitive while the latter is visualizable and intuitive. Here we suggest a different point of view based on a critique of the idealization of the notion of a point particle. As is well known, a point charge, regarded from the standpoint of classical electrodynamics, implies infinite self-energy. On the other hand, two classical uncharged mass points have, from the geometrical-mechanical point of view, a vanishing reaction cross-section and can therefore not collide. In short: The idealization of a point particle is neither very “intuitive” nor physically consistent [^1]. It can only be regarded as a mathematical idealization which is legitimized on a macroscopic level where the microscopic details are negligible. It is not well suited to serve as a faithful description of microscopic particles. Extended Particles ================== Due to these problems with the concept of point particles, let us consider the alternative concept of extended particles. The problem with the idea of indivisible but spatially extended particles is the logical tension between the role of a spatial position as fundamental distinction in contrast to the self-identity of whatever forms the “inside” of the object: Either two regions of space are identical or they are not. If a self-identical object is extended in space, then the spatial position “inside” the object can have no meaning. The spatial position is then fundamental outside but meaningless “inside” an extended object, which undermines the (classical) logic of space as a continuum.
One might exemplify it as a mechanical paradox: If we visualize extended fundamental objects as being elastic, then we have to assume internal structure, distinguishable regions of different density and pressure, i.e. no longer a simple but a complex object with distinguished internal structure. Then the idea of fundamental simplicity, unity and indivisibility is lost on logical grounds. Hence extended but fundamental particles, “billiard balls”, cannot be elastic. Infinite hardness, on the other hand, would result in infinite forces in the very moment of a collision. If one wishes to avoid unphysical infinities, extended [*fundamentally indivisible*]{} objects are also logically problematic as long as the spatial position is understood as fundamental. The impediment for a self-consistent description of extended particles embedded in space can be traced back to the parameterization by spatial coordinates. Identification of physical substance by its (supposedly fundamental and absolute) spatial position is the root of the problem: If matter is identified by position, then it is questionable how extended matter could nonetheless form the self-identical unity of an indivisible particle: To “fill out space” is not a consistent fundamental “mode of existence”. It is therefore a logical requirement to change the address space. Schroedinger’s Equation ======================= Consider that an extended classical particle is described by a normalizable spatial density $\rho(t,\vec x)$: $$\int \rho(t,\vec x)\,d^3x=1.$$ The density is naturally assumed to be a positive definite quantity: $\rho\ge 0$. To get rid of this boundary condition, we express the density as (the sum of) the square of auxiliary functions $\psi(t,\vec x)$ such that $$\rho(t,\vec x)=\psi^2(t,\vec x),$$ or likewise $$\rho(t,\vec x)=\sum_i\psi_i^2(t,\vec x),$$ or, using complex numbers, by $$\rho(t,\vec x)=\psi^\star\,\psi.$$
The auxiliary function $\psi(t,\vec x)$ is therefore by construction square integrable and hence has a Fourier transform $\tilde\psi(\vec k)$: $$\psi(t,\vec x)={1\over(2\pi)^{3/2}}\,\int\tilde\psi(\vec k)\,e^{i\,(\vec k\cdot\vec x-\omega\,t)}\,d^3k.$$ The Fourier transform is a unitary and bijective change of address space: Changes in Fourier space have consequences in real space and vice versa. Hence such a transformation might allow one to introduce the desired physical constraints using new variables. It follows from the reversibility of the Fourier transform that it is merely a [*reformulation*]{} of the same problem. One just describes a spatial distribution as a wave-packet, i.e. by the use of different variables. The idea behind this kind of reparametrization is essentially that of a regularization: To find new variables for which infinities or singular points, related to the original variables, disappear. Regularizations are well known in physics [@reg]. Kustaanheimo and Stiefel for instance used a description by spinors as a regularization of the classical Kepler problem [@KS]. Now we need a sensible physical [*constraint*]{}, formulated in the new variables. The constraint could either be some law of motion or allow one to derive a law of motion. It is well known that the center of a “particle” described by a “wave packet” moves with the so-called [*group velocity*]{} $$\vec v_{gr}=\nabla_k\,\omega(\vec k)=\left({\partial\omega\over\partial k_x},{\partial\omega\over\partial k_y},{\partial\omega\over\partial k_z}\right)^T.$$ \[eq\_dispersion\] This dispersion relation (Eq. \[eq\_dispersion\]) has precisely the well-known form of the velocity equation of classical Hamiltonian mechanics, which states that the velocity of a classical point particle is given by the gradient of the energy (i.e. the Hamiltonian function) in momentum space: $$\vec v=\nabla_p\,{\cal H}(\vec p).$$ This remarkable formal similarity provides evidence that the following constraint is physically adequate for use in combination with the Fourier transformation: $$\nabla_k\,\omega(\vec k)=\nabla_p\,{\cal H}(\vec p).$$
\[eq\_base\] Equation \[eq\_base\] is valid in case of proportionality ${\cal H}\propto\omega$ and $\vec p~\propto\vec k$, which suggests the introduction of a constant proportionality factor with the dimension of [*action*]{}. Action-preserving processes are well known in classical physics. Classical Hamiltonian (symplectic) motion for instance preserves the phase space volume. Max Born referred to the classical adiabatic invariance of the phase space volume $\Phi=\mathrm{const}$ and to the fact that energy (change) and frequency (change) are, in such processes, proportional to each other, $\delta{\cal E}=\Phi\,\delta\omega$ [@Born]. Furthermore it has long been known that the components of the Schroedinger wave function are subject to Hamiltonian motion (HM) [@Strocchi; @Ralston1989]. Instead of showing that Schroedinger’s equation implies HM, we reverse the order: if one presumes the validity of classical HM as the basis of wave function dynamics, then Eq. \[eq\_base\] is automatically valid in adiabatic approximation. Hence we arrive unceremoniously at the de-Broglie relations, if the invariant phase space volume $\Phi$ can be identified with $\hbar$ [$$\begin{split} {\cal E}&=\hbar\,\omega\\ \vec p&=\hbar\,\vec k\,. \label{eq_deBroglie} \end{split}$$]{} Inserting this into the Fourier transform gives: $$\psi(t,\vec r)={1\over(2\pi)^{3/2}\,\hbar^3}\,\int\tilde\psi(\vec k)\,e^{i\,(\vec p\cdot\vec r-{\cal E}\,t)/\hbar}\,d^3p.$$ \[eq\_ft1\] Once this is written, it is obvious that energy is given by the time derivative, and momentum by the spatial gradient [$$\begin{split} {\cal E}\,\psi(t,\vec r)&=i\,\hbar{\partial\over\partial t}\psi(t,\vec r)\\ {\vec p}\,\psi(t,\vec r)&=-i\,\hbar\,\vec\nabla\,\psi(t,\vec r)\,. \end{split}$$]{} Using these relations to express the classical (kinetic) energy of a free particle ${\cal E}={\vec p^2\over 2\,m}$ results in Schroedinger’s equation for a free particle: $$i\,\hbar\,{\partial\over\partial t}\psi(t,\vec r)=-{\hbar^2\over 2\,m}\,\nabla^2\psi(t,\vec r).$$
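The group-velocity argument can be checked directly. The following sketch (units $\hbar=m=1$; the grid and packet parameters are illustrative choices, not from the paper) evolves a Gaussian wave packet under the free Schroedinger equation exactly in Fourier space and compares the packet center with $v_{gr}\,t=\hbar k_0 t/m$:

```python
import numpy as np

# Exact free evolution in Fourier space:
# psi(t) = IFFT( exp(-i*hbar*k^2*t/(2m)) * FFT(psi(0)) ).
hbar = m = 1.0
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

k0, sigma = 2.0, 2.0                      # packet momentum and width
psi0 = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * (L / N))   # normalize

t = 10.0
psi_t = np.fft.ifft(np.exp(-1j * hbar * k**2 * t / (2 * m)) * np.fft.fft(psi0))

center = np.sum(x * np.abs(psi_t)**2) * (L / N)      # <x> at time t
v_gr = hbar * k0 / m                                 # group velocity
assert abs(center - v_gr * t) < 1e-6                 # center has moved to v_gr * t
```

The packet spreads while it travels (the free dispersion $\omega=\hbar k^2/2m$ is not linear), but its center moves at exactly $\nabla_k\,\omega(k_0)$, as claimed.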
Adding a potential energy (density) $\rho(t,\vec x)\,V(\vec x)$ readily yields Schroedinger’s equation for a particle in a potential $V(\vec x)$: $$i\,\hbar\,{\partial\over\partial t}\psi(t,\vec r)=\left(-{\hbar^2\over 2\,m}\,\nabla^2+V(\vec x)\right)\,\psi(t,\vec r).$$ The “derivation”, as presented here, is short and physically rigorous. However, today it is known that Schroedinger’s equation is not the most fundamental equation, but has to be derived from a Dirac-type equation in a more general relativistic setting. Only then can full compatibility with electromagnetic theory be expected. We have shown elsewhere how the Dirac equation can be derived from “first” (logical) principles [@qed_paper; @osc_paper; @uqm_paper]. The derivation automatically yields the Lorentz transformations, the Lorentz force law [@rdm_paper; @geo_paper; @lt_paper] and even Maxwell’s equations [@qed_paper] in a single coherent framework.\ Summary and Conclusions ======================= A sober assessment reveals that the alleged intuitiveness and logic of the classical notion of the point particle fail, on closer inspection, to be physically consistent. Schroedinger’s equation can be regarded as a kind of [*regularization*]{} that allows one to circumvent the irrational infinities of the classical point-particle idealization. It is non-classical only insofar as it calls the role of a spatial position as [*the absolute and fundamental*]{} address into question, at least in a microscopic context. This, however, contrasts only with the classical [*metaphysics*]{} of absolute and fundamental space. The mathematics used is fully classical. [9]{} Max Born “The Mechanics of the Atom”; G. Bell and Sons, London 1927, chap. 10. F. Strocchi “Complex Coordinates and Quantum Mechanics”; Rev. Mod. Phys. 38, Issue 1 (1966), pp. 36-40. John P. Ralston “Berry’s phase and the symplectic character of quantum evolution”; Phys. Rev. A 40, Issue 9 (1989), pp. 4872-4884. There exists a rich literature concerning regularizing transformations.
We only give a few randomly selected references [@KS; @Zare; @Davtyan; @Komarov; @Davtyan2; @BB]. P. Kustaanheimo and E. Stiefel “Perturbation Theory of Kepler Motion Based on Spinor Regularization”; J. reine ang. Math. Vol. 1965, Issue 218 (1965), 204-219. K. Zare and V. Szebehely “Time Transformations in the Extended Phase Space”; Cel. Mech. Vol. 11 (1975), pp. 469-482. L.C. Davtyan, L.G. Mardoyan, G.S. Pogosyan, A.N. Sissakian and V.M. Ter-Antonyan; “Generalized KS transformation: from five-dimensional hydrogen atom to eight-dimensional isotrope oscillator”; J. Phys. A: Math. Gen. Vol. 20 (1987), pp. 6121-6125. L.I. Komarov and Le Van Hoang “Generalized Kustaanheimo-Stiefel Transformations”; Theor. and Math. Phys. Vol. 99, No. 1 (1994), pp. 437-440. L.S. Davtyan, A.N. Sissakian and V.M. Ter-Antonyan “The Hurwitz Transformation: Nonbilinear version”; J. Math. Phys. Vol. 36 (1995), pp. 884-894. Sergio Blanes and Chris J. Budd “Explicit Adaptive Symplectic (Easy) Integrators: A Scaling Invariant Generalization of the Levi-Civita and KS Regularization”; Celest. Mech. and Dyn. Astron. 89 (2004), 383-405. C. Baumgarten “Relativity and (Quantum-) Electrodynamics from (Onto-) Logic of Time” in: “Quantum Structural Studies”, Eds. Ruth E. Kastner, Jasmina Jeknic-Dugic and George Jaroszkiewicz, World Scientific (2017), ISBN: 978-1-78634-140-2. Preprint arXiv:1409.5338v5 (2014/2015). C. Baumgarten; “Old Game, New Rules: Rethinking the Form of Physics”, Symmetry 2016, 8(5), 30; doi:10.3390/sym8050030. C. Baumgarten “How to (Un-) Quantum Mechanics”; arxiv:1810.06981. C. Baumgarten “Use of Real Dirac Matrices in 2-dimensional Coupled Linear Optics”; Phys. Rev. ST Accel. Beams. 14, 114002 (2011). C. Baumgarten “Geometrical method of decoupling”; Phys. Rev. ST Accel. Beams. 15, 124001 (2012). C. Baumgarten “The Simplest Form of the Lorentz Transformations”; arxiv:1801.01840.
[^1]: Remarkably there is little quest for an “interpretation” of classical mechanics, despite some obvious inconsistencies and irrational infinities.
--- abstract: 'We consider a time-consistent mean-variance portfolio selection problem of an insurer and allow for the incorporation of basis (mortality) risk. The optimal solution is identified with a Nash subgame perfect equilibrium. We characterize an optimal strategy as the solution of a system of partial integro-differential equations (PIDEs), a so-called extended Hamilton-Jacobi-Bellman (HJB) system. We prove that the equilibrium is necessarily a solution of the extended HJB system. Under certain conditions we obtain an explicit solution to the extended HJB system and provide the optimal trading strategies in closed-form. A simulation shows that the previously found strategies yield payoffs whose expectations and variances are robust regarding the distribution of jump sizes of the stock. The same phenomenon is observed when the variance is correctly estimated, but erroneously ascribed to the diffusion components solely. Further, we show that differences in the insurance horizon and the time to maturity of a longevity asset do not add to the variance of the terminal wealth.' address: '\*Institute of Insurance Science and Institute of Financial Mathematics, Ulm University Helmholtzstrasse 20, 89081 Ulm, Germany' author: - 'Frank Bosserhoff\*, Mitja Stadje\*' bibliography: - 'bibliography.bib' title: 'Mean-variance hedging of unit linked life insurance contracts in a jump-diffusion model' --- [^1] Introduction ============ Two major risks faced by life insurance companies are longevity and asset risk. Longevity risk refers to the risk that the future changes in the mortality rates are incorrectly estimated, while asset risk refers to the possibility of a future loss in the investment portfolio. Increases in life expectancy might, among other things, stem from sudden changes in environmental or medical conditions that are not foreseeable upon contract initiation. Clearly, these changes need to be accounted for by the mortality model.
An adequate way to do so is the modeling of the force of mortality with a diffusion process supplemented by jumps, see [@vigna] and [@blake] for a detailed discussion. Another practical challenge of hedging longevity risk is basis risk. The payoff of, for instance, a longevity bond depends on a particular mortality rate that is related to, but certainly not identical with, that of the insurer’s portfolio, see [@br1] and [@br2] for empirical studies on this issue. Thus, buying a longevity asset can only provide a partial hedge against an insurance company’s mortality exposure.\ Asset risk stems from the fact that the premiums paid by the insured are to be gainfully invested in the capital market, inducing financial risk. Empirical evidence suggests that returns are non-Gaussian and leptokurtic, see e.g. [@ct], [@schoutens] and references therein. Thus, in order to properly capture this risk, an insurer should base a stock price model on a Brownian component and additionally allow for jumps. Such a financial market is known to be generically incomplete. Consequently, an insurance company facing the aforementioned risks cannot perfectly hedge its obligations. Hence, a way to quantify risk needs to be specified. In this paper we identify risk with the variance of the terminal wealth. The identification of risk with the variance of the terminal payoff has a long tradition in academia as well as industry and dates back already to [@markowitz]. Mean-variance portfolio selection is intuitively appealing and analytically tractable. A major drawback, however, is *time-inconsistency*, which means that due to the non-linearity and non-recursiveness of the variance part, the dynamic programming approach fails. An investor might initiate a dynamic strategy because it is optimal at a particular point in time, knowing full well that she will deviate from this strategy later on.
Investors ignoring the sub-optimality of a previously found strategy are said to *pre-commit*, see [@strotz] for fundamentals on this problem. In [@xyz] and [@lim], the pre-commitment version of a mean-variance portfolio selection problem in a continuous-time economy is solved. The question of dynamically optimal, i.e., time-consistent mean-variance policies has been addressed for example by [@basak]. Their solution approach is based on a recursive formulation allowing for the application of dynamic programming. The authors point out that the same solution could be found as the Nash subgame perfect equilibrium outcome whereby the investor is playing a game with a future incarnation of herself, that is, a game with infinitely many players. A strategy is a Nash subgame perfect equilibrium if, whenever an investor at a given point in time knows that every future “player” will follow that strategy, it is optimal for her not to deviate from it either. For a general game-theoretic background we refer to [@hp] and references therein. The reformulation of time-inconsistent control problems in game-theoretic terms was originally proposed by [@ekeland] and [@bm]. This line of research has been followed by [@basak], [@wang], [@czichowsky], [@bensoussan] and [@lindensjo].\
[@lindensjo] proves that under certain regularity assumptions solving the extended HJB system is a necessary condition for being an equilibrium; however, the proof is restricted to the diffusion case.\ We consider a continuous-time Markovian economy in which an insurer trades in an arbitrary quantity of risky financial assets, a zero-coupon longevity bond and a riskless asset in order to hedge some terminal payout with regard to mean-variance optimality. Thereby the underlying financial assets as well as the force of mortality are modeled by jump-diffusions. Our first main contribution is an extension of the work of [@lindensjo] by proving that an equilibrium necessarily solves the extended HJB system. Secondly, for the case that an insurer neglects the hedge of some terminal payoff, we are able to present explicit closed-form solutions for the optimal trading strategies, the equilibrium value function and the expected terminal wealth. Thirdly, we exemplify our findings in a tractable model and provide numerical as well as graphical illustrations. When the jumps are erroneously neglected while the expected values and variances of the stock price and the force of mortality are correctly determined, the expected optimal terminal payoff and its variance hardly change. Thus, the time-consistent mean-variance optimal terminal wealth is robust regarding the consideration of jumps. A similar result is found when changing the distribution of the jump sizes of the stock. As the market for longevity assets is relatively illiquid, one can certainly not expect to find a hedging instrument whose time to maturity coincides with the insurance horizon. We find that this does not affect the expectation and variance of the optimal final payoff in our setup.
Moreover, we find that our strategies significantly outperform an investment in the riskless asset only.\ The remainder of this paper is structured as follows: In Section 2 we introduce the financial and longevity market under consideration and clarify what is meant by an admissible trading strategy. In the following section we turn to the optimization problem by gradually defining several auxiliary functions, operators and the notion of equilibrium control. With these at hand, Section 3 closes with the specification of the extended HJB system. In Section 4 our first main result (Theorem \[thm\_nec\]) is stated and proved. In Section 5 we neglect the hedge of a terminal payout and present an explicit equilibrium solution; the main result here is Theorem \[thm\_strategies\]. The final section contains the numerical applications.\ **Notation.** Denote by $\mathbb{R}_{+}$ the positive real numbers and by $\mathbb{R}^{m}_{+}$ its $m$-fold Cartesian product. The zero vector in any Euclidean space $\mathbb{R}^{m}$ is written as $0$. For $x \in \mathbb{R}^{m}$ we use $||x||_{2}:= \sqrt{\sum_{i=1}^{m} |x_{i}|^{2}}$. The symbol $\mathbb{R}^{m \times n}$ denotes the space of real-valued matrices with $m$ rows and $n$ columns. If $x \in \mathbb{R}^{m}$, the matrix Diag($x$) $\in \mathbb{R}^{m \times m}$ is the square matrix with the entries of $x$ on the diagonal and all off-diagonal elements being equal to zero. For $A \in \mathbb{R}^{m \times n}$, the symbol $A^{{\intercal}}$ denotes the transpose of $A$. If $A$ is a square matrix, we write ${\text{Tr}}(A)$ for its trace. For any $T > 0,\ t \in (0,T)$ and some function $f: [0,T] \times \mathbb{R}^{m} \to \mathbb{R}$ such that $f \in C^{1,2}$, we define $\dot{f}(t,\cdot) := \frac{\partial}{\partial t}f(t,\cdot)$ and for any $x \in \mathbb{R}^{m}$ we denote for arbitrary $j \in \{1,\dots,m\}$ by $f_{x_{j}}(\cdot,x)$ the first order partial derivative of $f$ w.r.t. the $j$th component of $x$.
Moreover, let the gradient of $f$ w.r.t. $x$ be denoted by $\nabla_{x}f(\cdot,x)$ and the Hessian matrix by $H_{x}f(\cdot,x).$ Model Setup =========== Let $T > 0$ be the planning horizon. Consider the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ that is equipped with a standard $d+1$-dimensional Brownian motion $\hat{W}:=(W^{1}, \dots, W^{d},W^{d+1})^{{\intercal}},$ whereby we define $W:=(W^{1},\dots,W^{d})^{\intercal}$ and $\bar{W}:=W^{d+1},$ and a Poisson random measure $J_{\hat{X}}(dt,d\hat{x})$ on $[0,T] \times \mathbb{R}^{k+1} \backslash \{0\},$ independent of $\hat{W},$ with respective intensity measure $\vartheta_{\hat{X}}(d\hat{x})dt.$ Denote its compensated version by $\tilde{J}_{\hat{X}}(dt,d\hat{x}) = J_{\hat{X}}(dt,d\hat{x}) - \vartheta_{\hat{X}}(d\hat{x})dt.$ Let $(\mathcal{F}_{t})_{t \in [0,T]}$ be the right-continuous completion of the filtration generated by $\hat{W}$ and $J_{\hat{X}}$. Throughout this paper we impose the following condition: The Lévy measure $\vartheta_{\hat{X}}$ is such that $$\int_{{\mathbb{R}^{k+1} \backslash \{0\}}} |\hat{x}|^{2}\ \vartheta_{\hat{X}}(d\hat{x}) < \infty. \label{eq:secmom_fin}$$ Under this condition, let $\hat{X} := (X^{1},\dots,X^{k},X^{k+1})^{{\intercal}}$ be a vector of pure-jump independent $(\mathcal{F}_{t})$-martingales with $X_{t}^{j} = \int_{0}^{t} \int_{{\mathbb{R}^{k+1} \backslash \{0\}}}\hat{x}^{j}\ \tilde{J}_{\hat{X}}(ds,d\hat{x})$, $j=1,\dots,k,k+1,$ whereby $\hat{x}^{j}$ is the $j$th coordinate of $\hat{x} \in \mathbb{R}^{k+1}.$ We define $X:=(X^{1},\dots,X^{k})^{{\intercal}}$ and $\bar{X}:= X^{k+1},$ so $\hat{X}=(X,\bar{X})^{{\intercal}}.$ Further, we may write $J_{\hat{X}}(dt,d\hat{x}) = J_{X,\hat{X}}(dt,dx,d\bar{x}) = \mathbbm{1}_{\bar{x} = 0}\ J_{X}(dt,dx) + \mathbbm{1}_{x=0}\ J_{\bar{X}}(dt,d\bar{x}).$ For any $E \subseteq \mathbb{R}^{k+1},$ the independence of $X$ and $\bar{X}$ implies that (cf.
[@ct], Proposition 5.3) $$\vartheta_{X,\bar{X}}(E) = \vartheta_{X}(E_{X}) + \vartheta_{\bar{X}}(E_{\bar{X}}),$$ with $$\begin{aligned} E_{X} &:= \{x \in \mathbb{R}^{k}: (x,0) \in E\}, \\ E_{\bar{X}} &:= \{\bar{x} \in \mathbb{R}: (0,\bar{x}) \in E\}.\end{aligned}$$ Assume that $x^{j} > -1$ for all $j \in \{1,\cdots,k\}.$ The financial market under consideration consists of a bank-account paying interest at a deterministic rate $r \geq 0$ and $m$ risky stocks, $1 \leq m \leq \min \{d,k\},$ with price processes $S^{i} = (S_{t}^{i})_{t \in [0,T]}$, $i = 1,\dots,m,$ satisfying the SDEs given by $$\frac{dS_{t}^{i}}{S_{t{\text{-}}}^{i}} = \mu_{i} \ dt + \sum_{j=1}^{d} \sigma_{ij}\ dW_{t}^{j} + \sum_{j=1}^{k} \rho_{ij} \ dX_{t}^{j}, \label{eq:S}$$ where $S_{0}^{i} = s_{i} \in \mathbb{R}_{+},\ \mu_{i} \in \mathbb{R},\ \sigma_{ij} \in \mathbb{R}_{+}$ and $\rho_{ij} \in \mathbb{R}_{+}$ such that $\sum_{j=1}^{k} \rho_{ij} \leq 1$ respectively denote the initial price of stock $i$, the rate of appreciation, the volatilities and the jump-sensitivities. We assume that the financial market is free of arbitrage, i.e., there exists a measure $\mathbb{Q}$ that is equivalent to $\mathbb{P}$ such that the discounted stock price processes $(S_{t}^{i} / e^{rt})_{t \in [0,T]}, i=1,\dots,m,$ are $(\mathcal{F}_{t})$-martingales under $\mathbb{Q}$.\ In addition to the financial market, we consider an arbitrage-free mortality market on which an investor can buy a zero-coupon longevity bond. We use the Brownian motion $\bar{W}$ and the jump component $\bar{X}$ to model the force of mortality.
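As a numerical illustration of the stock dynamics, the following sketch simulates the simplest case $m=d=k=1$ by an Euler scheme. All parameter values and the uniform jump-size law are illustrative assumptions, not taken from the paper; since $W$ and the compensated $X$ are martingales, $\mathbb{E}[S_T]=S_0\,e^{\mu T}$, which the Monte Carlo average reproduces up to discretization and sampling error:

```python
import numpy as np

# Euler scheme for dS/S_ = mu dt + sigma dW + rho dX, with X a compensated
# compound Poisson process.  Jump sizes are Uniform(-0.2, 0.2): mean zero,
# so X is a martingale without an explicit compensator term, and x > -1 holds.
rng = np.random.default_rng(42)
mu, sigma, rho, lam = 0.05, 0.2, 0.5, 2.0   # drift, vol, jump sensitivity, intensity
T, n, paths = 1.0, 200, 4000
dt = T / n
S = np.full(paths, 1.0)
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    njump = rng.poisson(lam * dt, paths)
    jumps = np.array([rng.uniform(-0.2, 0.2, nj).sum() for nj in njump])
    S *= 1.0 + mu * dt + sigma * dW + rho * jumps
assert np.all(S > 0)                          # jump sizes > -1 keep S positive
assert abs(S.mean() - np.exp(mu * T)) < 0.02  # E[S_T] = S_0 * exp(mu*T)
```

The same loop extends componentwise to several stocks and several jump coordinates by replacing the scalars $\sigma$ and $\rho$ with the matrices $\sigma_{ij}$ and $\rho_{ij}$.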
In particular, the force of mortality $\lambda$ shall be given as the solution of the SDE $$d\lambda_{t} = \mu_{\lambda}(t,\lambda_{t}) \ dt + \sigma_{\lambda}(t,\lambda_{t}) \ d\bar{W}_{t} + \int_{{\mathbb{R} \backslash \{0\}}} \tilde{\sigma}_{\lambda}(t,\lambda_{t{\text{-}}},\bar{x}) \ \tilde{J}_{\bar{X}}(dt,d\bar{x}), \label{eq:fom}$$ whereby $\lambda_{0} \in \mathbb{R}_{+},$ and $\mu_{\lambda}, \sigma_{\lambda}$ and $\tilde{\sigma}_{\lambda}$ satisfy Assumption \[ass\_func\_lambda\] below. We remark that the force of mortality can become negative with positive probability. In practical applications it is therefore common to choose $\mu_{\lambda}$ high and $\sigma_{\lambda}$ as well as $\tilde{\sigma}_{\lambda}$ small enough, see [@vigna] for a discussion on calibration. We take $\lambda_{t} > 0$ for all $t \in [0,T].$ We assume that $\mu_{\lambda}, \sigma_{\lambda}: [0,T] \times \mathbb{R}_{+} \to \mathbb{R}$ and $\tilde{\sigma}_{\lambda}:[0,T] \times \mathbb{R}_{+} \times \mathbb{R} \backslash \{0\} \to \mathbb{R}$ satisfy the following conditions:\ (i) (At most linear growth) There exists a constant $B_{1} < \infty$ such that for all $a \in \mathbb{R}_{+}$ it holds that $$\begin{aligned} |\mu_{\lambda}(t,a)|^{2} + |\sigma_{\lambda}(t,a)|^{2} + |\tilde{\sigma}_{\lambda}(t,a,\bar{x})|^{2} &\leq B_{1}(1+|a|^{2}).
\end{aligned}$$ (ii) (Uniform Lipschitz continuity) There exists a constant $C_{1} < \infty$ such that for all $a,b \in \mathbb{R}_{+}$ it holds that $$\begin{aligned} &|\mu_{\lambda}(t,a) - \mu_{\lambda}(t,b)|^{2} + |\sigma_{\lambda}(t,a) - \sigma_{\lambda}(t,b)|^{2}\\ & \ + \int_{\mathbb{R} \backslash \{0\}}|\tilde{\sigma}_{\lambda}(t,a,\bar{x}) - \tilde{\sigma}_{\lambda}(t,b,\bar{x})|^{2}\ \vartheta_{\bar{X}}(d\bar{x}) \leq C_{1} |b-a|^{2}.\end{aligned}$$ \[ass\_func\_lambda\] We consider a longevity bond where the reference cohort is assumed to satisfy the following: - at time $t=0$, all members of the cohort are of the same age, - the force of mortality of the cohort is entirely described by $\lambda$, - the cohort is sufficiently large that the idiosyncratic risk is pooled away. In addition, we assume that the insurer’s planning horizon $T$ and the time to maturity of the longevity bond coincide. An investor who buys the zero-coupon longevity bond at time $0 \leq t_{1} \leq T$ at the price $L_{\lambda}(t_{1},T)$ receives $\exp\left(- \int_{t_{1}}^{T} \lambda_{s}\ ds \right)$ at time $T$. Consequently, the price $L_{\lambda}(t_{1},T)$ is given by $$L_{\lambda}(t_{1},T) = \mathbb{E}_{\mathbb{Q}} \left[e^{-\int_{t_{1}}^{T}(\lambda_{s}+r) \ ds} \big| \mathcal{F}_{t_{1}} \right].
\label{eq:L_lambda}$$ Let $0 \leq t_{1} < t_{2} \leq T$ and suppose investor $A$ has bought the longevity bond at time $t_{1}$ at price $L_{\lambda}(t_{1},T)$ and there is a second investor, say $B$, who has bought the bond at time $t_{2}$ paying $L_{\lambda}(t_{2},T).$ As the final payoff depends on the length of the holding period, it is clear that investor $A$ would not have sold her bond to $B$ at price $L_{\lambda}(t_{2},T)$ at time $t_{2}$, but she would have demanded a price of $\exp\left(-\int_{t_{1}}^{t_{2}} \lambda_{s}\ ds \right) L_{\lambda}(t_{2},T).$ Therefore, if an investor has bought the longevity asset at time $t_{1}$, the dollar value of her investment at any time $t_{2} > t_{1}$ is given by $Y_{t_{2}} := \exp\left(-\int_{t_{1}}^{t_{2}} \lambda_{s}\ ds \right) L_{\lambda}(t_{2},T).$ From now on we refer to $Y$ as the dollar value process; the discounted version of $Y$ must be a martingale under the same risk-neutral measure $\mathbb{Q}.$ We assume that $(\lambda, Y)$ is a Markovian Itô jump-diffusion satisfying $$\frac{dY_{t}}{Y_{t-}} = (r + \nu_{L}(t,\lambda_{t},Y_{t})) \ dt + \sigma_{L}(t,\lambda_{t},Y_{t}) \ d\bar{W}_{t} + \int_{\mathbb{R} \setminus \{0\}} \eta_{L}(t,\lambda_{t{\text{-}}},Y_{t{\text{-}}},\bar{x}) \ \tilde{J}_{\bar{X}}(dt,d\bar{x}), \label{eq:Y}$$ with $Y_{0} = L_{\lambda}(0,T)$ and deterministic functions $\nu_{L}, \sigma_{L}, \eta_{L}.$ \[ass\_markov\] Note that the function $\nu_{L}$ in \[eq:Y\] is the market price of longevity risk.
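The conditional-expectation formula \[eq:L\_lambda\] lends itself to Monte Carlo evaluation of $L_{\lambda}(0,T)$. The sketch below is illustrative only: it assumes that, directly under $\mathbb{Q}$, the force of mortality has a mean-reverting drift $\mu_{\lambda}(t,a) = \kappa(\theta - a)$, a constant diffusion coefficient and compensated exponential jumps, and it enforces $\lambda_{t} > 0$ by clipping, as assumed in the text; all names and parameter values are invented.

```python
import numpy as np

def longevity_bond_price(lam0=0.01, kappa=0.5, theta=0.02, sig=0.004,
                         jump_rate=0.2, jump_scale=0.002, r=0.02,
                         T=10.0, n_steps=500, n_paths=20000, rng=None):
    """Monte Carlo estimate of L(0,T) = E_Q[exp(-int_0^T (lambda_s + r) ds)].

    lambda follows an Euler-discretized mean-reverting jump-diffusion;
    the jump part is compensated, so jump_rate * jump_scale * dt is
    subtracted each step (exponential jump sizes have mean jump_scale)."""
    rng = rng or np.random.default_rng(1)
    dt = T / n_steps
    lam = np.full(n_paths, lam0)
    integral = np.zeros(n_paths)          # pathwise int_0^T lambda_s ds
    for _ in range(n_steps):
        integral += lam * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        jump = np.where(rng.random(n_paths) < jump_rate * dt,
                        rng.exponential(jump_scale, n_paths), 0.0)
        lam = (lam + kappa * (theta - lam) * dt + sig * dW
               + jump - jump_rate * jump_scale * dt)
        lam = np.maximum(lam, 1e-8)       # keep lambda_t > 0 as in the text
    return float(np.mean(np.exp(-integral - r * T)))

price = longevity_bond_price()
```

Since $\lambda_{s} > 0$, any such estimate must lie strictly below the pure discount factor $e^{-rT}$; bounds of this kind are a useful sanity check on the simulation.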
We further need the following assumption: We assume that $\nu_{L}, \sigma_{L}: [0,T] \times \mathbb{R}_{+} \times \mathbb{R}_{+} \to \mathbb{R}$ and $\eta_{L}: [0,T] \times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R} {\backslash}\{0\} \to \mathbb{R}$ satisfy the following conditions: (i) (At most linear growth) There exists a constant $B_{2} < \infty$ such that for all $a,b \in \mathbb{R}_{+}$ it holds that $$\begin{aligned} |\nu_{L}(t,a,b)|^{2} + |\sigma_{L}(t,a,b)|^{2} + \int_{\mathbb{R} \backslash \{0\}} |\eta_{L}(t,a,b,\bar{x})|^{2} \ \vartheta_{\bar{X}}(d\bar{x}) &\leq B_{2}(1+|a|^{2}+|b|^{2}). \end{aligned}$$ (ii) (Uniform Lipschitz continuity) There exists a constant $C_{2} < \infty$ such that for all $a_{1},a_{2},b_{1},b_{2} \in \mathbb{R}_{+}$ it holds that $$\begin{aligned} &|b_{1}\nu_{L}(t,a_{1},b_{1}) - b_{2}\nu_{L}(t,a_{2},b_{2})| + |b_{1}\sigma_{L}(t,a_{1},b_{1}) - b_{2}\sigma_{L}(t,a_{2},b_{2})|\\ &\ + \int_{\mathbb{R} \backslash \{0\}}|b_{1}\eta_{L}(t,a_{1},b_{1},\bar{x}) - b_{2}\eta_{L}(t,a_{2},b_{2},\bar{x})|^{2}\ \vartheta_{\bar{X}}(d\bar{x}) \leq C_{2}\ ||(a_{1},b_{1})-(a_{2},b_{2})||_{2}.\end{aligned}$$ \[ass\_functions\_Y\] We now consider an insurer who can invest in the $m$ risky stocks, deposit money in the bank account and use the longevity asset to partially hedge against its mortality exposure. Let $U \subseteq \mathbb{R}^{m+1}.$ An allocation rule is a predictable function $u:[0,T] \to U,\ t \mapsto (u_{S}(t),u_{Y}(t))^{\intercal},$ whereby $u_{S} = (u_{S^{1}},\dots,u_{S^{m}})^{{\intercal}}$ denotes the dynamic allocation process that indicates the total wealth that is invested in the stocks $1,\dots,m,$ and $u_{Y}$ the total wealth invested in the longevity asset. 
The portfolio process of the insurance company using the allocation rule $u$ is denoted by $P^{u} = (P^{u}_{t})_{t \in [0,T]}$ and fulfills the SDE $$dP_{t}^{u} = u_{S}^{\intercal}(t{\text{-}})\ \frac{dS_{t}}{S_{t{\text{-}}}} + u_{Y}(t{\text{-}}) \ \frac{dY_{t}}{Y_{t{\text{-}}}} + (P_{t}^{u} - u_{S}^{\intercal}(t) \textbf{1} - u_{Y}(t))r \ dt, \label{eq:sde_portf_proc}$$ with initial wealth $P_{0}^{u} = p > 0$, where $\mathbf{1} \in \mathbb{R}^{m}$ denotes a column vector of ones. Observe that $P^{u}$ as defined in \[eq:sde\_portf\_proc\] is self-financing. An allocation rule $u$ is *admissible* if for any point $(t,p) \in [0,T) \times \mathbb{R}_{+}$ there exists a unique càdlàg adapted solution $P^{u}$ of \[eq:sde\_portf\_proc\] such that $\mathbb{E}[|P_{t}^{u}|^{2}] < \infty$ for all $t$. We denote by $\mathcal{U}$ the set of admissible allocation rules. Optimization Problem ==================== Classical mean-variance portfolio selection aims at finding a strategy that simultaneously maximizes the expected terminal payoff of a portfolio while minimizing its variance. We first consider the more general case where an insurance company trades in the financial and longevity market in order to hedge a terminal condition. Before rigorously defining what is meant by an equilibrium control in a stochastic optimization problem, we need some more notation and a target functional. Let $Z := (S^{1},\dots,S^{m},\lambda,Y)\in \mathbb{R}^{m+2}_{+}$ be the vector containing the traded assets as well as the force of mortality $\lambda$. Let $H = (H_{t})_{t \in [0,T]}$ be an $l$-dimensional Markovian jump-diffusion adapted to $(\mathcal{F}_{t})$ and $D: \mathbb{R}^{l} \to \mathbb{R}$ some function. The goal is a mean-variance optimal hedge of $D(H_{T})$ using $P^{u}$.
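An Euler discretization of \[eq:sde\_portf\_proc\] makes the self-financing property concrete. The sketch below is illustrative: it takes $m=1$, omits the jump components, and treats the longevity asset as in \[eq:Y\] with constant $\nu_{L}$ and $\sigma_{L}$; the allocation rule holds a constant dollar amount in each asset, and all parameter values are invented.

```python
import numpy as np

def simulate_portfolio(p0=100.0, u_s=20.0, u_y=10.0, r=0.02,
                       mu=0.05, sigma=0.2, nu_l=0.01, sigma_l=0.05,
                       T=1.0, n_steps=10000, rng=None):
    """Euler scheme for dP = u_S dS/S- + u_Y dY/Y- + (P - u_S - u_Y) r dt,
    with one stock (geometric Brownian motion, jumps omitted) and the
    longevity asset Y driven by the independent Brownian motion W-bar."""
    rng = rng or np.random.default_rng(2)
    dt = T / n_steps
    p = p0
    for _ in range(n_steps):
        dW, dW_bar = rng.normal(0.0, np.sqrt(dt), 2)
        ret_s = mu * dt + sigma * dW                 # relative return dS/S-
        ret_y = (r + nu_l) * dt + sigma_l * dW_bar   # relative return dY/Y-
        p += u_s * ret_s + u_y * ret_y + (p - u_s - u_y) * r * dt
    return p

# With no risky positions the portfolio is a pure bank-account
# investment, so P_T should be close to p0 * exp(r T).
riskless = simulate_portfolio(u_s=0.0, u_y=0.0)
```

The zero-allocation run is a convenient check that the discretization respects the self-financing condition: all wealth then compounds at the riskless rate.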
Consider a process $\hat{\lambda} = (\hat{\lambda}_{t})_{t \in [0,T]}$ solving the SDE $$d\hat{\lambda}_{t} = \mu_{\hat{\lambda}}(t,\hat{\lambda}_{t}) \ dt + \sigma_{\hat{\lambda}}(t,\hat{\lambda}_{t}) \ d\bar{W}_{t} + \int_{{\mathbb{R} \backslash \{0\}}} \tilde{\sigma}_{\hat{\lambda}}(t,\hat{\lambda}_{t{\text{-}}},\bar{x}) \ \tilde{J}_{\bar{X}}(dt,d\bar{x}),$$ with $\hat{\lambda}_{0} > 0.$ Suppose $\hat{\lambda}$ describes the force of mortality of the pool of insured persons, and let $m=1$ for ease of exposition. If the insurer needs to deliver one share of $S$ to each person in its pool that has survived until the terminal time $T$, then the obligation $D(H_{T}) = S_{T}\ e^{-\int_{0}^{T} \hat{\lambda}_{s} \ ds}$ is to be hedged. \[ex\_H\] We write $\mathbb{E}_{t,p,z,h}[\cdot] = \mathbb{E}[\cdot | P_{t} = p, Z_{t} = z, H_{t} = h]$ for the conditional expectation given $(t,p,z,h) \in [0,T) \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}$ and ${\text{Var}}_{t,p,z,h}$ denotes the conditional variance accordingly. Let $\gamma > 0$ be a risk-aversion parameter. For each $u \in \mathcal{U}$ and $\gamma > 0,$ we define the functions $F_{u}, g_{u}: [0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l} \to \mathbb{R}$ by $$\begin{aligned} \begin{split} g_{u}(t,p,z,h) &= \mathbb{E}_{t,p,z,h}[P_{T}^{u}-D(H_{T})], \\ F_{u}(t,p,z,h) &= \mathbb{E}_{t,p,z,h}\left[P_{T}^{u} - \frac{\gamma}{2} (P_{T}^{u})^{2} + \gamma P_{T}^{u} D(H_{T}) - D(H_{T}) - \frac{\gamma}{2} D(H_{T})^{2}\right].
\label{eq:aux_fct} \end{split}\end{aligned}$$ \[aux\_fct\] We also need to define the following differential operator: To any vector $u \in \mathcal{U}$ we associate the operator $\mathcal{A}^{u}: f \mapsto \mathcal{A}^{u}f$ mapping $C^{1,2,2,2}([0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}, \mathbb{R})$ to $C^{0,0,0,0}([0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}, \mathbb{R})$ given by $$\mathcal{A}^{u}f(t,p,z,h) = \lim_{\epsilon \downarrow 0} \frac{\mathbb{E}_{t,p,z,h}[f(t+\epsilon,P_{t+\epsilon}^{u},Z_{t+\epsilon},H_{t+\epsilon})] - f(t,p,z,h)}{\epsilon}\ \text{(if the limit exists)}. \label{eq:def_gen}$$ \[def\_gen\] The differential operator introduced in Definition \[def\_gen\] is known as *infinitesimal generator of the graph* of the process $(P^{u},Z,H).$ Two further differential operators are needed; we presume them to act on suitably differentiable functions $f:$ - $\mathcal{L}^{u}f(t,p,z,h) := \mathcal{A}^{u}f(t,p,z,h) - \dot{f}(t,p,z,h),$ which is called *infinitesimal generator* of the process $(P^{u},Z,H)$, - $\mathcal{G}^{u}f(t,p,z,h) := \gamma f(t,p,z,h) \mathcal{L}^{u} f(t,p,z,h) - \frac{\gamma}{2} \mathcal{L}^{u} f^{2}(t,p,z,h).$ We define the *value function* $J:[0,T]\times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l} \times \mathcal{U} \to \mathbb{R} $ by $$\begin{aligned} J(t,p,z,h,u) &:= \mathbb{E}_{t,p,z,h}[P_{T}^{u} - D(H_{T})] - \frac{\gamma}{2}\ {\text{Var}}_{t,p,z,h}[P_{T}^{u} - D(H_{T})] \\ &= F_{u}(t,p,z,h) + \frac{\gamma}{2}\ g_{u}^{2}(t,p,z,h).\end{aligned}$$ \[def\_value\_fct\] The second equality in Definition \[def\_value\_fct\] easily follows from \[eq:aux\_fct\]. Finding some $u^{\star} \in \mathcal{U}$ such that $J(t,p,z,h,u^{\star})$ is *maximal* is a time-inconsistent control problem: a control that is optimal from the perspective of time $t$ induces a path that the investor would not actually follow at later times. Therefore we next introduce the concept of an equilibrium control.
- A trading strategy $u^{\star} \in \mathcal{U}$ is an *equilibrium control* if $$\liminf_{c \to 0} \frac{J(t,p,z,h,u^{\star}) - J(t,p,z,h,u_{t+c})}{c} \geq 0, \label{eq:equ_control}$$ for any $(t,p,z,h) \in [0,T) \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}$ and for all $$u_{t+c} := \begin{cases} u, \ \ \text{on}\ [t,t+c] \times B_{p} \times B_{z} \times B_{h}, \\ u^{\star}, \ \ \text{on}\ \{[t,t+c] \times B_{p} \times B_{z} \times B_{h}\}^{c}, \end{cases}$$ $t+c \leq T,$ where $u \in \mathcal{U}$ and $B_{p},B_{z},B_{h}$ are some arbitrary balls centered at $p,z,h,$ respectively.\ - The *equilibrium value function* is defined by $$V(t,p,z,h) := J(t,p,z,h,u^{\star}).$$ - An equilibrium policy $u^{\star}$ is of *feedback type* if, for some *feedback function*\ $u_{\star}: [0,T] \times \mathbb{R}_{+}\times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l} \to \mathcal{U},$ we have $$u_{t}^{\star} = u_{\star}(t,P_{t-}^{\star},Z_{t-},H_{t-}), \ t \in [0,T], \label{eq:feedback}$$ with $P_{0-}^{\star} = p$, $Z_{0-} = Z_{0}$ and $H_{0-} = H_{0}.$ \[def\_equ\] We see from \[eq:equ\_control\] that a strategy is an equilibrium if a deviation is suboptimal given the knowledge that every future player will obey that strategy. In the sequel we will search for an equilibrium control law of feedback type. Recall that the optimal value function of a standard time-consistent stochastic optimal control problem is the solution of a partial integro-differential equation (PIDE) known as the Hamilton-Jacobi-Bellman (HJB) equation. In [@bm] a similar approach for time-inconsistent stochastic optimal control problems is introduced, leading to a system of PIDEs. The system to be solved is subsequently referred to as *extended HJB system* and reduces to the classical case for a time-consistent problem. We now specify the extended HJB system corresponding to the value function from Definition \[def\_value\_fct\].
For $(t,p,z,h) \in [0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}, $ $$\begin{aligned} \begin{split} \dot{V}(t,p,z,h) + \sup_{u \in U}\ \{\mathcal{L}^{u}V(t,p,z,h) + \mathcal{G}^{u}g_{u}(t,p,z,h)\} &= 0, \\ V(T,p,z,h) &= p-D(h), \\ \mathcal{A}^{\hat{u}}g_{\hat{u}}(t,p,z,h) & =0, \\ g_{\hat{u}}(T,p,z,h) = \mathbb{E}_{T,p,z,h}[P_{T}^{\hat{u}}-D(H_{T})] &=p-D(h), \end{split} \label{eq:ext_hjb}\end{aligned}$$ where $\hat{u} = \operatorname*{arg\,sup}_{u \in U}\{\mathcal{L}^{u}V(t,p,z,h) + \mathcal{G}^{u}g_{u}(t,p,z,h)\}.$ \[ext\_hjb\] A solution to the extended HJB-system is the quadruple\ $(\hat{u}, V(t,p,z,h), F_{\hat{u}}(t,p,z,h), g_{\hat{u}}(t,p,z,h)).$ Sufficiency and Necessity ========================= Before proving two verification results, we need the following assumption: The limit $\mathcal{A}^{u^{\star}}V(t,p,z,h)$ defined in \[eq:def\_gen\] exists. A *regular equilibrium* is a quadruple $(u^{\star}, V(t,p,z,h), F_{u^{\star}}(t,p,z,h),\\ g_{u^{\star}}(t,p,z,h)),$ with $u^{\star}$ being an equilibrium control of feedback type with corresponding equilibrium value function $V$ (cf. Definition \[def\_equ\]). \[def\_reg\] The first verification theorem says that if the extended HJB-system given by Definition \[ext\_hjb\] has a solution, then its maximizer $\hat{u}$ must be the equilibrium control law for the mean-variance hedge. In other words, the solvability of the extended HJB-system is *sufficient* for the existence of an equilibrium control. Suppose $F_{u^{\star}}(t,p,z,h), g_{u^{\star}}(t,p,z,h) \in C^{1,2,2,2}([0,T) \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}, \mathbb{R}) \cap C^{0,0,0,0}([0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}, \mathbb{R})$ and $V, F_{u^{\star}}, g_{u^{\star}}$ solve the extended HJB-system in Definition \[ext\_hjb\].
Assume the control law $u^{\star}$ realizes the supremum in the first row for every quadruple $(t,p,z,h) \in [0,T] \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l}$. Then there exists an equilibrium control law $u^{\star}$ in the sense of Definition \[def\_equ\] and it is given by the optimal $u$ in the first row of \[eq:ext\_hjb\]. Moreover, $V$ is the corresponding equilibrium value function and $F_{u^{\star}}$ and $g_{u^{\star}}$ are given by \[eq:aux\_fct\]. The proof can be conducted similarly to the proof of Theorem 7.1 in [@bm] and is therefore omitted. Next we show that an equilibrium control is *necessarily* a solution of the extended HJB-system. Such a proof is provided in [@lindensjo] for a diffusion case and we extend it to the present jump-diffusion setting including the hedge of the terminal condition. A regular equilibrium $(u^{\star}, V(t,p,z,h), F_{u^{\star}}(t,p,z,h), g_{u^{\star}}(t,p,z,h))$ in the sense of Definition \[def\_reg\] necessarily solves the extended HJB-system and $u^{\star}$ realizes the supremum in the first row. \[thm\_nec\] The proof proceeds in several steps. We start by introducing two sequences of stopping times that will be repeatedly needed in the sequel. Let $(c_{k})_{k \in \mathbb{N}}$ be a strictly positive monotone sequence satisfying $\lim_{k \to \infty} c_{k} = 0.$ Let $(t,p,z,h,u)\in [0,T) \times \mathbb{R}_{+} \times \mathbb{R}^{m+2}_{+} \times \mathbb{R}^{l} \times \mathcal{U}$ be arbitrary and denote by $B_{p},B_{z},B_{h}$ balls centered at $p,z,h,$ respectively. Define the sequence of stopping times $(\sigma_{k}^{u})$ by $$\sigma_{k}^{u} := \inf\{s > t: (s,P_{s}^{u},Z_{s},H_{s}) \notin [t,t+c_{k}) \times B_{p} \times B_{z} \times B_{h}\} \wedge T. \label{eq:typ_el}$$ Consider the sequence of stopping times $(\sigma_{k}^{u})$ with a typical element given by \[eq:typ\_el\]. It holds that $$\sigma_{k}^{u} > t\ \text{a.s.}$$ This is an immediate consequence of the càdlàg property of the mapping\ $t \mapsto (t,P_{t}^{u},Z_{t},H_{t})$ (cf.
[@applebaum], p.106): let $k \in \mathbb{N}$ and $\omega \in \Omega$ be arbitrary. For any $\epsilon > 0$ there exists some $ \delta > 0$ such that for all $s \in (t,t+\delta)$ it holds that $$||(s,P^{u}_{s},Z_{s},H_{s}) - (t,P_{t}^{u},Z_{t},H_{t})||_{2} < \epsilon,$$ thus, $\sigma_{k}^{u} \geq s > t$ a.s. Observe that $\lim_{k \to \infty} \sigma_{k}^{u} = t.$ Let $(a_{k})_{k \in \mathbb{N}}$ be another positive monotone sequence satisfying $\lim_{k \to \infty} a_{k} = 0$, chosen such that the sequence of events $(A_{k})_{k \in \mathbb{N}}$ satisfies $$\begin{aligned} \begin{split} A_{k} &:= \{\omega \in \Omega: \sigma_{k}^{u} > t + a_{k}\}, \\ \mathbb{P}(A_{k}) &\geq 1- \frac{1}{k^{2}}. \label{eq:A_{k}} \end{split}\end{aligned}$$ Consider the event $A_{k}$ and its probability of occurrence defined above. Then it holds that $$\mathbbm{1}_{A_{k}}(\omega) = 1\ \text{a.s.},$$ for all but finitely many $k$. \[lem\_set\] Observe that $\mathbb{P}(A_{k}^{c}) \leq \frac{1}{k^{2}}$ and therefore $\sum_{k=1}^{\infty} \mathbb{P}(A_{k}^{c}) \leq \frac{\pi^{2}}{6} < \infty.$ The *Borel-Cantelli lemma* implies that $\mathbbm{1}_{A_{k}^{c}}(\omega) = 0$ for all but finitely many $k$ and the claim follows. Define a typical element of the sequence of stopping times $(\tau_{k}^{u})_{k \in \mathbb{N}}$ by $$\tau_{k}^{u} := \min \{\sigma_{k}^{u}, t+ a_{k}\}. \label{eq:tau_k}$$ Let $u^{\star}$ be an equilibrium control and consider the function $g_{u}$ defined by \[eq:aux\_fct\]. Then it holds that $$\mathcal{A}^{u^{\star}}g_{u^{\star}}(t,p,z,h) = 0. \label{eq:kolmog}$$ \[lem\_kol\] Using *Dynkin’s Formula* (see e.g. [@oksjump], p.12), we find that $$\begin{aligned} g&_{u^{\star}}(t,p,z,h) \\ &= \mathbb{E}_{t,p,z,h}\left[g_{u^{\star}}(\tau_{k}^{u^{\star}},P^{u^{\star}}_{\tau_{k}^{u^{\star}}},Z_{\tau_{k}^{u^{\star}}},H_{\tau_{k}^{u^{\star}}}) - \int_{t}^{\tau_{k}^{u^{\star}}}\mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds\right].
$$ It is a simple consequence of the *Tower Property* that $$\begin{aligned} \mathbb{E}&_{t,p,z,h}[g_{u^{\star}}(\tau_{k}^{u^{\star}},P^{u^{\star}}_{\tau_{k}^{u^{\star}}},Z_{\tau_{k}^{u^{\star}}},H_{\tau_{k}^{u^{\star}}})] \\ &= \mathbb{E}_{t,p,z,h}\left[\mathbb{E}_{\tau_{k}^{u^{\star}},P^{u^{\star}}_{\tau_{k}^{u^{\star}}},Z_{\tau_{k}^{u^{\star}}},H_{\tau_{k}^{u^{\star}}}}[P_{T}^{u^{\star}}-D(H_{T})]\right] \\ &= \mathbb{E}_{t,p,z,h}[P_{T}^{u^{\star}}-D(H_{T})] = g_{u^{\star}}(t,p,z,h).\end{aligned}$$ Combining the previous two results, we find that $$\mathbb{E}_{t,p,z,h}\left[\frac{\int_{t}^{\tau_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right] = 0.$$ Consider the sequence of random variables $$\left(\frac{\int_{t}^{\tau_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right)_{k \in \mathbb{N}},$$ and note that the integrand is bounded on the interval $[t,\tau_{k}^{u^{\star}}],$ even if there is a large jump at $\tau_{k}^{u^{\star}}$ since this point has Lebesgue measure zero. Therefore we can use *Dominated Convergence* to see that $$\begin{aligned} \lim_{k \to \infty}\ &\mathbb{E}_{t,p,z,h}\left[\frac{\int_{t}^{\tau_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right] \\ &= \mathbb{E}_{t,p,z,h} \left[\lim_{k \to \infty} \frac{\int_{t}^{\tau_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right] \\ &= \mathbb{E}_{t,p,z,h} \left[\lim_{k \to \infty} \mathbbm{1}_{A_{k}}(\omega)\ \frac{\int_{t}^{t+a_{k}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right] \\ &\ \ + \mathbb{E}_{t,p,z,h} \left[\lim_{k \to \infty} \mathbbm{1}_{A_{k}^{c}}(\omega)\ \frac{\int_{t}^{\sigma_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}}\right]. 
\end{aligned}$$ According to Lemma \[lem\_set\] we have for arbitrary but fixed $\omega \in \Omega$ that $\mathbbm{1}_{A_{k}^{c}}(\omega) \neq 0$ for only finitely many $k$, therefore $$\lim_{k \to \infty} \mathbbm{1}_{A_{k}^{c}}(\omega) \frac{\int_{t}^{\sigma_{k}^{u^{\star}}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} =0.$$ Further, $$\begin{aligned} 0 &= \mathbb{E}_{t,p,z,h} \left[\lim_{k \to \infty} \frac{\int_{t}^{t+a_{k}} \mathcal{A}^{u^{\star}} g_{u^{\star}}(s,P_{s}^{u^{\star}},Z_{s},H_{s}) \ ds}{a_{k}} \right] \\ &= \mathbb{E}_{t,p,z,h} \left[\mathcal{A}^{u^{\star}}g_{u^{\star}}(t,p,z,h) \right] = \mathcal{A}^{u^{\star}}g_{u^{\star}}(t,p,z,h),\end{aligned}$$ where the second equality is justified by *Lebesgue’s Differentiation Theorem* (cf. [@rudin], Chapter 7); since $(t,p,z,h)$ has been arbitrarily chosen, \[eq:kolmog\] is established. Let $\tilde{u}_{\tau_{k}^{u}}$ be an allocation rule that is equal to $u(t) \equiv u \in U$ (a constant) on the interval $[t,\tau_{k}^{u}]$ and equal to the equilibrium $u^{\star}$ outside that interval, that is $$\begin{aligned} \tilde{u}_{\tau_{k}^{u}}(s) &= u\ \mathbbm{1}_{[t,\tau_{k}^{u})}(s) + u^{\star}(s) \ \mathbbm{1}_{[\tau_{k}^{u},T]}(s) \label{eq:aux_stopping} \\ &= \left(u\ \mathbbm{1}_{[t,\sigma_{k}^{u})}(s) + u^{\star}(s) \ \mathbbm{1}_{[\sigma_{k}^{u},T]}(s) \right) \mathbbm{1}_{A_{k}^{c}}(\omega) + \left(u\ \mathbbm{1}_{[t,t+a_{k})}(s) + u^{\star}(s) \ \mathbbm{1}_{[t+a_{k},T]}(s) \right) \mathbbm{1}_{A_{k}}(\omega) \label{eq:aux_stopping2}\\ &= \left(u\ \mathbbm{1}_{[t,\sigma_{k}^{u})}(s) + u^{\star}(s) \ \mathbbm{1}_{[\sigma_{k}^{u},T]}(s) \right) \mathbbm{1}_{A_{k}^{c}}(\omega) + u_{t+a_{k}}(s) \mathbbm{1}_{A_{k}}(\omega) \label{eq:aux_stopping3},\end{aligned}$$ where \[eq:aux\_stopping2\] follows from the definition of $\tau_{k}^{u}$, cf. \[eq:tau\_k\].
Moreover, as the right-hand bracket of \[eq:aux\_stopping3\] is, for each fixed $k \in \mathbb{N}$, easily seen to be a function of feedback type like $u_{t+c}$ in Definition \[def\_equ\], we can equate it to $u_{t+a_{k}}$ there. Consider an equilibrium control $u^{\star}$, the control $\tilde{u}_{\tau_{k}^{u}}$ given by \[eq:aux\_stopping\] and the function $F_{u}$ defined by \[eq:aux\_fct\]. Then we have $$\lim_{k \to \infty} \frac{F_{u^{\star}}(t,p,z,h)-F_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)}{a_{k}} = - \mathcal{A}^{u^{\star}}F_{u^{\star}}(t,p,z,h).$$ According to *Dynkin’s Formula*, $$\begin{aligned} \mathbb{E}&_{t,p,z,h}[F_{u^{\star}}(\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}})] \\ &= F_{u^{\star}}(t,p,z,h) + \mathbb{E}_{t,p,z,h} \left[\int_{t}^{\tau_{k}^{u}} \mathcal{A}^{\tilde{u}_{\tau_{k}^{u}}} F_{u^{\star}}(s,P_{s}^{\tilde{u}_{\tau_{k}^{u}}},Z_{s},H_{s})\ ds \right],\end{aligned}$$ and we observe that - the integral limits in the previous equation are $t$ and $\tau_{k}^{u},$ therefore we can denote $P_{s}^{\tilde{u}_{\tau_{k}^{u}}}$ by $P_{s}^{u}$ and $\mathcal{A}^{\tilde{u}_{\tau_{k}^{u}}}$ by $\mathcal{A}^{u}$ on the random interval $(t,\tau_{k}^{u}).$ - as the starting time point is $\tau_{k}^{u},$ it holds that $$\begin{aligned} F&_{u^{\star}}(\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}}) \\ &= \mathbb{E}_{\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}}}\left[P_{T}^{u^{\star}} - \frac{\gamma}{2} (P_{T}^{u^{\star}})^{2} + \gamma P_{T}^{u^{\star}} D(H_{T}) - D(H_{T}) - \frac{\gamma}{2} D(H_{T})^{2}\right] \\ &= \mathbb{E}_{\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}}}\left[P_{T}^{\tilde{u}_{\tau_{k}^{u}}} - \frac{\gamma}{2} (P_{T}^{\tilde{u}_{\tau_{k}^{u}}})^{2} + \gamma P_{T}^{\tilde{u}_{\tau_{k}^{u}}} D(H_{T}) - D(H_{T}) - \frac{\gamma}{2} D(H_{T})^{2}\right].
\end{aligned}$$ Using these two observations, we rewrite $$\begin{aligned} F&_{u^{\star}}(t,p,z,h) + \mathbb{E}_{t,p,z,h} \left[\int_{t}^{\tau_{k}^{u}} \mathcal{A}^{u} F_{u^{\star}}(s,P_{s}^{u},Z_{s},H_{s})\ ds \right] \\ &= \mathbb{E}_{t,p,z,h}[F_{u^{\star}}(\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}})]\\ &= \mathbb{E}_{t,p,z,h}\left[\mathbb{E}_{\tau_{k}^{u},P_{\tau_{k}^{u}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}^{u}},H_{\tau_{k}^{u}}}\left[P_{T}^{\tilde{u}_{\tau_{k}^{u}}} - \frac{\gamma}{2} (P_{T}^{\tilde{u}_{\tau_{k}^{u}}})^{2} + \gamma P_{T}^{\tilde{u}_{\tau_{k}^{u}}} D(H_{T}) - D(H_{T}) - \frac{\gamma}{2} D(H_{T})^{2}\right] \right] \\ &=\mathbb{E}_{t,p,z,h}\left[P_{T}^{\tilde{u}_{\tau_{k}^{u}}} - \frac{\gamma}{2} (P_{T}^{\tilde{u}_{\tau_{k}^{u}}})^{2} + \gamma P_{T}^{\tilde{u}_{\tau_{k}^{u}}} D(H_{T}) - D(H_{T}) - \frac{\gamma}{2} D(H_{T})^{2}\right] \\ &=F_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h).\end{aligned}$$ Finally, we use *Dominated Convergence* and *Lebesgue’s Differentiation Theorem* similarly as in the proof of Lemma \[lem\_kol\] to deduce that $$\begin{aligned} &\lim_{k \to \infty} \frac{F_{u^{\star}}(t,p,z,h)-F_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)}{a_{k}} \\ &= \lim_{k \to \infty} \frac{-\mathbb{E}_{t,p,z,h} \left[\mathbbm{1}_{A_{k}}(\omega)\int_{t}^{t+a_{k}} \mathcal{A}^{u} F_{u^{\star}}(s,P_{s}^{u},Z_{s},H_{s})\ ds \right]}{a_{k}} \\ &= \mathbb{E}_{t,p,z,h}\left[\lim_{k \to \infty} \mathbbm{1}_{A_{k}}(\omega) \frac{-\int_{t}^{t+a_{k}} \mathcal{A}^{u} F_{u^{\star}}(s,P_{s}^{u},Z_{s},H_{s})\ ds}{a_{k}}\right]\\ &= - \mathcal{A}^{u^{\star}}F_{u^{\star}}(t,p,z,h),\end{aligned}$$ which is what we set out to prove. Consider an equilibrium control $u^{\star}$, the control $\tilde{u}_{\tau_{k}^{u}}$ given by \[eq:aux\_stopping\] and the function $g_{u}$ defined by \[eq:aux\_fct\].
Then we have $$\lim_{k \to \infty} \frac{g_{u^{\star}}(t,p,z,h)^{2}-g_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)^{2}}{a_{k}} = - 2\ g_{u^{\star}}(t,p,z,h) \ \mathcal{A}^{u^{\star}} g_{u^{\star}}(t,p,z,h).$$ Using similar techniques as before, the following calculation yields $$\begin{aligned} &\lim_{k \to \infty} \frac{g_{u^{\star}}(t,p,z,h)^{2}-g_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)^{2}}{a_{k}} \\ &= - \lim_{k \to \infty} \frac{g_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}} \\ &=- \lim_{k \to \infty} \frac{\left(\mathbb{E}_{t,p,z,h}[P_{T}^{\tilde{u}_{\tau_{k}^{u}}}-D(H_{T})]\right)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}} \\ &\stackrel{\text{T.P.}}{=} - \lim_{k \to \infty} \frac{\left(\mathbb{E}_{t,p,z,h}\left[\mathbb{E}_{\tau_{k},P_{\tau_{k}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}},H_{\tau_{k}}}\left[P_{T}^{\tilde{u}_{\tau_{k}^{u}}}-D(H_{T})\right]\right]\right)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}} \\ &= - \lim_{k \to \infty} \frac{\left(\mathbb{E}_{t,p,z,h}\left[\mathbb{E}_{\tau_{k},P_{\tau_{k}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}},H_{\tau_{k}}}\left[P_{T}^{u^{\star}}-D(H_{T})\right]\right]\right)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}}\\ & = - \lim_{k \to \infty} \frac{\left(\mathbb{E}_{t,p,z,h}\left[g_{u^{\star}}(\tau_{k},P_{\tau_{k}}^{\tilde{u}_{\tau_{k}^{u}}},Z_{\tau_{k}},H_{\tau_{k}})\right]\right)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}} \\ &= - \lim_{k \to \infty} \frac{\left(g_{u^{\star}}(t,p,z,h) + \mathbb{E}_{t,p,z,h} \left[\int_{t}^{\tau_{k}^{u}} \mathcal{A}^{u} g_{u^{\star}}(s,P_{s}^{u},Z_{s},H_{s})\ ds \right]\right)^{2}-g_{u^{\star}}(t,p,z,h)^{2}}{a_{k}} \\ &= - 2\ g_{u^{\star}}(t,p,z,h) \ \mathcal{A}^{u^{\star}} g_{u^{\star}}(t,p,z,h),\end{aligned}$$ where the abbreviation T.P. stands for *Tower Property*. Consider an equilibrium control $u^{\star}$, the control $\tilde{u}_{\tau_{k}^{u}}$ given by and the value function $J$ specified in Definition \[def\_value\_fct\]. 
Then it holds that $$- \lim_{k \to \infty} \frac{J(t,p,z,h,u^{\star}) - J(t,p,z,h,\tilde{u}_{\tau_{k}^{u}})}{a_{k}} = \mathcal{A}^{u^{\star}}V(t,p,z,h)+ \mathcal{G}^{u^{\star}}g_{u^{\star}}(t,p,z,h) .$$ \[lem\_inequ\] $$\begin{aligned} &- \lim_{k \to \infty} \frac{J(t,p,z,h,u^{\star}) - J(t,p,z,h,\tilde{u}_{\tau_{k}^{u}})}{a_{k}} \\ &= - \lim_{k \to \infty} \frac{F_{u^{\star}}(t,p,z,h)-F_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h) + \frac{\gamma}{2} \left(g_{u^{\star}}(t,p,z,h)^{2}-g_{\tilde{u}_{\tau_{k}^{u}}}(t,p,z,h)^{2} \right)}{a_{k}} \\ &= \mathcal{A}^{u^{\star}} F_{u^{\star}}(t,p,z,h) + \gamma g_{u^{\star}}(t,p,z,h) \mathcal{A}^{u^{\star}} g_{u^{\star}}(t,p,z,h) \\ &= \dot{F}_{u^{\star}}(t,p,z,h) + \mathcal{L}^{u^{\star}} F_{u^{\star}}(t,p,z,h) + \gamma g_{u^{\star}}(t,p,z,h) \left(\dot{g}_{u^{\star}}(t,p,z,h) + \mathcal{L}^{u^{\star}} g_{u^{\star}}(t,p,z,h)\right) \\ &= \underbrace{\dot{F}_{u^{\star}}(t,p,z,h) + \gamma g_{u^{\star}}(t,p,z,h) \dot{g}_{u^{\star}}(t,p,z,h)}_{=\dot{V}(t,p,z,h)} + \underbrace{\mathcal{L}^{u^{\star}} F_{u^{\star}}(t,p,z,h) + \frac{\gamma}{2} \mathcal{L}^{u^{\star}}g_{u^{\star}}(t,p,z,h)^{2}}_{=\mathcal{L}^{u^{\star}}V(t,p,z,h)} \\ & + \underbrace{\gamma g_{u^{\star}}(t,p,z,h) \mathcal{L}^{u^{\star}} g_{u^{\star}}(t,p,z,h) - \frac{\gamma}{2} \mathcal{L}^{u^{\star}}g_{u^{\star}}(t,p,z,h)^{2}}_{= \mathcal{G}^{u^{\star}}g_{u^{\star}}(t,p,z,h)}.\end{aligned}$$ The proof is conducted in four steps:\ <span style="font-variant:small-caps;">Step $1$:</span> We show the boundary conditions.\ The boundary conditions $V(T,p,z,h) = p-D(h)$ and $g_{u^{\star}}(T,p,z,h) = p-D(h)$ are met by the equilibrium control law $u^{\star}$, which follows from Definition \[aux\_fct\] and Definition \[def\_equ\].\ <span style="font-variant:small-caps;">Step $2$:</span> Observe that $\mathcal{A}^{u^{\star}}g_{u^{\star}}(t,p,z,h) = 0$ is stated by Lemma \[lem\_kol\].\ <span style="font-variant:small-caps;">Step $3$:</span> We show that 
$\mathcal{A}^{u^{\star}}V(t,p,z,h) + \mathcal{G}^{u^{\star}}g_{u^{\star}}(t,p,z,h) = 0.$\ Recall from Definition \[def\_value\_fct\] and Definition \[def\_equ\] that $V(t,p,z,h) = F_{u^{\star}}(t,p,z,h) + \frac{\gamma}{2} g_{u^{\star}}^{2}(t,p,z,h).$ Following a similar line of reasoning as in the proof of Lemma \[lem\_kol\], one can show that $\mathcal{A}^{u^{\star}}F_{u^{\star}}(t,p,z,h) = 0.$ So we have $$\begin{aligned} \mathcal{A}&^{u^{\star}}V(t,p,z,h) + \mathcal{G}^{u^{\star}}g_{u^{\star}}(t,p,z,h) \\ &= \frac{\gamma}{2} \mathcal{A}^{u^{\star}}g_{u^{\star}}^{2}(t,p,z,h) + \gamma g_{u^{\star}}(t,p,z,h) \mathcal{L}^{u^{\star}} g_{u^{\star}}(t,p,z,h) - \frac{\gamma}{2} \mathcal{L}^{u^{\star}} g_{u^{\star}}^{2}(t,p,z,h) \\ &= \frac{\gamma}{2} \mathcal{A}^{u^{\star}}g_{u^{\star}}^{2}(t,p,z,h) + \gamma g_{u^{\star}}(t,p,z,h) (\mathcal{A}^{u^{\star}} g_{u^{\star}}(t,p,z,h) - \dot{g}_{u^{\star}}(t,p,z,h)) \\ &\ \ - \frac{\gamma}{2} (\mathcal{A}^{u^{\star}} g_{u^{\star}}^{2}(t,p,z,h) - 2g_{u^{\star}}(t,p,z,h) \dot{g}_{u^{\star}}(t,p,z,h)) \\ &= \gamma g_{u^{\star}}(t,p,z,h)\ \mathcal{A}^{u^{\star}} g_{u^{\star}}(t,p,z,h) =0,\end{aligned}$$ where the last equality follows from <span style="font-variant:small-caps;">Step 2</span>.\ So far, we have shown that the regular equilibrium $(u^{\star},V(t,p,z,h),F_{u^{\star}}(t,p,z,h),\\ g_{u^{\star}}(t,p,z,h))$ is a prospective solution of the extended HJB-system \[eq:ext\_hjb\]. Therefore it remains to show that $u^{\star}$ indeed attains the supremum in the first row of \[eq:ext\_hjb\].\ <span style="font-variant:small-caps;">Step 4:</span> We show that $0 \geq \mathcal{A}^{u^{\star}}V(t,p,z,h) + \mathcal{G}^{u^{\star}}g_{u^{\star}}(t,p,z,h).$\ In the following calculation, the first inequality follows from the definition of the equilibrium control $u^{\star}$, cf. Definition \[def\_equ\].
Observe that $$\begin{aligned} 0 & \geq - \liminf_{c \searrow 0} \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+c})}{c} \\ &= - \liminf_{c \searrow 0} \Bigg(\frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+c})}{c} \mathbbm{1}_{A_{k}}(\omega) \\ &\ + \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+c})}{c} \mathbbm{1}_{A_{k}^{c}}(\omega) \Bigg) \\ &= - \liminf_{k \to \infty} \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+a_{k}})}{a_{k}} \mathbbm{1}_{A_{k}}(\omega)\\ &\ - \liminf_{k \to \infty} \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+c_{k}})}{c_{k}} \mathbbm{1}_{A_{k}^{c}}(\omega).\end{aligned}$$ Note that Lemma \[lem\_set\] implies that $$\liminf_{k \to \infty} \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+c_{k}})}{c_{k}} \mathbbm{1}_{A_{k}^{c}}(\omega) = 0.$$ Consequently, we deduce from \[eq:aux\_stopping3\] that $$\begin{aligned} &- \liminf_{k \to \infty} \frac{J(t,p,z,h,u^{\star})-J(t,p,z,h,u_{t+a_{k}})}{a_{k}} \mathbbm{1}_{A_{k}}(\omega) \\ &= - \liminf_{k \to \infty} \frac{J(t,p,z,h,u^{\star}) - J(t,p,z,h,\tilde{u}_{\tau_{k}^{u}})}{a_{k}},\end{aligned}$$ and Lemma \[lem\_inequ\] concludes the proof. Explicit solution ================= In the sequel, let $D \equiv 0$, i.e., we consider an investor aiming at a high expected terminal payoff while keeping its variance low. In this special case the extended HJB-system admits explicit closed-form solutions. Some notational definitions are in order: - $\sigma_{S} := (\sigma_{ij})_{1 \leq i \leq m,\ 1\leq j \leq d},$ i.e., $\sigma_{S} \in \mathbb{R}^{m \times d}.$ - $\tilde{\sigma}_{S} := \sigma_{S} \sigma_{S}^{\intercal},$ i.e., $\tilde{\sigma}_{S} \in \mathbb{R}^{m \times m}.$ Note that $\tilde{\sigma}_{S}$ is a symmetric matrix.
- $\tilde{\sigma}_{S^{i}} := (\tilde{\sigma}_{S_{i1}},\dots,\tilde{\sigma}_{S_{im}})^{\intercal}$, i.e., $\tilde{\sigma}_{S^{i}} \in \mathbb{R}^{m}.$ - $\sigma_{S^{i}} := (\sigma_{i1},\dots,\sigma_{id})^{\intercal},$ i.e., $\sigma_{S^{i}} \in \mathbb{R}^{d}$ for every $i \in \{1,\dots,m\}.$ - $\rho_{S} := (\rho_{ij})_{1 \leq i \leq m, 1 \leq j \leq k},$ i.e., $\rho_{S} \in \mathbb{R}^{m \times k}.$ - $\tilde{\rho}_{S} := \rho_{S} \rho_{S}^{\intercal}.$ - $\rho_{S^{i}} := (\rho_{i1},\dots,\rho_{ik})^{\intercal},$ i.e., $\rho_{S^{i}} \in \mathbb{R}^{k}.$ - $\mu := (\mu_{1},\dots,\mu_{m})^{\intercal},$ i.e., $\mu \in \mathbb{R}^{m}.$ - $\tilde{\mu} := \mu - \textbf{1}r.$ - $\Delta P_{t}^{u}(x,\bar{x}) := u_{S}^{\intercal}(t) \rho_{S} x + u_{Y}(t) \eta_{L}(t,\lambda_{t-},Y_{t-},\bar{x}),$ i.e., $\Delta P^{u}_{t}(x,\bar{x}) \in \mathbb{R}.$ - $\Delta Z_{t}(x,\bar{x}) := (\text{Diag}(S_{t-})\rho_{S}x, \tilde{\sigma}_{\lambda}(t,\lambda_{t-},\bar{x}), Y_{t-} \eta_{L}(t,\lambda_{t-},Y_{t-},\bar{x})),$ i.e., $\Delta Z_{t}(x,\bar{x}) \in \mathbb{R}^{m+2}.$\ - $\mu_{i}(t,Z_{t}) := \begin{cases} \mu_{i}S_{t}^{i},\ &i=1,\dots,m,\\ \mu_{\lambda}(t,\lambda_{t}), \ &i=m+1, \\ (r+\nu_{L}(t,\lambda_{t},Y_{t}))Y_{t}, \ &i=m+2, \end{cases}$\ i.e., $\mu(t,Z_{t}) = (\mu_{1}(t,Z_{t}),\dots,\mu_{m+2}(t,Z_{t})) \in \mathbb{R}^{m+2}.$\ - $\sigma_{ij}(t,Z_{t}) := \begin{cases} \sigma_{ij}S_{t}^{i}, \ & 1\leq i\leq m, 1\leq j \leq d, \\ \sigma_{\lambda}(t,\lambda_{t}), & i=m+1, j=d+1, \\ \sigma_{L}(t,\lambda_{t},Y_{t})Y_{t}, & i=m+2, j=d+1, \\ 0, & \text{else,} \end{cases}$\ i.e., $\sigma(t,Z_{t}) \in \mathbb{R}^{(m+2) \times (d+1)}.$\ - $Q^{u}_{i}(t,Z_{t}) := \begin{cases}S_{t}^{i} u_{S}^{\intercal}(t) \sigma_{S} \sigma_{S^{i}}, & 1 \leq i \leq m, \\ u_{Y}(t) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}), & i=m+1, \\ u_{Y}(t) \sigma_{L}^{2}(t,\lambda_{t},Y_{t}) Y_{t}, & i=m+2, \end{cases}$\ i.e., $Q^{u}(t,Z_{t})=(Q^{u}_{1}(t,Z_{t}),\dots,Q^{u}_{m+2}(t,Z_{t})) \in
\mathbb{R}^{m+2}.$ Inspired by [@basak] and [@bm], we make the following *Ansatz*: $$\begin{aligned} \begin{split} V(t,p,z) &= A(t) p + B(t,z), \\ g(t,p,z) &= a(t) p + b(t,z). \end{split} \label{eq:ansatz}\end{aligned}$$ The goal is finding the functions $A,a,B,b$ as well as the equilibrium control laws of feedback type. Clearly, the functions $A,a,B,b$ are assumed to satisfy the necessary regularity conditions and the limits induced by applying the operators $\mathcal{A}, \mathcal{L}$ and $\mathcal{G}$ are assumed to exist accordingly. Consider the first line in the system and define $$\Xi(u_{S}(t), u_{Y}(t)) := \mathcal{L}^{u} V(t,p,z) + \mathcal{G}^{u}g(t,p,z).$$ Omitting details at this stage, an application of Itô’s formula for jump-diffusions yields $$\begin{aligned} \begin{split} &\Xi(u_{S}(t), u_{Y}(t)) \\ &= A(t) (pr+\tilde{\mu}^{\intercal} u_{S}(t) + u_{Y}(t) \nu_{L}(t,\lambda_{t},Y_{t})) + \nabla_{Z}B(t,Z_{t})^{\intercal} \mu(t,Z_{t})\\ &\ + \frac{1}{2}\ {\text{Tr}}\Big(\sigma(t,Z_{t})^{\intercal} H_{Z}(B(t,Z_{t})) \sigma(t,Z_{t}) \Big) - \frac{\gamma}{2}a(t)^{2} (u_{S}^{\intercal}(t) \tilde{\sigma}_{S} u_{S}(t) + u_{Y}^{2}(t) \sigma_{L}^{2}(t,\lambda_{t},Y_{t})) \\ & \ -\frac{\gamma}{2} {\text{Tr}}\left(\sigma(t,Z_{t})^{\intercal} \nabla_{Z}b(t,Z_{t})^{\intercal} \nabla_{Z}b(t,Z_{t}) \sigma(t,Z_{t}) \right) - \gamma a(t) \nabla_{Z}b(t,Z_{t})^{\intercal} Q^{u}(t,Z_{t}) \\ &\ + \int_{\mathbb{R}^{k+1} \setminus \{0\}} \left( B(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - B(t,Z_{t}) - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}B(t,Z_{t})\right)\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \\ &\ - \frac{\gamma}{2} \int_{\mathbb{R}^{k+1} \setminus \{0\}} \left(a(t) \Delta P_{t}^{u}(x,\bar{x}) + b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t}) \right)^{2}\ \vartheta_{X,\bar{X}}(dx,d\bar{x}). \end{split} \label{eq:first_line}\end{aligned}$$ Note that the maximization of w.r.t. 
$u_{S}(t)$ and $u_{Y}(t)$ is a static optimization problem in $m+1$ variables, so we solve the corresponding first order conditions (FOC). First, observe that $$\begin{aligned} \frac{\partial u_{S}^{\intercal}(t) \tilde{\sigma}_{S} u_{S}(t)}{\partial u_{S^{i}}(t)} &= 2 \sum_{j=1}^{m} \tilde{\sigma}_{S_{ij}} u_{S^{j}}(t) = 2 \tilde{\sigma}_{S^{i}}^{\intercal} u_{S}(t) , \\ \frac{\partial u_{S}^{\intercal}(t) \rho_{S} x}{\partial u_{S^{i}}(t)} &= \sum_{j=1}^{k} \rho_{ij} x^{j} = \rho_{S^{i}}^{\intercal} x.\end{aligned}$$ For arbitrary $i \in \{1,\dots,m\}$, we consider the following FOC. Observe that the interchange of differentiation and integration is justified by our assumptions. $$\begin{aligned} &\frac{\partial \Xi}{\partial u_{S^{i}}(t)}\\ &= A(t)\tilde{\mu}_{i} - \gamma a(t)^{2} \tilde{\sigma}_{S^{i}}^{\intercal} u_{S}(t) -\gamma a(t) \sigma_{S^{i}}^{\intercal} \sum_{j=1}^{m} b_{z_{j}}(t,Z_{t}) S_{t}^{j} \sigma_{S^{j}} \\ &\ - \gamma a(t) \int_{\mathbb{R}^{k+1} \setminus \{0\}} \big(a(t) \Delta P_{t}^{u}(x,\bar{x}) + b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t})\big)\rho_{S^{i}}^{\intercal}x \ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \\ &= A(t)\tilde{\mu}_{i} - \gamma a(t)^{2} \tilde{\sigma}_{S^{i}}^{\intercal} u_{S}(t) -\gamma a(t) \sigma_{S^{i}}^{\intercal} \sum_{j=1}^{m} b_{z_{j}}(t,z) S_{t}^{j} \sigma_{S^{j}} \\ &\ - \gamma a(t) \int_{\mathbb{R}^{k} \setminus \{0\}} \big(a(t) u_{S}^{\intercal}(t) \rho_{S} x + b(t,Z_{t}+\Delta Z_{t}(x,0)) - b(t,Z_{t})\big)\rho_{S^{i}}^{\intercal}x \ \vartheta_{X}(dx) \equiv 0 \\ & \Leftrightarrow A(t)\tilde{\mu}_{i} - \gamma a(t) \sigma_{S^{i}}^{\intercal} \sum_{j=1}^{m} b_{z_{j}}(t,z) S_{t}^{j} \sigma_{S^{j}} \\ &\ - \gamma a(t) \int_{\mathbb{R}^{k} \setminus \{0\}} (b(t,Z_{t}+\Delta Z_{t}(x,0)) - b(t,Z_{t}))\ x^{\intercal} \ \vartheta_{X}(dx)\ \rho_{S^{i}} \\ & \stackrel{*}{=} \gamma a(t)^{2}\left( \tilde{\sigma}_{S^{i}}^{\intercal} +\xi\ \rho_{S^{i}}^{\intercal} \ \rho_{S}^{\intercal} \right) u_{S}(t).\end{aligned}$$ We use 
the following abbreviations in the sequel: $$\begin{aligned} \xi &:= \int_{\mathbb{R}^{k} \setminus \{0\}} x x^{\intercal}\ \vartheta_{X}(dx) \notag, \\ \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}) &:= \int_{\mathbb{R} \setminus \{0\}} \eta_{L}(t,\lambda_{t},Y_{t},\bar{x})^{2} \ \vartheta_{\bar{X}}(d\bar{x}) \notag, \\ b_{1}(t,Z_{t}) &:= \int_{\mathbb{R}^{k} \setminus \{0\}} (b(t,Z_{t}+\Delta Z_{t}(x,0)) - b(t,Z_{t}))\ x \ \vartheta_{X}(dx), \label{eq:b1}\\ b_{2}(t,Z_{t}) &:= \int_{\mathbb{R} \setminus \{0\}} (b(t,Z_{t}+\Delta Z_{t}(0,\bar{x})) - b(t,Z_{t}))\ \eta_{L}(t,\lambda_{t},Y_{t},\bar{x}) \ \vartheta_{\bar{X}}(d\bar{x}). \label{eq:b2}\end{aligned}$$ Note that in the optimum an equality of type $*$ needs to hold for every $u_{S^{i}}(t),$ so using the just defined functions and matrix-vector notation, we see that the vector $u_{S}^{\star}(t)$ has to satisfy $$\begin{aligned} u_{S}^{\star}(t) &= \frac{(\tilde{\sigma}_{S} +\tilde{\rho}_{S} \xi)^{-1}}{\gamma a(t)^{2}}\ \Big( A(t)\tilde{\mu}-\gamma a(t) \sigma_{S} \sum_{j=1}^{m} b_{z_{j}}(t,Z_{t}) S_{t}^{j} \sigma_{S^{j}} - \gamma a(t) \rho_{S} b_{1}(t,Z_{t}) \Big), \label{eq:uS_prelim}\end{aligned}$$ where the symbol $\star$ indicates the optimality of the strategy. 
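The gradient identities used in the first-order conditions above can be checked numerically. The following sketch (toy dimensions and random data of our choosing, unrelated to the paper's parameters) compares the analytic gradients of $u^{\intercal}\tilde{\sigma}_{S}u$ and $u^{\intercal}\rho_{S}x$ with central finite differences:

```python
import numpy as np

# Toy numerical check of the two gradient identities from the FOC:
#   d/du_i [ u^T sigma~_S u ] = 2 * (sigma~_S u)_i   (sigma~_S symmetric)
#   d/du_i [ u^T rho_S x ]    = (rho_S x)_i
rng = np.random.default_rng(0)
m, d, k = 3, 4, 2
sigma_S = rng.normal(size=(m, d))
sigma_tilde = sigma_S @ sigma_S.T          # symmetric m x m matrix
rho_S = rng.normal(size=(m, k))
u = rng.normal(size=m)
x = rng.normal(size=k)

grad_quad = 2.0 * sigma_tilde @ u          # analytic gradient of u^T A u
grad_lin = rho_S @ x                       # analytic gradient of u^T rho_S x

eps = 1e-6
num_quad = np.zeros(m)
num_lin = np.zeros(m)
for i in range(m):
    e = np.zeros(m)
    e[i] = eps
    num_quad[i] = ((u + e) @ sigma_tilde @ (u + e)
                   - (u - e) @ sigma_tilde @ (u - e)) / (2 * eps)
    num_lin[i] = ((u + e) @ rho_S @ x - (u - e) @ rho_S @ x) / (2 * eps)

assert np.allclose(grad_quad, num_quad, atol=1e-5)
assert np.allclose(grad_lin, num_lin, atol=1e-8)
```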
Next we compute $$\begin{aligned} &\frac{\partial \Xi}{\partial u_{Y}(t)} \\ &= A(t) \nu_{L}(t,\lambda_{t},Y_{t}) - \gamma a(t)^{2} \sigma_{L}^{2}(t,\lambda_{t},Y_{t}) u_{Y}(t) - \gamma a(t)\big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t})\\ &\ + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t} \big) \\ &\ - \gamma a(t) \int_{\mathbb{R} \setminus \{0\}} \Big(a(t) u_{Y}(t) \eta_{L}(t,\lambda_{t},Y_{t},\bar{x}) + b(t,Z_{t}+\Delta Z_{t}(0,\bar{x}))-b(t,Z_{t})\Big)\eta_{L}(t,\lambda_{t},Y_{t},\bar{x})\ \vartheta_{\bar{X}}(d\bar{x}) \\ & = A(t) \nu_{L}(t,\lambda_{t},Y_{t}) - \gamma a(t)^{2} \sigma_{L}^{2}(t,\lambda_{t},Y_{t}) u_{Y}(t) - \gamma a(t)\big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t})\\ &\ + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t} \big)- \gamma a(t)^{2} \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}) u_{Y}(t) - \gamma a(t) b_{2}(t,Z_{t}) \equiv 0 \\ &\Leftrightarrow u_{Y}^{\star}(t) = \frac{A(t) \nu_{L}(t,\lambda_{t},Y_{t}) -\gamma a(t) \big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t} \big)}{\gamma a(t)^{2} (\sigma_{L}^{2}(t,\lambda_{t},Y_{t}) + \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}))} \\ &\ \ - \frac{\gamma a(t) b_{2}(t,Z_{t})}{\gamma a(t)^{2} (\sigma_{L}^{2}(t,\lambda_{t},Y_{t}) + \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}))}.\end{aligned}$$ Observe that the optimal control does not depend on $p$. We next plug $u^{\star}$ into . Then we can apply separation of variables to the first line of . This leads to an ordinary differential equation (ODE) for $A$ and a PIDE for $B$. 
The ODE for $A$ is given by $$\begin{aligned} \dot{A}(t) + A(t)r & =0, \\ A(T) & =1,\end{aligned}$$ and the solution is easily seen to be $A(t) = e^{r(T-t)}.$ The PIDE for $B$ is given by $$\begin{aligned} \begin{split} &\dot{B}(t,Z_{t}) + A(t) \left(\tilde{\mu}^{\intercal} u_{S}^{\star}(t) + u_{Y}^{\star}(t) \nu_{L}(t,\lambda_{t},Y_{t}) \right) + \nabla_{Z}B(t,Z_{t})^{\intercal} \mu(t,Z_{t})\\ & + \frac{1}{2}\ {\text{Tr}}\Big(\sigma(t,Z_{t})^{\intercal} H_{Z}(B(t,Z_{t})) \sigma(t,Z_{t}) \Big) - \frac{\gamma}{2} \left(u_{S}^{\star}(t)^{\intercal} \tilde{\sigma}_{S} u_{S}^{\star}(t) + u_{Y}^{\star}(t)^{2} \sigma_{L}^{2}(t,\lambda_{t},Y_{t}) \right) a(t)^{2}\\ & -\frac{\gamma}{2} {\text{Tr}}\left(\sigma(t,Z_{t})^{\intercal} \nabla_{Z}b(t,Z_{t})^{\intercal} \nabla_{Z}b(t,Z_{t}) \sigma(t,Z_{t}) \right) - \gamma a(t) \nabla_{Z}b(t,Z_{t})^{\intercal} Q^{u^{\star}}(t,Z_{t}) \\ &+ \int_{\mathbb{R}^{k+1} \setminus \{0\}} \left( B(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - B(t,Z_{t}) - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}B(t,Z_{t})\right)\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \\ &- \frac{\gamma}{2} \int_{\mathbb{R}^{k+1} \setminus \{0\}} \left(a(t) \Delta P_{t}^{u^{\star}}(x,\bar{x}) + b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t}) \right)^{2}\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) =0. \end{split} \label{eq:static}\end{aligned}$$ Note that $\Delta P_{t}^{u^{\star}}(x,\bar{x})$ in the last line of means the jump of the portfolio process where the investor is allocating optimally. For solving the latter PIDE, we need to find the functions $a$ and $b$. 
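As a small numerical sanity check (illustrative only), one can verify that $A(t) = e^{r(T-t)}$ indeed satisfies the terminal-value ODE $\dot{A}(t) + rA(t) = 0$, $A(T) = 1$; the values of $r$ and $T$ below are those used later in the numerical section:

```python
import numpy as np

# Check that A(t) = exp(r (T - t)) solves  A'(t) + r A(t) = 0,  A(T) = 1.
r, T = 0.02, 10.0

def A(t):
    return np.exp(r * (T - t))

# terminal condition
assert abs(A(T) - 1.0) < 1e-12

# ODE residual A'(t) + r A(t) on a grid, with A' via central differences
ts = np.linspace(0.0, T - 0.01, 50)
h = 1e-5
residual = (A(ts + h) - A(ts - h)) / (2 * h) + r * A(ts)
assert np.max(np.abs(residual)) < 1e-8
```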
To do so, we use the third equation of the system (for the special case $D \equiv 0$), namely $\mathcal{A}^{u^{\star}}g_{u^{\star}}(t,p,z) = 0.$ Following the *Ansatz* $$g(t,p,z) = a(t)p + b(t,z),$$ we obtain $$\begin{aligned} \begin{split} &\dot{a}(t)p + \dot{b}(t,Z_{t}) + a(t)(pr + \tilde{\mu}^{\intercal} u_{S}^{\star}(t) + \nu_{L}(t,\lambda_{t},Y_{t}) u_{Y}^{\star}(t)) + \nabla_{Z}b(t,Z_{t})^{\intercal} \mu(t,Z_{t})\\ & + \frac{1}{2}\ {\text{Tr}}\left( \sigma(t,Z_{t})^{\intercal} H_{Z}(b(t,Z_{t})) \sigma(t,Z_{t}) \right) \\ &+ \int_{\mathbb{R}^{k+1} \setminus \{0\}} b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t}) - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t}) \ \vartheta_{X,\bar{X}}(dx,d\bar{x}) =0, \end{split} \label{eq:pide}\end{aligned}$$ with suitable boundary conditions for $a$ and $b$. Using separation of variables again, we find the ODE $$\begin{aligned} \dot{a}(t) + a(t)r &=0, \\ a(T) &=1,\end{aligned}$$ leading to $a(t) = e^{r(T-t)}.$ Observe that $A(t)=a(t),$ so we can cancel some terms in the optimal strategies. 
Several further definitions are in order: - $\Theta_{1}^{\intercal} := \tilde{\mu}^{\intercal} (\tilde{\sigma}_{S} +\tilde{\rho}_{S} \xi)^{-1}$, i.e., $\Theta_{1}^{\intercal} \in \mathbb{R}^{m},$ - $\Theta_{2}(t,\lambda_{t},Y_{t}) := \frac{\nu_{L}(t,\lambda_{t},Y_{t})}{\sigma_{L}^{2}(t,\lambda_{t},Y_{t}) + \tilde{\eta}_{L}(t,\lambda_{t},Y_{t})},$ - $ C(t,\lambda_{t},Y_{t},x,\bar{x}) := \Theta_{1} \rho_{S} x + \Theta_{2}(t,\lambda_{t},Y_{t}) \eta_{L}(t,\lambda_{t},Y_{t},\bar{x}),$\ - $\phi_{1_{i}}(t,Z_{t}) := \begin{cases} \Theta_{1} \sigma_{S} \sigma_{S^{i}} S_{t}^{i},\ &i=1,\dots,m,\\ \Theta_{2}(t,\lambda_{t},Y_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) , \ &i=m+1, \\ \Theta_{2}(t,\lambda_{t},Y_{t}) \sigma_{L}^{2}(t,\lambda_{t},Y_{t})Y_{t}, \ &i=m+2, \end{cases}$\ i.e., $\phi_{1}(t,Z_{t}) = (\phi_{1_{1}}(t,Z_{t}),\dots,\phi_{1_{m+2}}(t,Z_{t}))\in \mathbb{R}^{m+2}.$\ Inserting $u_{S}^{\star}$ and $u_{Y}^{\star}$ into and manipulating terms, we find that $$\begin{aligned} &\dot{b}(t,Z_{t}) + \frac{\Theta_{1}}{\gamma} \tilde{\mu} \ - \Theta_{1} \sigma_{S} \sum_{j=1}^{m} b_{z_{j}}(t,Z_{t}) S_{t}^{j} \sigma_{S^{j}} - \Theta_{1} \rho_{S} b_{1}(t,Z_{t}) + \frac{\Theta_{2}(t,\lambda_{t},Y_{t}) \nu_{L}(t,\lambda_{t},Y_{t})}{\gamma}\notag \ \\ &\ - \Theta_{2}(t,\lambda_{t},Y_{t}) \big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t}\big)\notag \ \\ &\ - \Theta_{2}(t,\lambda_{t},Y_{t}) b_{2}(t,Z_{t}) + \nabla_{Z}b(t,Z_{t})^{\intercal} \mu(t,Z_{t}) + \frac{1}{2}\ {\text{Tr}}\left( \sigma(t,Z_{t})^{\intercal} H_{Z}(b(t,Z_{t})) \sigma(t,Z_{t}) \right) \notag \ \\ &\ + \int_{\mathbb{R}^{k+1} \setminus \{0\}} b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t}) - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t}) \ \vartheta_{X,\bar{X}}(dx,d\bar{x}) =0 \notag \ \\ & \Rightarrow \dot{b}(t,Z_{t}) + \frac{\Theta_{1}}{\gamma} \tilde{\mu} + 
\frac{\Theta_{2}(t,\lambda_{t},Y_{t}) \nu_{L}(t,\lambda_{t},Y_{t})}{\gamma} - \Theta_{1} \sigma_{S}\sum_{j=1}^{m} b_{z_{j}}(t,Z_{t}) S_{t}^{j} \sigma_{S^{j}}\notag \ \\ &\ - \Theta_{2}(t,\lambda_{t},Y_{t}) \big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t}\big) \notag \ \\ &\ + \nabla_{Z}b(t,Z_{t})^{\intercal} \mu(t,Z_{t}) + \frac{1}{2}\ {\text{Tr}}\left( \sigma(t,Z_{t})^{\intercal} H_{Z}(b(t,Z_{t})) \sigma(t,Z_{t}) \right) \notag \ \\ &\ + \int_{\mathbb{R}^{k+1} \setminus \{0\}} \Big(\big(b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t})\big) (1-\Theta_{1} \rho_{S}x - \Theta_{2}(t,\lambda_{t},Y_{t}) \eta_{L}(t,\lambda_{t},Y_{t},\bar{x})) \notag \ \\ &\ - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t})\Big) \ \vartheta_{X,\bar{X}}(dx,d\bar{x}) =0 \notag \ \\ & \Rightarrow \dot{b}(t,Z_{t}) + \frac{\Theta_{1}}{\gamma} \tilde{\mu} + \frac{\Theta_{2}(t,\lambda_{t},Y_{t}) \nu_{L}(t,\lambda_{t},Y_{t})}{\gamma} - \Theta_{1} \sigma_{S} \sum_{j=1}^{m} b_{z_{j}}(t,Z_{t}) S_{t}^{j} \sigma_{S^{j}}\notag \ \\ &\ - \Theta_{2}(t,\lambda_{t},Y_{t}) \big(b_{z_{m+1}}(t,Z_{t}) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) + b_{z_{m+2}}(t,Z_{t}) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t}\big) \notag \ \\ &\ + \nabla_{Z}b(t,Z_{t})^{\intercal} \mu(t,Z_{t}) + \frac{1}{2}\ {\text{Tr}}\left( \sigma(t,Z_{t})^{\intercal} H_{Z}(b(t,Z_{t})) \sigma(t,Z_{t}) \right) \notag \ \\ &\ + \int_{\mathbb{R}^{k+1} \setminus \{0\}} \Big(\big(b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t})\notag \ \\ &\ - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t})\big) (1-C(t,\lambda_{t},Y_{t},x,\bar{x}))\Big)\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \notag \ \\ &\ - \int_{\mathbb{R}^{k+1} \setminus \{0\}} (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t}) C(t,\lambda_{t},Y_{t},x,\bar{x})\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) =0 \notag \\ \begin{split} & \Rightarrow 
\dot{b}(t,Z_{t}) + \frac{\Theta_{1} \tilde{\mu} + \Theta_{2}(t,\lambda_{t},Y_{t})\nu_{L}(t,\lambda_{t},Y_{t})}{\gamma} + \nabla_{Z}b(t,Z_{t})^{\intercal} \Big(\mu(t,Z_{t}) - \phi_{1}(t,Z_{t}) \\ &\ - \int_{\mathbb{R}^{k+1} \setminus \{0\}}\Delta Z_{t}(x,\bar{x}) C(t,\lambda_{t},Y_{t},x,\bar{x})\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \Big) + \frac{1}{2}\ {\text{Tr}}\left( \sigma(t,Z_{t})^{\intercal} H_{Z}(b(t,Z_{t})) \sigma(t,Z_{t}) \right) \\ &\ + \int_{\mathbb{R}^{k+1} \setminus \{0\}} \Big(\big(b(t,Z_{t}+\Delta Z_{t}(x,\bar{x})) - b(t,Z_{t}) \\ &\ - (\Delta Z_{t}(x,\bar{x}))^{\intercal} \nabla_{Z}b(t,Z_{t})\big) (1-C(t,\lambda_{t},Y_{t},x,\bar{x}))\Big) \vartheta_{X,\bar{X}}(dx,d\bar{x}) = 0. \label{eq:pide_b} \end{split}\end{aligned}$$ Observe that is a linear PIDE and therefore solvable. In the sequel, we present a Feynman-Kac solution ([@fk]): $$\begin{aligned} \begin{split} b(t,z) &= \mathbb{E}^{\mathbb{P}^{*}}_{t,z}\left[\int_{t}^{T}\frac{\Theta_{1}\tilde{\mu} + \Theta_{2}(s,\lambda^{*}_{s},Y^{*}_{s})\nu_{L}(s,\lambda^{*}_{s},Y^{*}_{s})}{\gamma}\ ds\right]\\ & = \frac{\Theta_{1}\tilde{\mu}}{\gamma}(T-t) + \mathbb{E}^{\mathbb{P}^{*}}_{t,z}\left[\int_{t}^{T} \frac{\Theta_{2}(s,\lambda^{*}_{s},Y^{*}_{s}) \nu_{L}(s,\lambda^{*}_{s},Y^{*}_{s})}{\gamma}\ ds\right]. \end{split} \label{eq:b}\end{aligned}$$ Note that this form of the function $b$ implies that the last two terms inside the brackets in formula for $u_{S}^{\star}(t)$ given by are equal to zero. 
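The Feynman–Kac representation above can be illustrated on a deliberately simplified toy model (ours, not the paper's jump-diffusion): take the running source term $h(z) = z$ and let $Z^{*}$ be a one-dimensional Ornstein–Uhlenbeck process, for which $b(t,z) = \mathbb{E}_{t,z}[\int_{t}^{T} Z^{*}_{s}\,ds]$ has the closed form $z(1-e^{-\kappa(T-t)})/\kappa$. A Monte Carlo sketch:

```python
import numpy as np

# Toy Feynman-Kac illustration: represent the solution of a linear equation
# as the expected running integral of the source term along simulated paths.
# Here dZ = -kappa Z dt + sigma dW (Ornstein-Uhlenbeck), source h(z) = z, so
#   b(t, z) = E[ int_t^T Z_s ds | Z_t = z ] = z (1 - exp(-kappa (T - t))) / kappa.
rng = np.random.default_rng(42)
kappa, sigma = 1.0, 0.3
t, T, z0 = 0.0, 2.0, 1.5
n_paths, n_steps = 50_000, 400
dt = (T - t) / n_steps

Z = np.full(n_paths, z0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += Z * dt                     # accumulate int Z_s ds (left Riemann sum)
    Z += -kappa * Z * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

b_mc = integral.mean()
b_exact = z0 * (1.0 - np.exp(-kappa * (T - t))) / kappa
assert abs(b_mc - b_exact) < 0.02
```

The same path-averaging idea, with the measure change to $\mathbb{P}^{*}$ and the jump terms added, underlies the representation of $b$ above.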
The dynamics of $Z^{*}$ under the measure $\mathbb{P}^{*}$ read $$\begin{aligned} dZ^{*}_{t} &= \left(\mu(t,Z^{*}_{t}) - \phi^{(1)}(t,Z_{t}^{*}) - \int_{\mathbb{R}^{k+1} \setminus \{0\}} \Delta Z_{t}^{*}(x,\bar{x}) C(t,\lambda_{t}^{*},Y_{t}^{*},x,\bar{x})\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \right) dt\\ &\ + \sigma(t,Z_{t}^{*})\ dW^{*}_{t} + \int_{\mathbb{R}^{k+1} \setminus \{0\}} \Delta Z_{t}^{*}(x,\bar{x})\ \tilde{J}^{*}_{X,\bar{X}}(dt,dx,d\bar{x}),\end{aligned}$$ with $Z_{0} = (S_{0},\lambda_{0},Y_{0})^{\intercal} \in \mathbb{R}^{m+2}.$ The density process $$\Phi_{t} = \frac{d\mathbb{P}}{d\mathbb{P}^{*}}\Bigg|_{\mathcal{F}_{t}}$$ solves the SDE $$d\Phi_{t} = \Phi_{t} \psi^{(1)}(t,Z_{t})\ d\hat{W}_{t} + \Phi_{t-} \int_{\mathbb{R}^{k+1} \setminus \{0\}} \psi^{(2)}(t,Z_{t},x,\bar{x})\ \tilde{J}_{X,\bar{X}}(dt,dx,d\bar{x}), \label{eq:dens_b}$$ with $\Phi_{0} = 1.$ Moreover, $\psi^{(1)}(t,z)$ is given as a solution of the system of equations\ $$\underbrace{\sigma(t,z)}_{(m+2)\times (d+1)} \cdot \underbrace{\psi^{(1)}(t,z)}_{(d+1)\times (1)} = \underbrace{-\phi^{(1)}(t,z)}_{(m+2) \times (1)}.$$ Note that there exists at least one solution to this system because $m \leq d$ (cf. the explanations preceding ) and $\lambda$ and $Y$ are driven by the same Brownian motion $\bar{W}$. In particular, it holds that $\psi^{(1)}_{d+1}(t,Z_{t}) = \frac{- \phi^{(1)}_{m+1}(t,Z_{t})}{\sigma_{\lambda}(t,\lambda_{t})} = \frac{- \phi^{(1)}_{m+2}(t,Z_{t})}{\sigma_{L}(t,\lambda_{t},Y_{t})Y_{t}}.$ Provided that $C(t,\lambda_{t},Y_{t},x,\bar{x}) < 1$ for Lebesgue-almost all $t \in [0,T]$ and $\vartheta_{X,\bar{X}}(dx,d\bar{x})$-a.s., it holds that $$\psi^{(2)}(t,Z_{t},x,\bar{x}) = -C(t,\lambda_{t},Y_{t},x,\bar{x}).$$ If $\Phi$ is a positive martingale (see e.g.
[@kazamaki]), then, according to the *Girsanov theorem*, $\mathbb{P}^{*}$ is equivalent to $\mathbb{P}$ and $$\begin{aligned} d\hat{W}_{t} &= -\psi_{1}(t,Z_{t})\ dt + dW^{*}_{t}, \\ \vartheta^{*}_{X,\bar{X}}(dx,d\bar{x})&= (1-C(t,\lambda_{t},Y_{t},x,\bar{x}))\ \vartheta_{X,\bar{X}}(dx,d\bar{x}).\end{aligned}$$ Finally, we can represent the solution to the PIDE as $$\begin{aligned} \begin{split} &B(t,z)\\ &:= \mathbb{E}_{t,z}\Bigg[\int_{t}^{T}\Big(e^{r(T-s)}(\tilde{\mu}^{\intercal}u_{S}^{\star}(s) + u_{Y}^{\star}(s)\nu_{L}(s,\lambda_{s},Y_{s}) \\ &\ \ - \frac{\gamma}{2} \left(u_{S}^{\star}(s)^{\intercal} \tilde{\sigma}_{S} u_{S}^{\star}(s) + u_{Y}^{\star}(s)^{2} \sigma_{L}^{2}(s,\lambda_{s},Y_{s}) \right) e^{2r(T-s)}\\ &\ \ - \gamma e^{r(T-s)} \nabla_{Z}b(s,z)^{\intercal} Q^{u^{\star}}(s,Z_{s})\\ &\ \ -\frac{\gamma}{2} \int_{\mathbb{R}^{k+1} \setminus \{0\}} (e^{r(T-s)}\Delta P_{s}^{u^{\star}}(x,\bar{x}) + b(s,Z_{s-}+\Delta Z_{s}(x,\bar{x})) - b(s,Z_{s-}))^{2}\ \vartheta_{X,\bar{X}}(dx,d\bar{x}) \Big) ds\Bigg]. \end{split} \label{eq:sol_static}\end{aligned}$$ Note that the expectation in is calculated under the physical measure $\mathbb{P}.$ We summarize the most important part of the previous discussion in the following theorem: Consider the extended HJB system for the case $D \equiv 0.$ For any $t \in [0,T)$ the optimal amounts to be invested in the stocks and the longevity asset are given by $$u_{S}^{\star}(t) = \frac{\tilde{\mu} (\tilde{\sigma}_{S} +\tilde{\rho}_{S} \xi)^{-1}}{\gamma e^{r(T-t)}},$$ $$\begin{aligned} u_{Y}^{\star}(t) &= \frac{\nu_{L}(t,\lambda_{t},Y_{t}) -\gamma \big(b_{z_{m+1}}(t,z) \sigma_{\lambda}(t,\lambda_{t}) \sigma_{L}(t,\lambda_{t},Y_{t}) + b_{z_{m+2}}(t,z) \sigma^{2}_{L}(t,\lambda_{t},Y_{t}) Y_{t}\big) -\gamma b_{2}(t,z)}{\gamma (\sigma_{L}^{2}(t,\lambda_{t},Y_{t}) + \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}))e^{r(T-t)}},\end{aligned}$$ with the function $b$ given by , while $b_{2}$ is defined by . 
In addition, the equilibrium value function $V$ decomposes into $$V(t,p,z) = A(t)p + B(t,z),$$ with $A(t) = e^{r(T-t)}$ and $B(t,z)$ given by . The expected optimal terminal wealth $g_{u^{\star}}$ decomposes into $$g_{u^{\star}}(t,p,z) = a(t)p + b(t,z),$$ with $a(t) = e^{r(T-t)}$ and $b(t,z)$ given by . \[thm\_strategies\] Numerical results ================ In this part we illustrate Theorem \[thm\_strategies\]. The pricing of the longevity asset is done under some pricing measure $\mathbb{Q}$ while the optimization is performed under the objective measure $\mathbb{P}.$ Hence, we need to know the dynamics of $\lambda$ and $Y$ under both measures. Further, we need to choose a process modeling the force of mortality $\lambda$ that is nonnegative a.s. The Cox-Ingersoll-Ross (CIR) process supplemented by positive jumps (we refer to it as the JCIR process in the sequel) is a good candidate for several reasons: the CIR process cannot become negative and belongs to the class of affine models allowing for a closed-form formula of the zero-bond price, and the JCIR process preserves these properties ([@brigo]). We model the positive jumps by a compound Poisson process. The number of jumps is counted by the homogeneous Poisson process $\bar{N}=(\bar{N}_{t})_{t \in [0,T]}$ with constant intensity $\varrho_{\lambda} > 0$ and the jump sizes are independent and follow an exponential distribution with mean $\varsigma > 0.$ Thus, the Lévy measure is given by $\vartheta_{\bar{X}}(d\bar{x}) = \varrho_{\lambda} f(d\bar{x}),$ with $f$ denoting the probability density function of an exponentially distributed random variable. Let $\psi_{1}(t,Z_{t}) = \kappa \sqrt{\lambda_{t}}, \kappa > 0,$ be the market price of Brownian risk in the longevity market and denote by $\psi_{2} > -1$ the market price of jump risk.
For parameters $\beta, \sigma_{\lambda}, \theta > 0$ and $\tilde{\theta} := \theta + \frac{(1+\psi_{2})\varrho_{\lambda} \varsigma}{\beta},$ consider the dynamics of $\lambda$ given by $$d\lambda_{t}= \big[\beta \tilde{\theta} - (\beta + \kappa \sigma_{\lambda}) \lambda_{t} - \psi_{2} \varrho_{\lambda} \varsigma \big]\ dt + \sigma_{\lambda} \sqrt{\lambda_{t}}\ d\bar{W}_{t} + \int_{\mathbb{R} \setminus \{0\}} \bar{x} \tilde{J}_{\bar{X}}(dt,d\bar{x}), \label{eq:lambda_jcir_p}$$ with $\lambda_{0} > 0.$ Defining $$\begin{aligned} \begin{split} d\bar{W}_{t} &= dW^{\mathbb{Q}}_{t} + \kappa \sqrt{\lambda_{t}}\ dt, \\ \vartheta^{\mathbb{Q}}_{\bar{X}}(d\bar{x}) &= (1+\psi_{2})\ \vartheta_{\bar{X}}(d\bar{x}), \end{split} \label{eq:transformation_jcir}\end{aligned}$$ a straightforward calculation shows that the $\mathbb{Q}$-dynamics of $\lambda$ reads $$d\lambda_{t} = \beta[\theta - \lambda_{t}]\ dt + \sigma_{\lambda} \sqrt{\lambda_{t}}\ dW^{\mathbb{Q}}_{t} + \int_{\mathbb{R} \setminus \{0\}} \bar{x} \ J_{\bar{X}}(dt,d\bar{x}), \label{eq:lambda_Q}$$ $\lambda_{0} > 0$, which is the classical JCIR model under $\mathbb{Q}.$ From this we can easily deduce that an appropriate choice of parameters and the starting value $\lambda_{0}$ preserves nonnegativity. Note that we have specified the market prices of risk such that the model is tractable under both measures. In particular, the representation is consistent with the general setup in , and is the starting point for pricing. 
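The $\mathbb{P}$-dynamics of $\lambda$ can be simulated directly. A short calculation (merging the compensator of $\tilde{J}$ into the drift and using the definition of $\tilde{\theta}$) reduces the drift constant to $\beta\theta$. The following sketch uses the parameter values of Table \[par\_val\]; the full-truncation Euler scheme and the one-jump-per-step Bernoulli approximation of the compound Poisson part are simulation choices of ours, not part of the paper:

```python
import numpy as np

# Euler (full-truncation) simulation of the P-dynamics of lambda:
#   d lambda = (beta*theta - (beta + kappa*sigma_lam)*lambda) dt
#              + sigma_lam sqrt(lambda) dW + dJ,
# with J compound Poisson (rate rho_lam, Exp(varsigma) jump sizes).
rng = np.random.default_rng(7)
beta, sigma_lam, theta = 0.4, 0.3, 0.1
kappa, rho_lam, varsigma, lam0 = 0.2, 0.5, 0.001, 0.05
T, n_paths, n_steps = 10.0, 20_000, 500
dt = T / n_steps
beta_hat = beta + kappa * sigma_lam

lam = np.full(n_paths, lam0)
for _ in range(n_steps):
    lam_pos = np.maximum(lam, 0.0)         # full truncation keeps the sqrt real
    # at most one jump per step: Bernoulli(rho_lam * dt) approximation
    jump = (rng.random(n_paths) < rho_lam * dt) * rng.exponential(varsigma, n_paths)
    lam = (lam + (beta * theta - beta_hat * lam_pos) * dt
           + sigma_lam * np.sqrt(lam_pos * dt) * rng.standard_normal(n_paths)
           + jump)
lam = np.maximum(lam, 0.0)

# mean of lambda_T from the linear mean ODE m' = beta*theta + rho_lam*varsigma - beta_hat*m
m_inf = (beta * theta + rho_lam * varsigma) / beta_hat
m_exact = m_inf + (lam0 - m_inf) * np.exp(-beta_hat * T)
assert np.all(lam >= 0.0)
assert abs(lam.mean() - m_exact) < 0.006
```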
For $r \geq 0$ and $t \in [0,T),$ we are interested in the price of the zero-coupon longevity bond $$L_{\lambda}(t,T) = \mathbb{E}_{\mathbb{Q}}\left[e^{-\int_{t}^{T}(\lambda_{s}+r)\ ds}\Big| \mathcal{F}_{t} \right].$$ Defining the auxiliary process $$\tilde{L}_{\lambda}(t,T) := e^{r(T-t)} L_{\lambda}(t,T) = \mathbb{E}_{\mathbb{Q}}\left[e^{-\int_{t}^{T}\lambda_{s}\ ds}\Big| \mathcal{F}_{t} \right],$$ the affine structure of yields that $$\tilde{L}_{\lambda}(t,T) = A_{\lambda}(t,T) \alpha_{\lambda}(t,T) e^{-B_{\lambda}(t,T) \lambda_{t}}, \label{eq:tildeL}$$ for deterministic functions $A_{\lambda}, \alpha_{\lambda},$ and $B_{\lambda}$ to be found in Chapter 22 of [@brigo]. Recall that the dollar value at time $t$ of an investment in $L_{\lambda}$ at time $t=0$ is given by $Y_{t} = e^{-\int_{0}^{t} \lambda_{s}\ ds} L_{\lambda}(t,T).$ The next proposition characterizes the dynamics of $Y$ assuming that $\lambda$ is modeled by a JCIR process. Consider the process $\lambda$ given by and the specification of the pricing measure $\mathbb{Q}.$ The dollar value process $Y$ of an investment in the longevity asset $L_{\lambda}$ at time $t=0$ is given by $$\begin{aligned} \begin{split} \frac{dY_{t}}{Y_{t-}} &= \left(r + B_{\lambda}(t,T) \sigma_{\lambda} \kappa \lambda_{t} - \psi_{2} \int_{\mathbb{R} \setminus \{0\}} (e^{-B_{\lambda}(t,T) \bar{x}} -1) \vartheta_{\bar{X}}(d\bar{x}) \right) dt - B_{\lambda}(t,T) \sigma_{\lambda} \sqrt{\lambda_{t}}\ d\bar{W}_{t} \\ &\ + \int_{\mathbb{R} \setminus \{0\}} \left(e^{-B_{\lambda}(t,T)\bar{x}}-1 \right) \tilde{J}_{\bar{X}}(dt,d\bar{x}), \end{split} \label{eq:dY}\end{aligned}$$ with $Y_{0} = L_{\lambda}(0,T).$ Using , we define $f(t,\lambda_{t}) = \tilde{L}_{\lambda}(t,T).$ Then a standard calculation shows that $$\begin{aligned} \begin{split} df(t,\lambda_{t}) &= \lambda_{t}f(t,\lambda_{t})\ dt + f'(t,\lambda_{t}) \sigma_{\lambda} \sqrt{\lambda_{t}} \ dW_{t}^{\mathbb{Q}} \\ &\ + \int_{\mathbb{R} \setminus \{0\}} \left(f(t,\lambda_{t-} + 
\bar{x}) - f(t,\lambda_{t-})\right) \tilde{J}^{\mathbb{Q}}_{\bar{X}}(dt,d\bar{x}). \end{split} \label{eq:df}\end{aligned}$$ We easily see from that $$\begin{aligned} f'(t,\lambda_{t}) &= -B_{\lambda}(t,T) f(t,\lambda_{t}), \\ f(t,\lambda_{t-} + \bar{x}) - f(t,\lambda_{t-}) &= f(t,\lambda_{t-}) \left(e^{-B_{\lambda}(t,T)\bar{x}}-1 \right).\end{aligned}$$ Plugging this back into and translating $df$ to the measure $\mathbb{P}$, we obtain $$\begin{aligned} df(t,\lambda_{t}) &= f(t,\lambda_{t-}) \Bigg(\left(\lambda_{t} + B_{\lambda}(t,T) \sigma_{\lambda} \kappa \lambda_{t} - \psi_{2} \int_{\mathbb{R} \setminus \{0\}} (e^{-B_{\lambda}(t,T)\bar{x}}-1)\ \vartheta_{\bar{X}}(d\bar{x}) \right)\ dt \\ & \ - B_{\lambda}(t,T) \sigma_{\lambda} \sqrt{\lambda_{t}}\ d\bar{W}_{t} + \int_{\mathbb{R} \setminus \{0\}} (e^{-B_{\lambda}(t,T)\bar{x}}-1)\ \tilde{J}_{\bar{X}}(dt,d\bar{x}) \Bigg).\end{aligned}$$ Integration by parts then yields . We conclude the proof by remarking that the function $B_{\lambda}$ is suitably integrable. We further deduce from (cf. ) that $$\begin{aligned} \nu_{L}(t,\lambda_{t},Y_{t}) &= \nu_{L}(t,\lambda_{t}) = B_{\lambda}(t,T) \sigma_{\lambda} \kappa \lambda_{t} - \psi_{2} \int_{\mathbb{R} \setminus \{0\}} (e^{-B_{\lambda}(t,T) \bar{x}} -1)\ \vartheta_{\bar{X}}(d\bar{x}), \\ \sigma_{L}(t,\lambda_{t},Y_{t}) &= \sigma_{L}(t,\lambda_{t}) = - B_{\lambda}(t,T) \sigma_{\lambda} \sqrt{\lambda_{t}}, \\ \eta_{L}(t,\lambda_{t},Y_{t},\bar{x}) &= \eta_{L}(t,\bar{x}) = e^{-B_{\lambda}(t,T)\bar{x}}-1, \\ \tilde{\eta}_{L}(t,\lambda_{t},Y_{t}) &= \tilde{\eta}_{L}(t) = \int_{\mathbb{R} \setminus \{0\}} \eta_{L}(t,\bar{x})^{2}\ \vartheta_{\bar{X}}(d\bar{x}).\end{aligned}$$ In order to implement the strategies $u_{S}^{\star}$ and $u_{Y}^{\star},$ we need to specify the function $b(t,z) = b(t,\lambda)$ from .
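The coefficients $A_{\lambda}$ and $B_{\lambda}$ admit classical closed forms. As an illustration, the sketch below implements them for the pure-diffusion CIR special case (jumps switched off; the general JCIR coefficients are in [@brigo]) with the $\beta$, $\sigma_{\lambda}$, $\theta$ values of Table \[par\_val\], and checks basic sanity properties of the resulting price:

```python
import math

# Standard CIR zero-bond coefficients for d lambda = beta (theta - lambda) dt
# + sigma_lam sqrt(lambda) dW:  L~(t,T) = A_cir(tau) exp(-B_cir(tau) lambda_t),
# with tau = T - t.  Jumps are ignored here for simplicity.
beta, sigma_lam, theta = 0.4, 0.3, 0.1
h = math.sqrt(beta ** 2 + 2.0 * sigma_lam ** 2)

def B_cir(tau):
    e = math.exp(h * tau) - 1.0
    return 2.0 * e / (2.0 * h + (beta + h) * e)

def A_cir(tau):
    e = math.exp(h * tau) - 1.0
    base = 2.0 * h * math.exp((beta + h) * tau / 2.0) / (2.0 * h + (beta + h) * e)
    return base ** (2.0 * beta * theta / sigma_lam ** 2)

def survival_price(tau, lam):
    return A_cir(tau) * math.exp(-B_cir(tau) * lam)

# sanity: terminal condition, price in (0, 1), and monotonicity in lambda
assert abs(survival_price(0.0, 0.05) - 1.0) < 1e-12
assert 0.0 < survival_price(10.0, 0.05) < 1.0
assert survival_price(10.0, 0.10) < survival_price(10.0, 0.05)
```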
One can calculate that the density process is given as solution of the SDE $$\frac{d\Phi_{t}}{\Phi_{t-}} = -\frac{\nu_{L}(t,\lambda_{t})}{\sigma_{L}^{2}(t,\lambda_{t}) + \tilde{\eta}_{L}(t)} \sigma_{L}(t,\lambda_{t})\ d\bar{W}_{t} - \frac{\nu_{L}(t,\lambda_{t})}{\sigma_{L}^{2}(t,\lambda_{t}) + \tilde{\eta}_{L}(t)} \int_{\mathbb{R} \setminus \{0\}} \eta_{L}(t,\bar{x})\ \tilde{J}_{\bar{X}}(dt,d\bar{x}), \label{eq:dens_jcir}$$ $\Phi_{0}=1$, i.e., $\Phi_{t}$ is the stochastic exponential of the integrated right-hand side of .\ For simplicity we restrict to the case $d=k=1$ for the simulation, that is, there is only one stock traded on the market. We model the jumps of the stock price process by a homogeneous Poisson process $N = (N_{t})_{t \in [0,T]}$ with intensity $\varrho_{S} > 0.$ Denote the compensated version by $\tilde{N} = (\tilde{N}_{t})_{t \in [0,T]}.$ The dynamics of the stock price then reads $$\frac{dS_{t}}{S_{t-}} = \mu \ dt + \sigma\ dW_{t} + \rho \ d\tilde{N}_{t}, \label{eq:SN}$$ with $S_{0} > 0$. 
Finally, we see that $$\begin{aligned} \Theta_{1} &= \frac{\tilde{\mu}}{\sigma^{2} + \rho^{2} \varrho_{S}}, \\ \Theta_{2}(t,\lambda_{t},Y_{t}) &= \Theta_{2}(t,\lambda_{t}) = \frac{\nu_{L}(t,\lambda_{t})}{\sigma_{L}^{2}(t,\lambda_{t}) + \tilde{\eta}_{L}(t)}.\end{aligned}$$ Therefore the function $b$ is given by $$b(t,\lambda_{t}) = \frac{\Theta_{1}\tilde{\mu}}{\gamma}(T-t) + \mathbb{E}_{t,\lambda_{t}}\left[ \frac{\Phi_{T}}{\Phi_{t}}\int_{t}^{T} \frac{\Theta_{2}(s,\lambda_{s}) \nu_{L}(s,\lambda_{s})}{\gamma}\ ds\right],$$ and Theorem \[thm\_strategies\] implies that the optimal strategies in this market setup read $$\begin{aligned} u_{S}^{\star}(t) &= \frac{\tilde{\mu}}{(\sigma^{2} + \rho^{2} \varrho_{S})\ \gamma e^{r(T-t)}}, \label{eq:uSjcir}\\ u_{Y}^{\star}(t) &= \frac{\nu_{L}(t,\lambda_{t}) - \gamma b_{\lambda}(t,\lambda_{t}) \sigma_{\lambda} \sqrt{\lambda_{t}} \sigma_{L}(t,\lambda_{t}) - \gamma \int_{\mathbb{R} \setminus \{0\}}(b(t,\lambda_{t} + \bar{x}) - b(t,\lambda_{t})) \eta_{L}(t,\bar{x})\ \vartheta_{\bar{X}}(d\bar{x})}{(\sigma_{L}^{2}(t,\lambda_{t}) + \tilde{\eta}_{L}(t))\ \gamma e^{r(T-t)}}.
\label{eq:uYjcir}\end{aligned}$$ $P_{0}$ $T$ $r$ $\gamma$ $S_{0}$ $\mu$ $\sigma$ $\rho$ $\varrho_{S}$ $\beta$ $\sigma_{\lambda}$ $\theta$ $\kappa$ $\psi_{2}$ $\varrho_{\lambda}$ $\lambda_{0}$ $\varsigma$ --------- ------ ------- ---------- --------- ------- ---------- -------- --------------- --------- -------------------- ---------- ---------- ------------ --------------------- --------------- ------------- $1$ $10$ $.02$ $2$ $1$ $.06$ $.1$ $.1$ $3$ .4 .3 .1 .2 -.2 .5 .05 .001 : Parameter values[]{data-label="par_val"} $\mathbb{E}[S_{T}]$ $\text{Var}[S_{T}]$ $\mathbb{E}[\lambda_{T}]$ $\text{Var}[\lambda_{T}]$ --------------------- --------------------- --------------------------- --------------------------- $1.822$ 1.633 0.096 0.0102 : Expectation and Variance[]{data-label="E_Var"} $T=10$ $T=15$ $T=25$ ----------------------------- ---------- ---------- ---------- $\mathbb{E}[P^{\star}_{T}]$ $1.4371$ $1.6728$ $2.1852$ ROER $3.63\%$ $3.43\%$ $3.13\%$ $\text{Var}[P^{\star}_{T}]$ $0.1050$ $0.1616$ $0.2689$ : Expectation and Variance with different horizons[]{data-label="table_T"} (A) (B) (C) (D) (E) (F) ----------------------------- ---------- ---------- ---------- ---------- ---------- ---------- $\mathbb{E}[P^{\star}_{T}]$ $1.4244$ $1.4377$ $1.4399$ $1.4365$ $1.4379$ $1.4373$ ROER $3.54\%$ $3.63\%$ $3.65\%$ $3.62\%$ $3.63\%$ $3.63\%$ $\text{Var}[P^{\star}_{T}]$ $0.1009$ $0.1128$ $0.1077$ $0.1079$ $0.109$ $0.1083$ : Expectation and Variance: (A): without longevity asset; (B): ignoring jumps; (C): Brownian risk only; (D): std. normally distributed jump sizes of $S$; (E): $T_{L} = 15$; (F): $T_{L} = 25$[]{data-label="E_Var_P_opt_S"} In Table \[par\_val\] the assigned parameter values are summarized. We chose values that are typical in the literature. This leads to the expectations and variances of respectively the stock price and the force of mortality given in Table \[E\_Var\]. These quantities have been calculated using the formulas given by Lemma \[mom\_lam\]. 
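The stock entries of Table \[E\_Var\] can be reproduced in closed form: writing the solution of as a stochastic exponential (jumps of unit size scaled by $\rho$) gives $\mathbb{E}[S_{T}] = S_{0}e^{\mu T}$ and $\text{Var}[S_{T}] = S_{0}^{2}e^{2\mu T}(e^{(\sigma^{2}+\rho^{2}\varrho_{S})T}-1)$. The sketch below checks these against the reported values and also evaluates the constant stock allocation at $t=0$; the value $0.4094$ is our own computation, not taken from the paper:

```python
import math

# Closed-form moments of dS/S_- = mu dt + sigma dW + rho dN~ with the
# parameter values of Table [par_val], compared to Table [E_Var].
S0, mu, sigma, rho, rate_S = 1.0, 0.06, 0.1, 0.1, 3.0
r, gamma, T = 0.02, 2.0, 10.0

mean_ST = S0 * math.exp(mu * T)
var_ST = S0 ** 2 * math.exp(2 * mu * T) * (
    math.exp((sigma ** 2 + rho ** 2 * rate_S) * T) - 1.0)
assert abs(mean_ST - 1.822) < 1e-3
assert abs(var_ST - 1.633) < 1e-3

# the deterministic stock allocation of eq. (uSjcir) evaluated at t = 0
u_S0 = (mu - r) / ((sigma ** 2 + rho ** 2 * rate_S) * gamma * math.exp(r * T))
assert abs(u_S0 - 0.4094) < 1e-4
```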
In order to compare the expected payoffs under different scenarios, we consider the *rate of expected return* (ROER) given by ROER $= \ln(\mathbb{E}[P_{T}^{\star}])/T.$ Following the optimal strategies and yields the expectation, ROER and variance of the optimal terminal wealth displayed in the left panel of Table \[table\_T\]. Any modification made in the sequel will be compared against these values. The other two panels in Table \[table\_T\] show the expected value, ROER and variance of the terminal wealth when increasing the horizon to $15$ and $25$ years, respectively. We see that the expected terminal payoffs and ROERs lie significantly above the final payoffs one would receive from investing solely in the riskless asset. The overall variance of course increases slightly when the investment horizon is prolonged; for that reason the insurance company is induced to invest less in the risky asset, causing a minor decrease in the ROER over time.\ In Table \[E\_Var\_P\_opt\_S\] several further scenarios are analyzed. In Panel (A) the expectation, ROER and variance of the optimal terminal wealth without investing in the longevity asset are displayed. From we see that the amount invested in the stock is the same as in the previously described case, while the money formerly allocated to the longevity asset is now invested in the riskless asset. Comparing the values in Panel (A) of Table \[E\_Var\_P\_opt\_S\] to the left panel of Table \[table\_T\], we see a slight decrease in the expected terminal wealth and in the ROER and a slight increase in the variance. Hence, the overall effect of not investing in the longevity asset is relatively small. Next, in Panel (B), we investigated the change in mean, ROER and variance of the optimal terminal wealth when keeping $\lambda$ and $S$ as in and respectively, but erroneously assuming that $\lambda$ and $S$ do not exhibit jumps.
This corresponds to the scenario that an investor observes the variance of the stock and the force of mortality, but naively ascribes it entirely to the Brownian components. Note that in this case the values of $u_{S}^{\star}$ do not change. We observe that the expected value ($1.4377$) and the ROER ($3.63\%$) are not significantly different from the benchmark, while the variance increased to $0.1128$, which corresponds to a rise of $7.4\%$. This increase in the variance stems from the fact that the hedging strategy no longer accounts for the presence of jumps. However, the effect is relatively small, which is in line with our previous findings in Panel (A) of Table \[E\_Var\_P\_opt\_S\]: the impact of the investment in the longevity asset under the allocation rule $u_{Y}^{\star}$ on the mean, ROER and variance of the optimal terminal wealth is relatively low in general. Hence, buying the longevity asset on average yields a higher payoff than the riskless investment and leads to some diversification. When hedging a terminal condition linked to the mortality rate in the insurance pool, as discussed in Example \[ex\_H\], the effect observed here is likely to be stronger. In addition, our findings indicate that erroneously ignoring the jumps the force of mortality exhibits would then also lead to a significantly higher variance.\ In Panel (C) of Table \[E\_Var\_P\_opt\_S\] we investigated the effect of setting the jump intensities of $S$ and $\lambda$ to zero, so that all uncertainty stems from Brownian risk. The expected values and variances of $S_{T}$ and $\lambda_{T}$ were kept fixed at the values depicted in Table \[E\_Var\] by adjusting the volatility parameters of the respective Brownian parts.
However, the mean of the optimal terminal wealth changed only slightly, to $1.4399$, the ROER marginally increased to $3.65\%$, and the variance of the optimal terminal wealth slightly changed to $0.1077.$ The same phenomenon is shown in Panel (D): we replaced the constant jump of the Poisson process $N$ in the dynamics of $S$ by standard normally distributed jump sizes while keeping the mean and variance of $S$ the same, and obtained a mean of $1.4365$, an ROER of $3.62\%$ and a variance of $0.1079$. Thus, we tentatively conclude that as long as we know the mean and variance of the stock and of the force of mortality, the expected optimal terminal wealth, the ROER and the variance are robust.\ So far we assumed that the time to maturity of the longevity asset coincides with the insurance horizon. We investigated the change in mean, ROER and variance of the optimal terminal wealth when the time to maturity of the longevity asset, say $T_{L},$ is longer than the insurance horizon, which has been kept fixed at $T=10.$ The results are displayed in Panel (E) and Panel (F) of Table \[E\_Var\_P\_opt\_S\]. We see that neither the expected value, nor the ROER, nor the variance of the optimal terminal wealth changes significantly. This result is important from a practical point of view because one cannot expect to find longevity assets whose times to maturity coincide with the insurance horizon: too few of them are offered and traded. Our results show that picking an asset with a longer time to maturity does, in particular, not add to the variance of the terminal wealth.\ Finally, Figure \[fig:opt\_path\] displays a path of the optimal portfolio process. The jumps of the underlying processes are clearly visible and the path shows a positive trend. In Figure \[fig:strategies\], paths of the optimal dollar amounts to be invested are plotted. We can see that the excess rate of return can become negative, inducing the return of the longevity asset to fall below the riskless rate.
Thus, taking on a short position in $Y$ at several points in time is optimal, and this can be clearly seen in Panel (B). Furthermore, as maturity approaches, the optimal amount invested in the longevity asset increases because the function $B_{\lambda}(\cdot,T)$ tends to zero as $t$ approaches $T$. The economic reason for this investment behavior is that close to maturity the price of the longevity bond converges to $1$ and does not exhibit much variation anymore. Thus, the investment in the longevity bond becomes less risky.\ We conclude by remarking that a numerical analysis incorporating the hedging of a terminal condition should be conducted. In this case, closed-form solutions to the extended HJB system can no longer be obtained, so one needs to resort to more involved numerical methods. We leave this direction for future research.

![Optimal Portfolio Process[]{data-label="fig:opt_path"}](opt_path.png){width="60.00000%"}

Appendix {#appendix .unnumbered}
========

We provide the formulas to calculate the moments of $\lambda_{T}$ displayed in Table \[E\_Var\].
Consider the JCIR process given by , i.e., $$d\lambda_{t}= \big[\beta \tilde{\theta} - (\beta + \kappa \sigma_{\lambda}) \lambda_{t} - \psi_{2} \varrho_{\lambda} \varsigma \big]\ dt + \sigma_{\lambda} \sqrt{\lambda_{t}}\ d\bar{W}_{t} + \int_{\mathbb{R} \setminus \{0\}} \bar{x}\ \tilde{J}_{\bar{X}}(dt,d\bar{x}),$$ with $\lambda_{0} > 0.$ We have that $$\begin{aligned} \begin{split} \mathbb{E}[\lambda_{t}] &= \frac{\beta \theta}{\beta + \kappa \sigma_{\lambda}}\left(1-e^{-(\beta + \kappa \sigma_{\lambda})t}\right) + \lambda_{0} e^{-(\beta + \kappa \sigma_{\lambda})t} + \frac{\varrho_{\lambda} \varsigma}{\beta + \kappa \sigma_{\lambda}}\left(1-e^{-(\beta + \kappa \sigma_{\lambda})t}\right), \\ \text{Var}[\lambda_{t}] &= \frac{\beta \sigma_{\lambda}^{2}\theta}{2(\beta + \kappa \sigma_{\lambda})^{2}}\left(1-e^{-(\beta + \kappa \sigma_{\lambda})t}\right)^{2} + \frac{\lambda_{0} \sigma_{\lambda}^{2}}{\beta + \kappa \sigma_{\lambda}}e^{-(\beta + \kappa \sigma_{\lambda})t} \left(1-e^{-(\beta + \kappa \sigma_{\lambda})t}\right)\\ &\ \ + \frac{\varrho_{\lambda} \varsigma^{2}}{\beta + \kappa \sigma_{\lambda}} \left(1-e^{-2(\beta + \kappa \sigma_{\lambda})t} \right) + \frac{\sigma_{\lambda}^{2} \varrho_{\lambda} \varsigma}{(\beta + \kappa \sigma_{\lambda})^{2}} \left(1-e^{-(\beta + \kappa \sigma_{\lambda})t} \right)\\ &\ \ + \frac{\sigma_{\lambda}^{2} \varrho_{\lambda} \varsigma}{2(\beta + \kappa \sigma_{\lambda})^{2}} \left(e^{-2(\beta + \kappa \sigma_{\lambda})t} -1 \right).
\end{split} \label{eq:mom_lam}\end{aligned}$$ \[mom\_lam\] Denote the imaginary unit by $\i.$ For notational simplicity, we define the following: $$\begin{aligned} a &:= \beta + \kappa \sigma_{\lambda}, \\ b &:= \frac{\beta \theta}{a}, \\ g(t,y) &:= 1 + y\tilde{g}(t)\i,\\ \tilde{g}(t) &:= - \frac{\sigma_{\lambda}^{2}}{2a}\left(1-e^{-at} \right), \\ \psi(t,y) &:= \frac{ye^{-at}\i}{g(t,y)}, \\ f_{1}(t,y) &:= g(t,y)^{-\frac{2ab}{\sigma_{\lambda}^{2}}}, \\ f_{2}(t,y) &:= \exp\left(\int_{0}^{t} \int_{0}^{\infty} \left(e^{\bar{x}\psi(s,y)}-1 \right)\ \vartheta_{\bar{X}}(d\bar{x})ds\right).\end{aligned}$$ According to [@rudiger], the characteristic function of $\lambda_{t}$ reads $$\mathbb{E}[e^{y \lambda_{t} \i}] = f_{1}(t,y)\ e^{\lambda_{0} \psi(t,y)}\ f_{2}(t,y).$$ So it holds that $$\begin{aligned} \frac{\partial}{\partial y} \mathbb{E}[e^{y \lambda_{t} \i}] &= f_{1}(t,y) e^{\lambda_{0} \psi(t,y)} f_{2}(t,y)\left[-\frac{2ab g'(t,y)}{\sigma_{\lambda}^{2}g(t,y)} + \frac{\lambda_{0}e^{-at}\i}{g(t,y)^{2}} + \int_{0}^{t} \int_{0}^{\infty} \frac{e^{\bar{x} \psi(s,y) - as}\bar{x} \i}{g(s,y)^{2}} \vartheta_{\bar{X}}(d\bar{x}) ds \right] \notag \\ &= \mathbb{E}[e^{y \lambda_{t} \i}] \cdot \left[\frac{b (1-e^{-at})\i}{g(t,y)} + \frac{\lambda_{0}e^{-at}\i}{g(t,y)^{2}} + \int_{0}^{t} \int_{0}^{\infty} \frac{e^{\bar{x} \psi(s,y) - as}\bar{x} \i}{g(s,y)^{2}} \vartheta_{\bar{X}}(d\bar{x}) ds \right] \label{eq:bracket},\end{aligned}$$ and from this we easily deduce the first moment of $\lambda_{t}$: $$\begin{aligned} \mathbb{E}[\lambda_{t}] &= (-\i)^{1} \cdot \frac{\partial}{\partial y} \mathbb{E}[e^{y \lambda_{t} \i}]\Bigg|_{y=0} \\ &=(-\i)^{1}\cdot 1\cdot \left[b (1-e^{-at})\i + \lambda_{0} e^{-at}\i + \i \frac{\varrho_{\lambda} \varsigma}{a} (1-e^{-at}) \right] \\ &= b (1-e^{-at}) + \lambda_{0} e^{-at} + \frac{\varrho_{\lambda} \varsigma}{a} (1-e^{-at}).\end{aligned}$$ Re-substitution of the abbreviations $a,b$ yields $\mathbb{E}[\lambda_{t}]$ given by .
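The moment-extraction step used here, $\mathbb{E}[\lambda_{t}^{n}] = (-\i)^{n}\, \partial_{y}^{n}\, \mathbb{E}[e^{y \lambda_{t} \i}]\big|_{y=0}$, can be sanity-checked numerically on any distribution with an elementary characteristic function. A minimal sketch in Python, using a Poisson distribution purely as an illustrative stand-in (not the JCIR law itself), with derivatives at $y=0$ approximated by central finite differences:

```python
import cmath

MU = 2.0  # Poisson mean, chosen arbitrarily for the illustration

def phi(y):
    """Characteristic function of a Poisson(MU) random variable."""
    return cmath.exp(MU * (cmath.exp(1j * y) - 1.0))

h = 1e-5
# First moment: E[X] = (-i) * phi'(0), with phi'(0) by central difference
m1 = ((-1j) * (phi(h) - phi(-h)) / (2 * h)).real
# Second moment: E[X^2] = (-i)^2 * phi''(0) = -phi''(0)
m2 = (-(phi(h) - 2.0 * phi(0.0) + phi(-h)) / h ** 2).real
print(m1, m2, m2 - m1 ** 2)   # ~2.0, ~6.0, ~2.0 (Poisson: mean MU, variance MU)
```

The same finite-difference check can be run against the JCIR characteristic function $f_{1}\, e^{\lambda_{0}\psi}\, f_{2}$ once a concrete jump measure $\vartheta_{\bar{X}}$ is fixed.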
To calculate the second moment of $\lambda_{t}$, we need to determine the second derivative of the characteristic function w.r.t. $y$. The structure of the first derivative given by will turn out useful. For notational simplicity, we abbreviate the term in square brackets in by $[...]$ in the sequel. We find $$\begin{aligned} \frac{\partial^{2}}{\partial y^{2}} \mathbb{E}[e^{y \lambda_{t} \i}] &= \frac{\partial}{\partial y}\left( \mathbb{E}[e^{y \lambda_{t} \i}] \cdot [...] \right) \\ &= \left(\frac{\partial}{\partial y} \mathbb{E}[e^{y \lambda_{t} \i}]\right) \cdot [...] + \mathbb{E}[e^{y \lambda_{t} \i}]\cdot \left(\frac{\partial}{\partial y}[...]\right) \\ &= \mathbb{E}[e^{y \lambda_{t} \i}] \cdot [...]^{2} + \mathbb{E}[e^{y \lambda_{t} \i}]\cdot \left(\frac{\partial}{\partial y}[...]\right) \\ &= \mathbb{E}[e^{y \lambda_{t} \i}] \left([...]^{2} + \frac{\partial}{\partial y}[...] \right).\end{aligned}$$ We need to calculate that $$\begin{aligned} &\frac{\partial [...]}{\partial y} \\ &= \frac{b (1-e^{-at}) \tilde{g}(t)}{g(t,y)^{2}} + \frac{2\lambda_{0} e^{-at} \tilde{g}(t)}{g(t,y)^{3}}- \int_{0}^{t} \int_{0}^{\infty} \left(\bar{x}^{2} \frac{e^{\bar{x} \psi(s,y) - 2as}}{g(s,y)^{4}} - \frac{2\ e^{\bar{x} \psi(s,y) - as} \tilde{g}(s)}{g(s,y)^{3}} \bar{x} \right) \vartheta_{\bar{X}}(d\bar{x}) ds,\end{aligned}$$ in order to argue that the second moment is given by $$\begin{aligned} &\mathbb{E}[(\lambda_{t})^{2}] = (-\i)^{2} \cdot \frac{\partial^{2}}{\partial y^{2}} \mathbb{E}[e^{y \lambda_{t} \i}] \Bigg|_{y=0} \\ &= - \Big(-\mathbb{E}[\lambda_{t}]^{2} + b(1-e^{-at}) \tilde{g}(t)+2\lambda_{0} e^{-at} \tilde{g}(t) + \frac{\varsigma^{2} \varrho_{\lambda}}{a}(e^{-2at}-1) + 2 \frac{\varrho_{\lambda} \varsigma}{a} \tilde{g}(t)\\ &\ \ - \frac{\sigma_{\lambda}^{2} \varrho_{\lambda} \varsigma}{2a^{2}}(e^{-2at}-1)\Big) \\ &= \mathbb{E}[\lambda_{t}]^{2} - b(1-e^{-at}) \tilde{g}(t)-2\lambda_{0} e^{-at} \tilde{g}(t) - \frac{\varsigma^{2} \varrho_{\lambda}}{a}(e^{-2at}-1) - 2 
\frac{\varrho_{\lambda} \varsigma}{a} \tilde{g}(t) + \frac{\sigma_{\lambda}^{2} \varrho_{\lambda} \varsigma}{2a^{2}}(e^{-2at}-1).\end{aligned}$$ Finally, subtraction of the square of the first moment from the previous expression and re-substitution of the abbreviations $a,b,\tilde{g}(t)$ gives the formula for the variance of $\lambda_{t}$ provided by . [^1]: FB acknowledges support by *Stiftung WiMa Ulm*.
--- abstract: 'The hopes for scalable quantum computing rely on the “threshold theorem”: once the error per qubit per gate is below a certain value, the methods of quantum error correction allow indefinitely long quantum computations. The proof is based on a number of assumptions, which are supposed to be satisfied *exactly*, like axioms, e.g. *zero* undesired interactions between qubits, etc. However in the physical world no continuous quantity can be *exactly* zero, it can only be more or less small. Thus the “error per qubit per gate” threshold must be complemented by the required precision with which each assumption should be fulfilled. This issue was never addressed. In the absence of this crucial information, the prospects of scalable quantum computing remain uncertain.' author: - 'M.I. Dyakonov' title: Revisiting the hopes for scalable quantum computation --- The idea of quantum computing is to store information in the values of $2^N$ complex amplitudes describing the wavefunction of $N$ two-level systems (qubits), and to process this information by applying unitary transformations (quantum gates), that change these amplitudes in a precise and controlled manner [@steane1]. The value of $N$ needed to have a useful machine is estimated as $10^3$ or more. Note that even $2^{1000}\sim 10^{300}$ is much, much greater than the number of protons in the Universe. Since the qubits are always subject to various types of noise, and the gates cannot be perfect, it is widely recognized that large scale, i.e. useful, quantum computation is impossible without implementing error correction. This means that the $10^{300}$ continuously changing quantum amplitudes of the grand wavefunction describing the state of the computer must closely follow the desired evolution imposed by the quantum algorithm. The random drift of these amplitudes caused by noise, gate inaccuracies, unwanted interactions, etc., should be efficiently suppressed. 
Taking into account that all possible manipulations with qubits are not exact, it is not obvious at all that error correction can be done, even in principle, in an analog machine whose state is described by at least $10^{300}$ continuous variables. Nevertheless, it is generally believed (for example, see [@arda]) that the prescriptions for fault-tolerant quantum computation [@shor1; @presk; @got; @steane2] using the technique of error-correction by encoding [@shor2; @steane3] and concatenation (recursive encoding) give a solution to this problem. By active intervention, errors caused by noise and gate inaccuracies can be detected and corrected during the computation. The so-called “threshold theorem” [@ben-or; @kitaev; @knill] says that, once the error per qubit per gate is below a certain value estimated as $10^{-6} - 10^{-4}$, indefinitely long quantum computation becomes feasible. Thus, the theorists claim that the problem of quantum error correction is resolved, at least in principle, so that physicists and engineers have only to do more hard work in finding the good candidates for qubits and approaching the accuracy required by the threshold theorem [@alif; @knill1]. However, as it was clearly stated in the original work (but largely ignored later, especially in presentations to the general public, Ref. [@knill1] is one example), the mathematical proof of the threshold theorem is founded on a number of assumptions (axioms):

1.  Qubits can be prepared in the $|00000...00\rangle$ state. New qubits can be prepared on demand in the state $|0\rangle$.

2.  The noise in qubits, gates, and measurements is uncorrelated in space and time.

3.  No undesired action of gates on other qubits.

4.  No systematic errors in gates, measurements, and qubit preparation.

5.  No undesired interaction between qubits.

6.  No “leakage” errors.

7.  Massive parallelism: gates and measurements are applied simultaneously to many qubits,

and some others.\ While the threshold theorem is a truly remarkable mathematical achievement, one would expect that the underlying assumptions, considered as axioms, would undergo a close scrutiny to verify that they can be reasonably approached in the physical world. Moreover, the term “reasonably approached” should have been clarified by indicating with what precision each assumption should be fulfilled. So far, this has never been done (assumption 2 being an exception [@correlation; @staudt]), if we do not count the rather naive responses provided in the early days of quantum error correction [@presk2; @comment1; @presk4]. It is quite normal for a theory to disregard small effects whose role can be considered as negligible. But not when one specifically deals with errors and error correction. A method for correcting *some* errors on the assumption that other (unavoidable) errors are *non-existent* is not acceptable, because it uses fictitious ideal elements as a kind of golden standard [@dyakonov1]. Below are some trivial observations regarding manipulation and measurement of continuous variables. Suppose that we want to know the direction of a classical vector, like the compass needle. First, we never know exactly what our coordinate system is. We choose the $x,y,z$ axes related to some physical objects with the $z$ axis directed, say, towards the Polar Star; however, neither this direction, nor the angles between our axes, can be defined with infinite precision. Second, the orientation of the compass needle with respect to the chosen coordinate system cannot be determined exactly.
So, when we say that our needle makes an angle $\theta = 45^\circ$ with the $z$ axis, we understand that $\cos \theta$ is not exactly equal to the irrational number $1/\sqrt{2}$; rather, it is somewhere around this value, within some interval determined by our ability to measure angles and other uncertainties. We also understand that we cannot manipulate our needles perfectly, that no two needles can ever point exactly in the same direction, and that consecutive measurements of the direction of the same needle will give somewhat different results. In the physical world, continuous quantities can be neither measured nor manipulated exactly. In the spirit of the purely mathematical language of the quantum computing literature, this can be formulated in the form of the following\ **Axiom 1.** No continuous quantity can have an exact value. *Corollary.* No continuous quantity can be exactly equal to zero.\ To a mathematician, this might sound absurd. Nevertheless, this is the unquestionable reality of the physical world we live in [@comment3]. Note that *discrete* quantities, like the number of students in a classroom or the number of transistors in the on-state, can be known exactly, and *this* makes the great difference between the digital computer and the hypothetical quantum computer [@onoff]. Axiom 1 is crucial whenever one deals with continuous variables (quantum amplitudes included). Each step in our technical instructions should contain an indication of the needed precision. Only then will the engineer be in a position to decide whether this is possible or not. All of this is quite obvious. Apparently, things are not so obvious in the magic world of quantum mechanics. There is a widespread belief that the $|1\rangle$ and $|0\rangle$ states “in the computational basis” are something absolute, akin to the on/off states of an electrical switch, or of a transistor in a digital circuit, but with the advantage that one can use quantum superpositions of these states.
It is sufficient to ask: “With respect to [*which*]{} axis do we have a spin-up state?” to see that there is a serious problem with such a point of view. It should be stressed once more that the coordinate system, and hence the computational basis, cannot be exactly defined, and this has nothing to do with quantum mechanics. Suppose that, again, we have chosen the $z$ axis towards the Polar Star, and we measure the $z$-projection of the spin with a Stern-Gerlach beam-splitter. There will be inevitably some (unknown) error in the alignment of the magnetic field in our apparatus with the chosen direction. Thus, when we measure some quantum state and get $(0)$, we never know exactly to what state the wavefunction has collapsed. Presumably, it will collapse to the spin-down state with respect to the (not known exactly) direction of the magnetic field in our beam-splitter. However, with respect to the chosen $z$ axis (whose direction is not known exactly either) the wavefunction will always have the form $a|0\rangle+b|1\rangle$, where, hopefully, the unknown $b$ is small, $|b|^2\ll 1$. Another measurement with a similar instrument, or a consecutive measurement with the same instrument will give a different value of $b$. Quite obviously, the unwanted admixture of the $|1\rangle$ state is an error that *cannot be corrected*, since (contrary to the assumption 1 above) we can never have the standard *exact* $|0\rangle$ and $|1\rangle$ states to make the comparison. Thus, with respect to the consequences of imperfections, the situation is quite similar to what we have in classical physics. The classical statement “the exact direction of a vector is unknown” is translated into quantum language as “there is an unknown admixture of unwanted states”. 
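The size of the unwanted admixture can be estimated with elementary spin-1/2 algebra: preparing “spin down” along an axis tilted by a small angle $\epsilon$ from the nominal $z$ axis gives $|b|^{2} = \sin^{2}(\epsilon/2)$. A short illustration in Python (the 1 mrad misalignment is a hypothetical figure, not a claim about any real apparatus):

```python
import math

def admixture(eps):
    """|b|^2 in a|0> + b|1> for a spin-1/2 state prepared 'down' along
    an axis tilted by angle eps (radians) from the nominal z axis."""
    return math.sin(eps / 2.0) ** 2

eps = 1e-3                   # hypothetical 1 mrad alignment error
print(admixture(eps))        # ~2.5e-7: small, but not zero
```

The admixture vanishes only at $\epsilon = 0$ exactly, which, by the argument in the text, can never be achieved with classical instruments.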
The pure state $|0\rangle$ can never be achieved, just as a classical vector can never be made to point *exactly* in the $z$ direction, and for the same reasons, since quantum measurements and manipulations are done with classical instruments. Clearly, the same applies to [*any*]{} desired state. Thus, when we contemplate the “cat state” $(|0000000\rangle + |1111111\rangle)/\sqrt{2}$, we should not take the $\sqrt{2}$ too seriously, and we should understand that *some* (maybe very small) admixture of e.g. the $|0011001\rangle$ state must necessarily be present.\ [*Exact quantum states do not exist. Some admixtures of all possible states to any desired state are unavoidable.*]{}\ This fundamental fact described by Axiom 1 (nothing can be *exactly* zero!) should be taken into account in any prescriptions for quantum error correction. At first glance, it may seem that there *are* possibilities for achieving a desired state with arbitrary precision. Indeed, using nails and glue, or a strong magnetic field, we can fix the compass needle so that it will not be subject to noise. We still cannot determine exactly the orientation of the needle with respect to our chosen coordinates, but we can take the needle’s direction as the $z$ axis. However: 1) we cannot align another fixed needle in exactly the same direction, and 2) we cannot use fixed needles in an analog machine; to do this, they must be detached to allow for their free rotation. Quite similarly, in the quantum case we can apply a strong enough magnetic field to our spin at a low enough temperature, and wait long enough for the relaxation processes to establish thermodynamic equilibrium. Apparently, we will then achieve a spin-down $|0\rangle$ state with any desired accuracy (provided there is *no interaction* with other spins in our system, which is hardly possible). However, “spin-down” refers to the (not exactly known) direction of the magnetic field at the spin location.
Because of the inevitable inhomogeneity of the magnetic field, we cannot use the direction of the field at the spin location to define the computational basis, since other spins within the same apparatus will be oriented slightly differently. Moreover, if we want to manipulate this spin, we must either switch off the magnetic field (during this process the spin state will necessarily change in an uncontrolled manner), or apply a resonant ac field at the spin precession frequency, making the two spin levels degenerate in the rotating frame. The high precision acquired in equilibrium will be immediately lost. Likewise, an atom at room temperature may be with high accuracy considered to be in its ground state. Atoms at different locations will be always subject to some fields and interactions, which mix the textbook ground and excited states. Also, such an atom is not yet a two-level system. In order for it to become a qubit, we must apply a resonant optical field, which will couple the ground state with an excited state. The accuracy of the obtained states will depend on the precision of the amplitudes, frequencies, and duration of optical pulses. This precision might be quite sufficient for many applications, but certainly it can never be *infinite*. Abstractions are intrinsic to Mathematics, and using them is probably the only way to develop a theoretical understanding of the physical world. However, when we specifically deal and try to fight with imperfections, noise, and errors, we should be extremely vigilant about mixing the abstractions and the physical reality, and especially about *attributing* our abstractions, like exact quantum states, $\sqrt 2$, decoherence free subspaces, etc. to the physical reality [@comment4]. Of course, *if* the assumptions underlying the threshold theorem are approached with a high enough precision, the prescriptions for error-correction could indeed work. 
So, the real question is: *what* is the required precision with which each assumption should be fulfilled to make scalable quantum computing possible? How small should the undesired, but unavoidable, imperfections be: the interaction between qubits, the influence of gates on other qubits [@comment5], the systematic errors of gates and measurements [@comment1], the leakage errors, and the random and systematic errors in the preparation of the initial $|0\rangle$ states? Quite surprisingly, not only is there no answer to these most crucial questions in the existing literature, but they have never even been seriously discussed! Obviously, this gap must be filled, and the “error per qubit per gate” threshold must be complemented by indicating the required precision for each assumption. Until this is done, one can only speculate about the final outcome of such research. The optimistic prognosis would be that some additional threshold values $\epsilon_1, \epsilon_2 ...$ for the corresponding precisions will be established, and that these values will be shown neither to depend on the size of the computation nor to be extremely small. In this case, the dream of factorizing large numbers by Shor’s algorithm might become reality in some distant future. The pessimistic view is that the required precision must increase with the size of the computation, most probably in an exponential manner, and this would undermine the very idea of quantum computing. Classical physics gives us some enlightening examples regarding attempts to impose a prescribed evolution on quite simple continuous systems. For example, consider some number of hard balls in a box. At $t=0$ all the balls are on the left side and have some initial velocities. We let the system run for some time, and at $t=t_0$ we simultaneously reverse all the velocities. Classical mechanics tells us that at $t=2t_0$ the balls will return to their initial positions in the left side of the box. Will this ever happen in reality, or even in computer simulations?
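A far simpler reversible chaotic system already gives a feel for the answer. A minimal sketch in Python, using Arnold's cat map on the unit torus as a stand-in for the hard-ball gas, with a tiny perturbation $\epsilon$ playing the role of the imprecision of the velocity reversal:

```python
import math

# Arnold's cat map on the unit torus: reversible, deterministic, chaotic.
def forward(x, y):
    return (x + y) % 1.0, (x + 2.0 * y) % 1.0

def inverse(x, y):          # exact inverse of the map above
    return (2.0 * x - y) % 1.0, (y - x) % 1.0

def torus_dist(p, q):       # max-norm distance with wrap-around
    return max(min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(p, q))

x0, y0, n, eps = 0.2, 0.3, 20, 1e-12
x, y = x0, y0
for _ in range(n):
    x, y = forward(x, y)
x = (x + eps) % 1.0         # tiny imprecision at the "velocity reversal"
for _ in range(n):
    x, y = inverse(x, y)

err = torus_dist((x, y), (x0, y0))
lam = (3.0 + math.sqrt(5.0)) / 2.0   # stretching factor per step, ~2.618
print(err, eps * lam ** n)           # err is of order eps * lam**n, not eps
```

Each step stretches errors by a factor $\lambda = (3+\sqrt{5})/2 \approx 2.618$, so a reversal error $\epsilon$ grows to roughly $\epsilon\,\lambda^{n}$ after $n$ return steps; recovering the initial state therefore requires precision exponential in $n$, just as for the hard balls.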
The known answer is: Yes, *provided* the precision of the velocity inversion is exponential in the number of collisions during the time $2t_0$. If there is some slight noise during the whole process, it should be exponentially small too. Thus, if there are only 10 collisions, our task is difficult, but it still might be accomplished. But if one needs 1000 collisions, it becomes impossible, not because Newton's laws are wrong, but rather because the final state is strongly unstable against very small variations of the initial conditions and very small perturbations. This classical example is not directly relevant to the quantum case (see Ref. [@gutz] for the relation between classical and quantum chaos). However, it might give a hint to explain why, although some beautiful and hard experiments with small numbers of qubits have been done (see Ref. [@experiments] for recent results with 3 to 8 qubits), the goal of implementing a concatenated quantum error-correcting code with 50 qubits (set by the ARDA Experts Panel [@arda] for the year 2012) is still nowhere in sight. There are two recurrent themes in discussions of the prospects for scalable quantum computing. One of them is: “Because there are no known fundamental obstacles to such scalability, it has been suggested that failure to achieve it would reveal new physics” [@knill1]. An alternative suggestion is that such a failure would reveal insufficient understanding of the role of uncertainties, and the inconsistency of a theory of error correction that carelessly replaces some *small* quantities by *zeros* [@comment6].
The other one consists in directly linking the possibility of scalable quantum computing to the laws of Quantum Mechanics, so that we are forced to either admit or reject both things together: “The accuracy threshold theorem for quantum computation establishes that scalability is achievable provided that the currently accepted principles of quantum physics hold and that the noise afflicting a quantum computer is neither too strong nor too strongly correlated” [@presk1; @comm]. Obviously, one can have full confidence in the principles of Quantum Mechanics, which are confirmed by millions of experimental facts, and at the same time have doubts about a theory of fault-tolerance which considers some unavoidable errors as non-existent. In summary, the proof of the threshold theorem is founded on a number of assumptions that are supposed to be fulfilled exactly. Since this is not possible, an examination of the required precision with which these assumptions should hold is indispensable. The prospects of scalable quantum computing crucially depend on the results of such a study. A. Steane, Reports Prog. Phys. [**61**]{}, 117 (1998); arXiv:quant-ph/9708022 A Quantum Information Science and Technology Roadmap, Part 1: Quantum Computation. *Report of the Quantum Information Science and Technology Experts Panel*, http://qist.lanl.gov/qcomp-map.shtml P. Shor, in: [*37th Symposium on Foundations of Computing*]{}, IEEE Computer Society Press, 1996, p. 56; arXiv:quant-ph/9605011 J. Preskill, in: [*Introduction to Quantum Computation and Information,*]{} H.-K. Lo, S. Papesku, and T. Spiller, eds., Singapore: World Scientific (1998), p. 213; arXiv:quant-ph/9712048 D. Gottesman, in: [*Quantum Computation: A Grand Mathematical Challenge for the Twenty-First Century and the Millennium*]{}, S. J. Lomonaco, Jr., ed. p. 221, American Mathematical Society, Providence, Rhode Island (2002); arXiv:quant-ph/0004072 A. M. 
Steane, in: [*Decoherence and its implications in quantum computation and information transfer*]{}, A. Gonis and P. Turchi (eds), p. 284, IOS Press, Amsterdam (2001); arXiv:quant-ph/0304016 A. R. Calderbank and P. W. Shor, Phys. Rev. A **54**, 1098 (1996) A. M. Steane, Phys. Rev. Lett. **77**, 793 (1996) D. Aharonov and M. Ben-Or, in: *Proc. 29th Annual ACM Symposium on the Theory of Computation*, p. 176, New York, ACM Press, (1998); arXiv:quant-ph/9611025; arXiv:quant-ph/9906129 A. Yu. Kitaev, Russ. Math. Surv., **52**, 1191 (1997); A. Yu. Kitaev. Quantum error correction with imperfect gates, in: *Quantum Communication, Computing, and Measurement*, eds: O. Hirota, A.S. Holevo and C.M. Caves, p. 181, Plenum Press, New York (1997) E. Knill, R. Laflamme, and W. Zurek, Proc. R. Soc. Lond. A **454**, 365 (1998); arXiv: quant-ph/9702058 P. Aliferis, D. Gottesman, and J. Preskill, Quantum Inf. Comput. **8**, 181 (2008); arXiv:quant-ph/0703264: “The theory of fault-tolerant quantum computation establishes that a noisy quantum computer can simulate an ideal quantum computer accurately. In particular, the quantum accuracy threshold theorem asserts that an arbitrarily long quantum computation can be executed reliably, provided that the noise afflicting the computer’s hardware is weaker than a certain critical value, the *accuracy threshold*.” E. Knill, Nature, **463**, 441 (2010): “As it turns out, it is possible to digitize quantum computations arbitrarily accurately, using relatively limited resources, by applying quantum error-correction strategies developed for this purpose.” No mention of any restrictions. Many publications were devoted to the study of different noise models in the context of quantum error correction, see Ref. [@staudt] for a review, and it was shown that assumption 2 can be somewhat relaxed by allowing for certain types of noise correlations. D. Staudt, arXiv:1111.1417 (2011) J. Preskill, Proc. Roy. Soc. 
London **A454**, 469 (1998); arXiv:quant-ph/9705031: “ In principle, systematic errors can be understood and eliminated.” There is not and never will be a single device dealing with continuous quantities that makes *zero* systematic errors. Moreover, for reasons that are not yet well understood, all electronic devices, even the most precise that we have, the atomic clock, suffer from the so-called flicker or 1/f noise. The parameters of the device slowly but chaotically change in time, and the longer we wait the more changes we see. J. Preskill, in: *Introduction to Quantum Computation*, H.-K. Lo, S. Popescu, and T. P. Spiller (eds), World Scientific (1998); arXiv:quant-ph/9712048: “Future quantum engineer will face the challenge of designing devices such that qubits in the same block are very well isolated from one another.” Before designing devices, he would like to know *how well* the qubits should be isolated, but he will not find any indications in the existing literature. M.I. Dyakonov, in: *Future Trends in Microelectronics. Up the Nano Creek*, S. Luryi, J. Xu, and A. Zaslavsky (eds), Wiley (2007), p. 4; arXiv:quant-ph/0610117 We are leaving aside the philosophical/semantic question of whether *in reality* the variable does have some exact value, and it is only the imperfection of our instruments that prevents us from *knowing* it exactly. In accordance with Axiom 1, there *is* some current in the off-state. However because of the enormous value of the on/off ratio this is not a problem. J. Preskill, Proc. Roy. Soc. London **A454**, 385 (1998); arXiv:quant-ph/9705031 As far as we know, the postulates of Quantum Mechanics are true. However, they are true in the same sense as is true the statement “The diagonal of a unit square is equal to $\sqrt{2}$”. It would be very naive to think that this literally applies to some physically *real* unit square which we can deal with. 
Similarly, the *exact* $|0\rangle$ state is a mathematical abstraction that has no place in reality. Just as the $\sqrt{2}$ diagonal, it can be only *approached with a certain limited precision*. Another mathematical abstraction is “arbitrarily accurately” [@knill1]. This notion does not exist in the vocabulary of a physicist or an engineer. Due to the required massive parallelism, thousands of gates, which in practice are electromagnetic pulses, will be applied simultaneously, so that the quantum computer will resemble a huge microwave oven. It must be a rather difficult problem for the future quantum engineer to exclude the unwanted action of gates on other qubits. M. C. Gutzwiller, *Chaos in Classical and Quantum Mechanics*, Springer, New York (1990) P. Schindler et al., Science, **332**, 1059 (2011); E. Lucero et al., arXiv:1202.5707 (2012); M. D. Reed et al., Nature, **482**, 382 (2012); Xing-Can Yao et al., arXiv:1202.5459 (2012) A similar “approximation” would allow the hard balls in a box to always return to their initial positions after velocity inversion. J. Preskill, arXiv:1207.6131 (2012) It should have been added: “and also that the assumptions on which the theorem relies are satisfied exactly”.
--- abstract: 'The significant excess recently found by the CDF Collaboration in the inclusive jet cross section for jet transverse energies $E_{T} \ge 200$ GeV over current QCD predictions can be explained either by possible production of excited bosons (excited gluons, weak bosons, Higgs scalars, etc.) or by that of excited quarks. The masses of the excited boson and the excited quark are estimated to be around 1600 GeV and 500 GeV, respectively.' address: - 'Department of Physics, Saitama Medical College, Kawakado, Moroyama, Saitama 350-04, Japan' - 'Institute for Nuclear Study, University of Tokyo, Midori-cho, Tanashi, Tokyo 188, Japan' author: - Keiichi Akama - Hidezumi Terazawa title: | Has the Substructure of Quarks Been Found\ by the Collider Detector at Fermilab? --- The CDF Collaboration at the Fermilab Tevatron collider [@1] has reported data on the inclusive jet cross section for jet transverse energies, $E_{T}$, from 15 to 440 GeV, in the pseudorapidity region, $0.1 \le \mid \eta \mid \le 0.7$, with a significant excess over current predictions based on perturbative QCD calculations for $E_{T} \ge 200$ GeV, which may indicate the presence of quark substructure at the compositeness energy scale, $\Lambda_{C}$, of the order of 1.6 TeV. It can be taken as an exciting and intriguing historical discovery of the substructure of quarks (and leptons), which has long been predicted, or as the first evidence for the composite models of quarks (and leptons), which have been proposed since the middle of the 1970’s [@2; @3; @4]. Note that such a relatively low energy scale for $\Lambda_{C}$, of the order of 1 TeV, has recently been anticipated either theoretically [@5] or by precise comparison between currently available experimental data and calculations in the composite models of quarks (and leptons) [@6]. However, the CDF experimental observation may certainly be taken as more direct evidence for the substructure of quarks.
The purpose of this letter is to explain the observed excess either by possible production of excited bosons (excited gluons, weak bosons, Higgs scalars, etc.) or by that of excited quarks and to estimate the masses of the excited boson and the excited quark to be around 1600 GeV and 500 GeV, respectively. An important motivation for composite models of quarks and leptons is to explain the repetition of the generation structure in the quark and lepton spectrum. The repetition of isodoublets of quarks and leptons suggests the possible existence of an isodoublet of subquarks, $w_{i} (i = 1,2)$ (called “wakems” standing for weak and electromagnetic), while the repetition of color-quartets of quarks and leptons does that of a color-quartet of subquarks, $C_{\alpha}$ ($\alpha = 0,1,2,3$) (called “chroms” standing for colors) [@2]. Then, the quarks ($q$) and leptons ($l$) are taken as composites of subquarks including $w_{i}$ and $C_{\alpha}$. In this picture, the weak bosons ($W^{\pm}$ and $Z$), the gluons ($G^{a}$, $a = 1,2,\cdots,8$), the Higgs scalars $(\phi^{+}, \phi^{0})$ \[and even the photon ($\gamma$)\] can also be taken as composites of a subquark and an antisubquark such as $w_{i}$ and $\bar{w}_{j}$ or $C_{\alpha}$ and $\bar{C}_{\beta}$. In these models, we expect that there may appear not only exotic states and excited states of the fundamental fermions but also those of the fundamental bosons [@7]. Their expected properties and various effects have been studied extensively in Ref. [@8]. In what follows, we shall discuss the results of our investigation of the leading-order effects of such excited quarks and bosons on the inclusive jet production cross section in $p \bar{p}$ scattering, $p \bar{p} \rightarrow$ jet + anything. Let us first consider excited bosons, or bosonic composites in more generality.
Let us denote the vector and color-octet, vector and color-singlet, scalar and color-octet, and scalar and color-singlet bosonic composites by $V_{\mu}^{a}$, $V_{\mu}$, $S^{a}$, and $S$, respectively. Then, the dimensionless couplings between these bosonic composites and quarks are given by the following interaction Lagrangian: $$\begin{aligned} L_{\rm int} & = & g_{\scriptscriptstyle V8} \overline{q} \frac{1}{2} \lambda_{a} V_\mu^{a}\gamma^\mu (\eta_{\scriptscriptstyle L8} \gamma_{\scriptscriptstyle L} + \eta_{\scriptscriptstyle R8} \gamma_{\scriptscriptstyle R}) q + g_{\scriptscriptstyle S8} \overline{q} \frac{1}{2} \lambda_{a} S^{a}q \nonumber \\ & + & g_{\scriptscriptstyle V1} \overline{q} \frac{1}{2} \lambda_{0} V_\mu\gamma^\mu (\eta_{\scriptscriptstyle L1} \gamma_{\scriptscriptstyle L} + \eta_{\scriptscriptstyle R1} \gamma_{\scriptscriptstyle R}) q + g_{\scriptscriptstyle S1} \overline{q} \frac{1}{2} \lambda_{0} Sq \label{eq:1}\end{aligned}$$ where $\gamma_{\scriptscriptstyle L}=(1-\gamma_5)/2$ and $\gamma_{\scriptscriptstyle R}=(1+\gamma_5)/2$; $g_{\scriptscriptstyle V8}$, $g_{\scriptscriptstyle S8}$, $g_{\scriptscriptstyle V1}$, and $g_{\scriptscriptstyle S1}$ are coupling constants; $\lambda_{a}$ ($a=1,2,\cdots, 8$) are the Gell-Mann matrices for color SU(3); $\lambda_{0}$ is $\sqrt{2/3}$ times the $3 \times 3$ unit matrix; and $(\eta_{\scriptscriptstyle L8}, \eta_{\scriptscriptstyle R8})$ or $(\eta_{\scriptscriptstyle L1}, \eta_{\scriptscriptstyle R1}) =$ $(1,1)$, $(1,-1)$, $(1,0)$, and $(0,1)$ for the vector, axial vector, left-handed, and right-handed couplings, respectively. $V_{\mu}^{a}$ and $V_{\mu}$ are Hermitian fields while $S^{a}$ and $S$ are in general complex. These interactions respect the chiral symmetry of quarks.
Note that the dimensionless coupling between the gluons, $G^{a}$, and $V^{a}$ must have the form $G^{a\mu \nu} (D_{\mu} V_{\nu}^{a} - D_{\nu} V_{\mu}^{a})$ and, therefore, has no physical effect since it can be absorbed into the kinetic term $(G_{\mu \nu}^{a})^{2}$ after diagonalization of $G^{a}$ and $V^{a}$. Also note that there exist no dimensionless couplings of $G^{a}$ and $V$, $G^{a}$ and $S^{a}$, or $G^{a}$ and $S$. Therefore, these bosonic composites contribute to $p \bar{p}$ scattering only through $q \bar{q} \rightarrow q \bar{q}$ scattering and its crossed channels. Let ($s,t,u$) be the Mandelstam variables for the elementary process of $q \bar{q} \rightarrow q \bar{q}$ scattering and $z = \cos{\theta}$, with $\theta$ the scattering angle in the center-of-mass system. Then, the differential cross section for the scattering is given by $$\begin{aligned} \frac{d \sigma}{dz} = \frac{1}{36} \cdot \frac{1}{32 \pi s} \big[A_{L} (s,t,u) + A_{R} (s,t,u) + 2B (s,t,u) + 2B (t,s,u)\big] \label{eq:2}\end{aligned}$$ where $$\begin{aligned} A_{x} (s,t,u) & = & 4u^{2} \bigg\{ 2 \mid V_{8}^{xx}(s) \mid^{2} +2 \mid V_{8}^{xx}(t) \mid^{2} +9 \mid V_{1}^{xx}(s) \mid^{2} +9 \mid V_{1}^{xx}(t) \mid^{2} \nonumber \\ & & \qquad + 2Re \bigg[- \frac{2}{3} V_{8}^{xx}(s)^{*} V_{8}^{xx}(t) + 4V_{8}^{xx}(s)^{*} V_{1}^{xx}(t) \nonumber \\ & & \qquad \qquad \qquad \qquad + 4V_{1}^{xx}(s)^{*} V_{8}^{xx}(t) + 3V_{1}^{xx}(s)^{*} V_{1}^{xx}(t) \bigg] \bigg\}, \ \ \ (x=L,R) \label{eq:3} \\ B(s,t,u) & = & t^{2} \bigg\{ 4\big[2 \mid V_{8}^{LR}(s) \mid^{2} + 9 \mid V_{1}^{LR}(s) \mid^{2}\big] + t^{2} \big[2 \mid S_{8}(t) \mid^{2} + 9 \mid S_{1}(t) \mid^{2}\big] \nonumber \\ & & \qquad - 4 Re \bigg[-\frac{2}{3} V_{8}^{LR}(s)^{*} S_{8}(t) + 4V_{8}^{LR}(s)^{*} S_{1}(t) \nonumber \\ & & \qquad \qquad \qquad \qquad + 4V_{1}^{LR}(s)^{*} S_{8}(t) + 3 V_{1}^{LR}(s)^{*} S_{1}(t)\bigg] \bigg\}
\label{eq:5}\end{aligned}$$ with the propagators $$\begin{aligned} && V_{1}^{xy}(s) = \cases{\displaystyle \frac{\displaystyle e^{2}}{\displaystyle s} + \frac{\displaystyle g_{Zx}g_{Zy}} {\displaystyle s-M_{Z}^{2} + iM_{Z} \Gamma_{Z}} + \frac{\displaystyle g_{V1}^{2} \eta_{x1} \eta_{y1}} {\displaystyle s - M_{V1}^{2} + iM_{V1} \Gamma_{V1}}, & ($x,y = L,R$) \cr \frac{\displaystyle g_{W}g_{W}'} {\displaystyle s-M_{W}^{2} + iM_{W} \Gamma_{W}}, & ($x,y=L$) \cr } \label{eq:6} \\&& V_{8}^{xy}(s) = \frac{g^{2}}{s} + \frac{g_{V8}^{2} \eta_{x8} \eta_{y8}}{s-M_{V8}^{2} + iM_{V8} \Gamma_{V8}}, \,\,(x,y = L,R) \label{eq:7} \\&& S_{1}(s) = \frac{g_{S1}^{2}}{s-M_{S1}^{2} + iM_{S1} \Gamma_{S1}}, \label{eq:8} \\&& S_{8}(s) = \frac{g_{S8}^{2}}{s-M_{S8}^{2} + iM_{S8} \Gamma_{S8}}. \label{eq:9}\end{aligned}$$ Here, $e$ is the electromagnetic coupling constant, $g$ is the QCD coupling constant, $g_{\scriptscriptstyle ZL}$ and $g_{\scriptscriptstyle ZR}$ are the left- and right-handed coupling constants of $Z$ boson, $g_{W}$ is the weak gauge coupling constant times the relevant CKM matrix element, $M_{X}$ is the mass of particle $X$, and $\Gamma_{X}$ is the decay width. If the decay of the excited boson is dominated by the two body decay due to the interactions given in Eq. (\[eq:1\]), its decay width is given by $$\begin{aligned} && \Gamma_{V8} = \Gamma_{V1} = \frac{M_{V}}{48 \pi} \sum{ g_{V}^{2} \sqrt{1-\frac{4m^{2}}{M_{V}^{2}}} \bigg[\bigg(1-\frac{m^{2}}{M_{V}^{2}}\bigg) (\eta_{L}^{2} + \eta_{R}^{2}) + \frac{6m^{2}}{M_{V}^{2}} \eta_{L} \eta_{R}\bigg]}, \label{eq:10} \\&& \Gamma_{S8} = \Gamma_{S1} = \frac{M_{S}}{48 \pi} \sum{ g_{S}^{2} \sqrt{1-\frac{4m^{2}}{M_{S}^{2}}} \bigg(1-\frac{2m^{2}}{M_{S}^{2}}\bigg)}, \label{eq:11}\end{aligned}$$ where $\sum$ denotes the summation over flavors of final quarks and $m$’s are the final quark masses, all of which but the top quark mass can be practically neglected. 
Let us next consider excited quarks (of spin 1/2 for simplicity), which are denoted by $Q$’s. Then, the interaction of $Q$ with quarks ($q$) and gluons ($G_{\mu}^{a}$) is given by $$\begin{aligned} L_{int} = -\frac{g_Q}{2M_{Q}} \bigg[\overline{Q}\frac{1}{2}\lambda^{a}\sigma^{\mu\nu}G_{\mu \nu}^{a} q_{L} + \overline{q_{L}} \frac{1}{2} \lambda^{a} \sigma^{\mu \nu} G_{\mu \nu}^{a} Q + (L \leftrightarrow R)\bigg] \label{eq:12}\end{aligned}$$ where $g_Q$ is a coupling constant and $M_{Q}$ is the excited quark mass. Note that an excited quark coupling with $q_{L}$ and another excited quark coupling with $q_{R}$ must be different from one another if the chiral symmetry of quarks is preserved. If this is the case, the differential cross section for the scattering of $q \bar{q} \rightarrow GG$ is given by $$\begin{aligned} \frac{d\sigma}{dz} & = & \frac{1}{27 \pi s} \bigg[g^{4} (t^{2} + u^{2}) \bigg(\frac{1}{tu} - \frac{9}{4s^{2}}\bigg) \nonumber \\ % & & \qquad \quad +\frac{g^{2} g_Q^{2}}{4M_{Q}^{2}} Re\bigg(\frac{t^{2}}{t - M_{Q}^{2} + iM_{Q} \Gamma_{Q}} + (t \leftrightarrow u)\bigg) \nonumber \\ % & & \qquad \quad + \frac{g_Q^{4}ut}{M_{Q}^{4}} \bigg(\bigg| \frac{t}{t - M_{Q}^{2} + iM_{Q} \Gamma_{Q}} \bigg|^{2} + (t \leftrightarrow u)\bigg)\bigg]. \label{eq:13}\end{aligned}$$ If the decay of $Q$ is dominated by the interactions given in Eq. (\[eq:12\]), the decay width of $Q$ is given by $$\begin{aligned} \Gamma_{Q} = \frac{g_Q^{2}}{6 \pi} M_{Q} \bigg(1 - \frac{m^{2}}{M_{Q}^{2}}\bigg), \label{eq:14}\end{aligned}$$ where $m$ is the final quark mass, any one of which except the top quark mass can be practically neglected. 
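As a quick numerical check of Eq. (\[eq:14\]), the following Python sketch (ours, not part of the original analysis; the function name is illustrative, and the parameter values $M_Q$ = 500 GeV, $\alpha_Q$ = 0.2, $r$ = 3 are those quoted for the excited-quark fit in FIG. 1) evaluates the two-jet width for a massless final quark:

```python
import math

def gamma_Q(alpha_Q, M_Q, m=0.0):
    """Two-jet decay width of an excited quark, Eq. (14):
    Gamma_Q = (g_Q^2 / 6 pi) * M_Q * (1 - m^2 / M_Q^2),
    with alpha_Q = g_Q^2 / 4 pi and m the final quark mass (GeV)."""
    g2 = 4.0 * math.pi * alpha_Q
    return g2 / (6.0 * math.pi) * M_Q * (1.0 - m**2 / M_Q**2)

two_jet_width = gamma_Q(0.2, 500.0)   # ~ 67 GeV for a massless final quark
total_width = 3 * two_jet_width       # ~ 200 GeV, assuming the broadening factor r = 3
```

For these parameters the two-jet width is about 67 GeV; a broadening factor $r = 3$ then gives a total width of roughly 200 GeV, broad enough to fit the gentle slope of the observed $E_T$ distribution.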
Note that if an excited quark coupling with $q_{L}$ and another excited quark coupling with $q_{R}$ are not distinguished from one another, which leads to breaking of the chiral symmetry of quarks, the above differential cross section would need an additional term, $$\begin{aligned} \frac{1}{27 \pi s} \frac{g_Q^{4}ut}{M_{Q}^{4}} \bigg[ \bigg| \frac{t}{t - M_{Q}^{2} + iM_{Q} \Gamma_{Q}} \bigg|^{2} + (t \leftrightarrow u) \nonumber \\ \qquad \qquad \qquad - \frac{4}{3} Re \bigg( \frac{tu}{(t-M_{Q}^{2} + iM_{Q} \Gamma_{Q})^{*}(u-M_{Q}^{2} + iM_{Q} \Gamma_{Q})}\bigg)\bigg]. \label{eq:15}\end{aligned}$$ In this case, the decay width of $Q$ would also be twice that given in Eq. (\[eq:14\]). Note also that the differential cross sections for the crossed channels can easily be obtained by exchanging ($s,t,u$) appropriately and by rewriting the statistical factors due to the different spins and colors of the initial (and final) quarks (or gluons). Now we evaluate the single jet $p_{T}$ inclusive distribution, the dijet invariant mass distribution, and the dijet angular distribution in $p\bar p$ scattering in the CDF energy region. For the elementary processes, we take the $2 \rightarrow 2$ processes of quarks, antiquarks, and gluons. Also, we assume that any one of the $u$, $d$, $s$, $c$, $b$ quarks or gluons in the final states is observed as a jet. Although the authors of Ref. [@1] have found the excess at high $p_{T}$ by comparing their data with next-to-leading order calculations, we have restricted ourselves to the leading order contribution from the composite models.
Since higher order corrections are expected to contribute almost equally to the standard model calculations and to the composite model ones, the ratio of the composite model calculation to the standard model one may not be much affected by higher order corrections and may be meaningful enough even if both calculations are only at the leading order. As for the parton distribution functions, we use those of Glück-Reya-Vogt in Ref. [@9]. In FIG. 1, the predictions of the composite models with excited states for the single jet $E_{T}$ inclusive distribution divided by those of the standard model are compared with the corresponding CDF experimental result reported in Ref. [@1]. We have taken the same average over the pseudorapidity range of $0.1 \le \mid \eta \mid \le 0.7$ as the CDF experiment [@1]. Based on such comparison, we have performed detailed chi-square analyses and determined the allowed regions (95% confidence level) of the mass $M_{X}$ and coupling constant $\alpha_{X} (\equiv g_{X}^{2}/4 \pi)$ of various types of excited states $X$ (see FIG. 2). It indicates that excited bosons with $\alpha_{X}>0.1$ and $M_{X}>1000$ GeV are allowed. The excess of the $E_T$ distribution is well fitted by the tail of the high-mass resonance of the excited boson. On the other hand, there is no allowed region for a single excited quark, as long as we assume that the two-jet decay mode dominates the decay. This is because the width (\[eq:14\]) is too narrow to fit the rather gentle slope of the observed data in FIG. 1. The width, however, can be broadened due to (i) other decay modes such as multi-jet or semi-jet processes, (ii) coexistence of several resonances, or (iii) limited resolution of the jet energy and momentum measurement. Let $r$ be the ratio of the total decay width to the partial width (\[eq:14\]) of the decay to the two-jet mode. In FIG.
2, we also show the allowed region for the $M_{X}$ and $\alpha_{X}$ of the excited quarks for the cases of $r=2$ and 3. It is restricted to the low-mass region 400 GeV $< M_{X} <$ 900 GeV and $0.03 < \alpha_{X} < 0.8$. To get more precise information, it may be extremely useful to investigate the dijet invariant mass and angular distributions. Figure 3 shows the predictions with the typical excited states for the dijet invariant mass ($E_{\rm dijet}$) distribution divided by those of the standard model. It predicts a significant excess in the high dijet mass region. Figure 4(a) shows the predicted dijet angular distribution as a function of $\chi \equiv (1+\cos{\theta})/(1-\cos{\theta})$ (normalized by the average over the region of $1 \le \chi \le 5$). The model with excited states predicts a relative excess in the low-$\chi$ (i.e., large-$\theta$) region, since the peak at $\theta=0$ due to the exchange of light quarks and massless gluons is absent in the additional contributions from the excited states. To see this more clearly, we show in FIG. 4(b) the ratio of the number of the expected events for $\chi < 2.5$ to that for $\chi > 2.5$ as a function of the dijet invariant mass $E_{\rm dijet}$. To sum up, we have shown in this letter that the significant excess found by the CDF Collaboration can be explained either by possible production of excited bosons whose masses are around 1600 GeV or by that of excited quarks whose masses are around 500 GeV. Copious production of such excited particles can be expected in the future $e^+e^-$ NLC experiments and $pp$ LHC experiments. In conclusion, we must mention that although we have assumed excited quarks of spin 1/2 for simplicity, one can also assume those of spin 3/2, which has very recently been emphasized by Bander [@10].
Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank Professor Kunitaka Kondo for giving them valuable information on the CDF experiment and for sending them the manuscripts before publication. One of the authors (H.T.) also wishes to thank Professor Stanley J. Brodsky and all the other staff members, especially Professors James D. Bjorken and Michael Peskin, of the Theoretical Physics Group at the Stanford Linear Accelerator Center, Stanford University, not only for their useful discussions on the substructure of quarks but also for the warm hospitality extended to him during his visit in July, 1996, when this work was completed. The other author (K.A.) also wishes to thank Professors Yuichi Chikashige and Tadashi Kon for useful discussions and communications. [99]{} CDF Collaboration, F. Abe [*et al.*]{}, Phys. Rev. Lett. , 438 (1996). J.C. Pati and A. Salam, Phys. Rev. D, 275 (1974); H. Terazawa, Y. Chikashige, and K. Akama, Phys. Rev. D, 480 (1977); H. Terazawa, Phys. Rev. D, 184 (1980). For a review, see, for example, H. Terazawa, in [*Proc. XXII International Conference on High Energy Physics*]{}, Leipzig, 1984, edited by A. Meyer and E. Wieczorek (Akademie der Wissenschaften der DDR, Zeuthen, 1984), Vol. I, p. 63. For a recent review, see, for example, H. Terazawa, in [*Proc. of the Fourth International Conference on Squeezed States and Uncertainty Relations*]{}, Taiyuan, Shanxi, China, 1995, edited by D. Han [*et al.*]{}, NASA Conference Publication (NASA Goddard Space Flight Center, Greenbelt, Maryland, 1996), p. 179. See, for examples, H. Terazawa, Phys. Rev. Lett. , 823 (1990); K. Akama, H. Terazawa, and M. Yasuè, Phys. Rev. Lett. , 1826 (1992). See, for example, K. Akama and H. Terazawa, Phys. Lett. , 145 (1994); Mod. Phys. Lett. , 3423 (1994); H. Terazawa, Institute for Nuclear Study, University of Tokyo, Tokyo, Report No. INS-Rep.-1131, Jan. 1996. H. Terazawa, in Ref. \[2\]. K. Akama and T. Hattori, Phys. Rev.
D, 3688 (1989); K. Akama, T. Hattori, and M. Yasuè, Phys. Rev. D, 789 (1990); D, 1702 (1991); S. Ishida and M. Sekiguchi, Prog. Theor. Phys. , 491 (1991); K. Akama and T. Hattori, Int. J. Mod. Phys. , 3503 (1994). M. Glück, E. Reya, and A. Vogt, Z. Phys. , 433 (1995). M. Bander, Phys. Rev. Lett. , 601 (1996). FIG. 1. The predictions of the composite models with excited states for the single jet $E_{T}$ inclusive distribution divided by those of the standard model. The label SM indicates the prediction of the standard model, $V8$ ($V8'$) indicates that with a vector-octet excited boson with $M_{V8}$ = 1600 GeV (2000 GeV) and $\alpha_{V8}=g_{V8}^2/4\pi$ = 1, and $Q$ indicates that with the excited quark with $M_{Q}$ = 500 GeV, $\alpha_{Q}=g_Q^2/4\pi$ = 0.2, and $r$ = 3, where $r$ is the ratio of the total decay width to the partial width of the decay to the two-jet mode. The points with error bars are the corresponding CDF experimental results reported in Ref. [@1]. FIG. 2. The allowed regions (95% confidence level) of the mass $M_{X}$ and coupling constant $\alpha_{X} (\equiv g_{X}^{2}/4 \pi)$ of various types of excited states $X$. The labels $V8$, $V1$, $S8$, and $S1$ indicate vector-octet, vector-singlet, scalar-octet, and scalar-singlet excited bosons, respectively.
FIG. 3. The predictions with the typical excited states for the dijet invariant mass ($E_{\rm dijet}$) distribution divided by those of the standard model. The labels SM, $V8$, $V8'$, and $Q$ are the same as those in FIG. 1. FIG. 4. (a) The predicted dijet angular distribution as a function of $\chi \equiv (1+\cos\theta)/(1-\cos\theta)$, normalized by the average over the region of $1 \le \chi \le 5$, for $E_{\rm dijet} > 625$ GeV. (b) The ratio of the number of the expected events for $\chi < 2.5$ to that for $\chi > 2.5$ as a function of the dijet invariant mass $E_{\rm dijet}$. The labels SM, $V8$, $V8'$, and $Q$ are the same as those in FIG. 1.
--- abstract: 'In this work we use the in-situ accumulated stress monitoring technique to evaluate the evolution of the stress during the strain balancing of InAs/GaAs quantum dots and quantum posts. The comparison of these results with simulations and with other strain balance criteria commonly used indicates that it is necessary to consider the kinetics of the process, not only the nominal values of the deposited materials. We find that the substrate temperature plays a major role in the compensation process and it is necessary to take it into account in order to achieve the optimum compensation conditions. The application of the technique to quantum posts has allowed us to fabricate nanostructures of exceptional length (120 nm). In-situ accumulated stress measurements show that, even in shorter nanostructures, relaxation processes can be inhibited, with the resulting increase in the material quality.' author: - 'D. Alonso-Álvarez' - 'B. Alén' - 'J. M. Ripalda' - 'A. Rivera' - 'A. G. Taboada' - 'J. M. Llorens' - 'Y. González' - 'L. González' - 'F. Briones' bibliography: - 'papers3.bib' title: 'In-situ accumulated stress measurements: application to strain balanced quantum dots and quantum posts' --- Introduction ============ In-situ characterization techniques are among the most powerful tools to control and monitor the kinetics of the epitaxial growth of heterostructures. They can give real time information about the evolution of the growth front and the formation of nanostructures, among other processes.
The reflection high energy electron diffraction (RHEED) system is a standard piece of equipment in most molecular beam epitaxy (MBE) reactors, and there are many works on its usage to monitor changes in surface reconstructions, [@Aspnes1987; @Liu1992; @Tatarenko1994] the formation of quantum dots (QDs) and quantum wires (QWRs), [@Lee2004] the optimum conditions to fabricate them and, even, how to extract the size and shape of those nanostructures from the study of the diffracted pattern. [@Lee1998] In summary, it is a mature technique with a long tradition in this field. Probably less known is the in-situ accumulated stress measurement (ASM) technique, despite the fact that it was introduced back in the early 90’s by Schell-Sorokin and Tromp. [@Schell1990] It basically consists in measuring the stress accumulated in a sample during the epitaxial growth by monitoring the changes in its curvature. The kind of information that can be extracted by this technique is very broad, ranging from the anisotropic strength of surface reconstructions [@Fuster2006] and the formation of QDs, QWRs and quantum rings, [@Silveira2001; @Fuster2004] to thermal expansion in heterostructures and the formation and evolution of dislocations and plastic deformation processes. [@Gonzalez2002a; @Gonzalez2002b] In its basic form, the bending of the sample is measured using the deflection of a laser beam on the sample surface. This method is easy to implement inside an MBE reactor since the whole setup is outside the vacuum chamber. In this case, a lever-shaped sample is fixed at one of its ends to a special sample holder. An aperture of sufficient size at the center of the holder allows the lever to bend freely. These particular requirements on the holder and the sample itself are probably what prevents a general implementation of the technique in commercial MBE reactors.
In general, two parallel laser beams hit the sample in a direction perpendicular to the surface, one on the fixed end and the other on the free one. If the substrate bends, we can measure the deflection of the beam that hits the free end relative to the other beam. Using two beams reduces the noise associated with mechanical vibrations and small temperature variations. The deflection can be recorded by collecting the reflected laser beams with two segmented detectors. [@Fuster2006; @Silveira2001; @Fuster2004; @Gonzalez2002a; @Gonzalez2002b] Working principles ================== Using the above-mentioned geometry, the substrate curvature variation can be calculated as: $$\Delta\left(\dfrac{1}{R}\right) = \dfrac{(d-d_0)\cos \alpha}{d_H 2L} \label{eq:curvatura}$$ where $d$ is the distance between the spots on the detectors, $d_0$ the initial distance between the spots, $d_H$ is the separation between the laser beams, $L$ the sample-detector distance, $\alpha$ is the incidence angle and $\Delta(1/R)$ is the substrate curvature variation. If the deposited layer material has a lattice parameter larger than that of the substrate, then it suffers a compressive stress and the substrate bends, acquiring a convex curvature ($\Delta 1/R > 0$). On the contrary, if the lattice parameter of the deposited layer is smaller than that of the substrate, the strain is tensile and the substrate becomes concave ($\Delta 1/R < 0$).
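Eq. \[eq:curvatura\] is straightforward to evaluate numerically. The Python sketch below (ours, not part of the original setup description) uses the beam separation and sample-detector distance quoted later for the IMM setup, together with an assumed 10 $\mu$m displacement of the signal spot:

```python
import math

def curvature_change(d, d0, d_H, L, alpha=0.0):
    """Substrate curvature variation, Eq. (1):
    Delta(1/R) = (d - d0) * cos(alpha) / (d_H * 2 * L).
    All lengths in meters; positive values correspond to a convex
    (compressively stressed) substrate."""
    return (d - d0) * math.cos(alpha) / (d_H * 2.0 * L)

# d_H = 8 mm beam separation, L = 975 mm sample-detector distance,
# and an assumed 10 micron spot displacement at normal incidence:
dk = curvature_change(10e-6, 0.0, 8e-3, 0.975)  # ~ 6.4e-4 m^-1, i.e. R ~ 1.6 km
```

The tiny curvature obtained (a radius of more than a kilometer for a micron-scale spot displacement) illustrates why the optical-lever geometry is needed at all.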
Before the plastic limit, where the sample suffers partial relief of the accumulated stress through the formation of dislocations, changes in the substrate curvature and the accumulated stress can be related by means of a modified version of Stoney’s equation that includes the biaxial character of the stress in thin layers: $$\dfrac{1}{R} = -\dfrac{6(1-\upsilon_S)\sigma h}{Y_S h_S^2}=-\dfrac{6\sigma h}{M_S h_S^2} \label{eq:stoney}$$ where $R$ is the curvature radius, $h$ is the thickness of the deposited layer, h$_S$ is the substrate thickness, $\sigma$ the stress in the layer and M$_S$=Y$_S$/(1-$\upsilon_S$) the biaxial modulus that relates the Young modulus (Y$_S$) and the Poisson ratio ($\upsilon_S$). This equation is valid only under the following conditions: 1. The thicknesses of the deposited layer and the substrate are much smaller than their lateral dimensions. 2. The thickness of the deposited layer is much smaller than that of the substrate. 3. The stress induced by the layer does not have a component in the direction normal to the sample surface. 4. The substrate material is linearly elastic, homogeneous and isotropic. The deposited layer must also be isotropic. 5. Edge effects are negligible and physical properties are homogeneous in planes perpendicular to the interface. 6. The strain and shear deformations are negligible, in such a way that layer and substrate are within the elastic limit at all times. 7. The substrate has no constraints to bend in either of the two directions. This condition is not fully satisfied in the described experimental setup. As we use lever-shaped substrates with one end fixed to the holder, we constrain the bending along the short side. If the lever satisfies $a > 3b$, with $a$ and $b$ the dimensions of the long and short sides, respectively, then the deformation in the transverse direction will not influence the bending along the long axis and this condition can be fulfilled.
On the other hand, the crystal structure of the materials used in this work does not allow us to fulfil the condition of isotropy. Eq. \[eq:stoney\] is then not strictly valid and must be adjusted to the experimental conditions, taking into account the crystal orientation of the interface and the elastic constants of the material along the direction in which the curvature is measured. In this way, the biaxial modulus becomes: $$M_S \equiv \dfrac{Y_S}{1-\upsilon_S} = c_{11}+c_{12}-2\dfrac{c_{12}^2}{c_{11}} \label{eq:biaxial}$$ where c$_{ij}$ are the substrate elastic constants. Rearranging Eq. \[eq:stoney\] for uniaxial stress along the \[110\] and \[1-10\] directions: $$\dfrac{1}{R} = -\dfrac{6\sigma h}{h_S^2}\dfrac{M_S+2c_{44}}{4M_Sc_{44}} \label{eq:stoney2}$$ The value of c$_{44}$ is approximately M$_S$/2, so the error introduced by using Eq. \[eq:stoney\] instead of \[eq:stoney2\] is less than 2%. Until now, we have considered the stress introduced by a layer in a static situation. However, during MBE growth the stress variation might be due to changes in the deposited layer thickness, changes in its stress or even in the surface reconstruction. For this reason, if the thickness of a layer changes by $dh$ during the interval \[$t$, $t+dt$\], using a differential form of Eq. \[eq:stoney\]: $$\dfrac{M_S h_S^2}{6}\dfrac{d\left(1/R\right)}{dt} = \sigma (z=h, t)\dfrac{dh}{dt}+\int _0^h\dfrac{d\sigma}{dt}dz+\left[\Delta \tau _S \right] \label{eq:diff}$$ The right hand side of this equation has three terms. The first one describes the change in stress associated with an increase of the thickness $h$ in the time interval \[$t$, $t+dt$\]. The second term accounts for relaxation processes in the already deposited layer at time $t$. Finally, the third term is related to changes in the surface stress.
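The (001) biaxial modulus is easy to check numerically. In the sketch below (ours), the room-temperature GaAs elastic constants $c_{11} \approx 118.8$ GPa and $c_{12} \approx 53.8$ GPa are assumed literature values, not taken from the text; with the standard (001) combination $c_{11}+c_{12}-2c_{12}^2/c_{11}$ they reproduce the M$_S$ = 124 GPa used later for the GaAs substrates:

```python
def biaxial_modulus(c11, c12):
    """(001) biaxial modulus, M_S = c11 + c12 - 2*c12**2/c11.
    Units follow those of the inputs (GPa here)."""
    return c11 + c12 - 2.0 * c12**2 / c11

# Assumed room-temperature GaAs elastic constants, in GPa:
M_S = biaxial_modulus(118.8, 53.8)  # ~ 124 GPa
```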
We can define the accumulated stress at time $t$ as: $$\Sigma\sigma[h(t)] = \int _0^{h(t)}\sigma(z,t)dz \label{eq:accumulated}$$ whose time derivative is precisely the sum of the first two terms on the right hand side of Eq. \[eq:diff\]. Substituting Eq. \[eq:accumulated\] in \[eq:diff\] and integrating over time we obtain: $$\dfrac{M_S h_S^2}{6}\Delta\left(\dfrac{1}{R}\right) = \Sigma\sigma[h(t)]+\left[\Delta \tau _S \right] \label{eq:stoney3}$$ As can be seen, the magnitude measured in these experiments is the sum of the accumulated stress and the stress associated with changes in the surface reconstruction. Finally, combining Eq. \[eq:curvatura\] (taking $\cos \alpha \approx 1$) and Eq. \[eq:stoney3\] we get: $$\Sigma\sigma[h(t)]+\left[\Delta \tau _S \right] = \dfrac{M_S h_S^2}{12}\dfrac{[d(t)-d_0]}{d_HL} \label{eq:stress}$$ Implementation and characteristics of the technique =================================================== An important improvement of this technique, as implemented at the Instituto de Microelectrónica de Madrid (IMM), is the use of a large-area CCD camera to record the two beams simultaneously. This method has several advantages over the segmented detectors. On the one hand, the optical alignment is considerably easier since there is only one detector to be put in place to record both beams. On the other hand, it has a larger dynamic range, as the reflected spots are recorded at all times regardless of their separation and exact positions (within a reasonable range). Finally, it has a resolution comparable to the segmented-detector method without the need of low-noise amplifiers or other extra equipment. Figure \[fig:setup\] shows a detailed schema of the ASM system available at the IMM. The laser source (608 nm) produces an intense beam that hits a beam splitter, producing two perfectly parallel beams of similar intensity. The beams cross the optical window of the MBE reactor and reach the sample perpendicularly to its surface.
One of the beams, hereafter the reference beam (RB), hits the fixed end of the sample, so its reflection is not affected by the growth process. The other beam, hereafter the signal beam (SB), hits the free end of the sample, and its reflection is affected by the bending of the sample and hence by the stress accumulated during growth. The reflected beams are measured in a backscattering geometry, minimizing the error introduced by the approximation made in Eq. \[eq:stress\]. The RB reaches the CCD directly, whereas the SB crosses a prism that changes its trajectory and sends it to the camera. This prism is of capital importance in the setup, as it allows the use of a CCD camera instead of segmented detectors. Even if the beams were reflected perfectly parallel, the distance between the spots would be around 1 cm. In the more realistic case, where the beams diverge due to the deflection of the lever, the separation at a reasonable distance from the sample surface ($\sim$1 m in our case) can reach several cm, too large for most CCDs. Since the accumulated stress measurement depends only on the change in the distance between the spots and not on its absolute value, this approach has no effect on the results. We use a SpotOn CCD camera from Duma Optronics Ltd. and its acquisition software to obtain beam positioning with sub-micron resolution. The distance between the spots is sent to custom software that converts it into accumulated stress, in real time, by means of Eq. \[eq:stress\]. This software also records the opening and closing of the effusion cell shutters, giving an exact match between the evolution of the accumulated stress and the growth of the different materials.
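The geometric relation underlying these formulas — a curvature $1/R$ changes the spot separation by $\Delta d \approx 2 d_H L/R$ — can be checked with a small ray trace. This is an illustrative sketch only; the bending radius below is arbitrary, and only the magnitude matters (a convex surface moves the spots apart, a concave one together):

```python
import math

def spot_separation(R, d_H, L):
    """Separation at a detector (distance L) of two initially parallel rays,
    offset by 0 and d_H, after reflection from a cylindrical surface of radius R."""
    hits = []
    for x in (0.0, d_H):
        theta = math.asin(x / R)            # tilt of the surface normal at offset x
        sag = R * (1.0 - math.cos(theta))   # surface height relative to the apex
        # the incoming ray travels along -z; the reflected ray makes 2*theta with +z
        hits.append(x + (L - sag) * math.tan(2.0 * theta))
    return hits[1] - hits[0]

R, d_H, L = 50.0, 8e-3, 0.975               # 50 m bending radius, IMM-like geometry
delta_d = spot_separation(R, d_H, L) - d_H  # change with respect to a flat sample
print(delta_d, 2 * d_H * L / R)             # agrees with the small-angle estimate
```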
With this information, and assuming a typical distance of L = 975 mm between sample and detector, $d_H$ = 8 mm as the initial separation of the laser beams, and using the parameters characteristic of our substrates (GaAs, thickness $h_S$=100 $\mu$m, $M_S$ = 124 GPa), we obtain a maximum resolution of 0.02 N/m. This high resolution is normally not attainable due to vibrations and noise in the environment. Mechanical vacuum pumps, either from the MBE reactor or from nearby equipment, have the most detrimental effect and must be switched off in order to perform high-quality measurements. The actual resolution of our system is normally between 0.05 and 0.1 N/m. ![Stress accumulated during the formation of QDs after 2 ML of InAs. The lower part of the figure shows the material growing at each time.[]{data-label="fig:StressQDs"}](StressQDs.eps) Experimental results ==================== We have used this technique to characterize the growth of strain-balanced quantum dots and quantum posts (QPs). These nanostructures have a three-dimensional shape and, thus, the usual equations for calculating the optimum strain balance condition cannot be used, as the strain is inhomogeneous and the layer thicknesses are not well defined. [@Tatebayashi2009; @Bailey2009; @Alonso-Alvarez2010] In this work we use the in-situ accumulated stress measurements, as described above, to obtain the real stress introduced by the QDs and the most appropriate GaAsP thickness and composition to compensate that stress exactly. All samples have been grown by solid-source MBE on GaAs (001) substrates 100 $\mu$m thick. InAs and GaAs/GaAsP growth rates are 0.02 and 0.5 ML/s, respectively. The As beam equivalent pressure (BEP) is kept at 1.5$\times$10$^{-6}$ mbar at all times. Strain balanced InAs quantum dots --------------------------------- In Fig.
\[fig:StressQDs\] we show the evolution of the total accumulated stress as we grow an InAs QD layer at different substrate temperatures. As found by Silveira *et al.*, four regions can be distinguished: [@Silveira2001] Region I: InAs begins to grow layer by layer, increasing the compressive stress linearly (except for a transition region at the beginning); Region II: at the critical thickness, the surface relaxes and QDs nucleate, while the remaining deposited In keeps floating on the surface or incorporates into the existing islands without increasing the stress; Region III: during capping, this remaining In incorporates, suddenly increasing the accumulated stress; and Region IV: once the In is exhausted, GaAs grows without any further change in the stress. As can be seen, the maximum accumulated stress depends strongly on the substrate temperature and also on the total amount of In deposited, as shown in Fig. \[fig:StressQDs2\](a) and (b) (filled symbols). This dependence is disregarded in the strain balance criteria used for QWs. The accumulated stress introduced by a flat, strained layer, assumed homogeneous, can be approximated by: $$\Sigma\sigma_L = M_L \epsilon_L t_L \label{eq:stressperlayer}$$ where M$_L$, $\epsilon_L=(a_{subs}-a_L)/a_{subs}$ and t$_L$ are the layer biaxial modulus, the lattice mismatch between the layer and the substrate, and the layer thickness, respectively. Fig. \[fig:StressQDs2\] also shows the results of this equation applied to the nominal InAs thickness used in each case (open symbols). It can be seen that using this equation to calculate the stress introduced by the QDs and, hence, the strain balance condition, underestimates the accumulated stress in all cases. ![Accumulated stress per QD as a function of the growth temperature (a) and InAs thickness (b). Filled symbols represent the experimental data and empty ones the result of applying Eq.
\[eq:stressperlayer\][]{data-label="fig:StressQDs2"}](StressQDs2.eps) The reason for this discrepancy lies in the assumption that all the deposited In incorporates in the form of InAs. It is well known that, during QD capping, there is strong Ga–In intermixing, leading to quantum dots, a wetting layer, and a capping layer made of InGaAs of varying composition. If the total In incorporated into the sample is kept constant, a dilute InGaAs alloy introduces more stress than a pure InAs layer. Fig. \[fig:InComp\] shows the dependence of the accumulated stress on the In content of the layer ($x$), keeping the restriction of equal overall In content: $$t_{L}\times x=A=\mathrm{constant} \Longrightarrow \Sigma\sigma_L = M_L(x)\, \epsilon_L(x)\, \dfrac{a_{L}(x)}{2}\dfrac{A}{x} \label{eq:InComp}$$ where $t_{L}$ is the layer thickness in ML and $a_L(x)/2$ is the corresponding monolayer height. M$_L$(x) and a$_L$(x) are obtained by linear interpolation of the GaAs and InAs parameters. As an example, if the In contained in a pure InAs monolayer is spread over two monolayers, giving an In$_{0.5}$Ga$_{0.5}$As alloy, the resulting accumulated stress changes from -1.7 N/m to -2.1 N/m. ![Evolution of the accumulated stress as a function of the In composition for a given total In amount. The right scale shows the corresponding layer thickness.[]{data-label="fig:InComp"}](InComp.eps) A similar analysis can be performed for the growth of the GaAsP compensating layer. In this case, the variable parameter is the P BEP, which determines the GaAsP composition. Fig. \[fig:StressGaAsP\] shows the evolution of the accumulated stress as a function of time. As expected, the accumulated stress is tensile in this case, since the lattice parameter of GaAsP is smaller than that of GaAs. The composition of the layer can be estimated using Eq. \[eq:stressperlayer\], although it is not actually needed to calculate the optimum strain balance condition. ![Accumulated stress as a function of the P BEP.
Next to each curve is the P content of the alloy as estimated from Eq. \[eq:stressperlayer\][]{data-label="fig:StressGaAsP"}](StressGaAsP.eps) Knowing the compressive stress introduced by the QDs and the tensile stress provided by the GaAsP strain-balance layer (SBL) as a function of its composition, we designed two strain-balanced QD stacks (A and B) aiming at 100% strain compensation. In both cases we use 2 ML of InAs for the QDs and a total spacer of 15 nm between layers. The substrate temperature and As BEP are kept constant at 510$\,^{\circ}\text{C}$ and 1.5$\times$10$^{-6}$ mbar, respectively, during the growth of the stacks. The substrate is a cantilever-shaped (4$\times$20 mm) GaAs (001) piece with a thickness of 100 $\mu$m. The only difference between the samples is the compensating layer thickness and composition. In sample A we use a 13 nm thick SBL with 4.3% P after 1 nm of GaAs capping, whereas in sample B we use a 5 nm thick SBL with 18% P after 8 nm of GaAs capping. The evolution of the stress during the growth of both stacks can be seen in Fig. \[fig:Stack\](a) and (b). ![Accumulated stress of samples A (a) and B (b). On the left is a schematic of the layer structure of the samples.[]{data-label="fig:Stack"}](Stack.eps) Several things can be observed in these curves. Firstly, the average strain has been successfully balanced in both cases. Assuming that each QD layer introduces a stress of 5 N/m, we obtain an average strain compensation of 95% for sample A and 105% for sample B. Secondly, in sample A the oscillations corresponding to the accumulation/compensation sequence are damped. This is due to InAs and GaAsP intermixing and was expected given the thin GaAs capping on top of the QDs. The formation of quaternary InGaAsP compounds introduces a stress (compressive or tensile) smaller than that of the corresponding InAs or GaAsP layers separately. Although the strain is balanced on average, the stoichiometry of the stack is uncontrolled.
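For orientation, the strain-balance design can be sketched as a simple inversion of Eq. \[eq:stressperlayer\]. The numbers below are illustrative only (literature lattice constants, and the GaAs biaxial modulus assumed for the dilute alloy); they are not the in-situ calibration actually used in this work, which also captures intermixing and three-dimensional strain effects:

```python
# Illustrative strain-balance design sketch: invert Eq. (eq:stressperlayer) to get
# the GaAs(1-x)P(x) thickness supplying a chosen tensile stress.  Lattice constants
# are literature values; the GaAs biaxial modulus is assumed for the dilute alloy.
a_GaAs, a_GaP = 5.6533, 5.4505   # lattice constants, Angstrom
M_L = 124e9                      # Pa, assumed ~GaAs for dilute GaAsP

def sbl_thickness(x, target=5.0):
    """Thickness (nm) of a GaAsP layer with P fraction x giving `target` N/m."""
    a_L = (1.0 - x) * a_GaAs + x * a_GaP   # Vegard interpolation
    eps = (a_GaAs - a_L) / a_GaAs          # tensile mismatch (> 0 for x > 0)
    return target / (M_L * eps) * 1e9

for x in (0.05, 0.20):
    print(f"x = {x:.2f}: t = {sbl_thickness(x):.1f} nm")  # higher P -> thinner SBL
```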
This intermixing has an impact on the optical properties of the QDs and must be taken into account when placing the barrier too close to the QDs. It should be noted that the results presented here are only one example of perfectly balanced stacks obtained using the in-situ accumulated stress measurements. Other combinations of SBL composition and thickness are possible, such as pure GaP layers, GaInP or dilute nitrides, with optical or electrical properties more suitable for a particular application. Strain balanced quantum posts ----------------------------- Quantum posts (QPs) are assembled by the epitaxial growth of closely spaced quantum dot layers, modulating the composition of a semiconductor alloy, typically InGaAs. Contrary to normal self-assembled nanostructures, the height of the QPs can be controlled by the number of periods of the superlattice grown on top of the seed QD layer. The amount of In in this kind of nanostructure is very large compared to stacked QDs, with the result that the accumulated stress is enormous and there is a tendency towards the formation of dislocations. The largest QPs reported so far are about 40 nm high. In this work we monitor the evolution of the accumulated stress in two QP samples. The first one (sample C) uses a superlattice of 2.2 Å of InAs and 8.5 Å of GaAs grown at 510 $\,^{\circ}\text{C}$. In sample D, on the other hand, we substitute the GaAs with GaAsP with a nominal P content of 14%. This approach has allowed us to fabricate extremely large QPs of up to 120 nm with very interesting optical and electronic properties, as reported elsewhere. [@Alonso-Alvarez2011a] Fig. \[fig:StressQPs\] shows the resulting accumulated stress curves for both samples. Only the first 17 periods are measured, since beyond that point the sample bending became too large to be recorded with the CCD camera.
For each curve, a linear extrapolation of the stress accumulated during the first five periods is marked with dashed lines. Grey areas in the background represent the periods when the In effusion cell is open. The intermediate white regions represent a 10 s growth interruption under As flux plus the GaAs (GaAsP) growth. ![Accumulated stress of regular and strain balanced quantum posts (SB-QPs). Dashed lines are linear extrapolations of the accumulated stress in the first five periods in each case.[]{data-label="fig:StressQPs"}](StressQPs.eps) As can be seen, sample C accumulates more stress than sample D. The degree of compensation in the superlattice (disregarding the QD seed) can be estimated from the linear extrapolations mentioned above and gives a value of 57%. The evolution of the accumulated stress in sample D is linear, the behaviour expected if all periods introduce the same amount of stress. The strain-balance oscillations are barely visible. As observed previously for the QDs, this strong damping is directly related to the intermixing of the constituent materials. In sample C, what is remarkable is the progressive bending of the accumulated stress curve, which deviates from the linear trend observed in sample D. Moreover, there is an apparent inversion of the accumulated stress during the In growth: in these periods, depositing In reduces the accumulated stress rather than increasing it. Both effects might be explained in terms of an initial stage of relaxation processes in the superlattice. As shown by Ujúe et al. during the growth of thick In$_{0.2}$Ga$_{0.8}$As layers on GaAs, prior to the formation of dislocations there is an initial relaxation stage consisting of a ripening of the growth front along the \[1$\overline 1$0\] direction. From the point of view of the accumulated stress, this effect produces a progressive deviation from the linear behaviour stated in Eq. \[eq:stressperlayer\].
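The average In content and average growth rate quoted below for sample C can be checked from its nominal recipe (2.2 Å of InAs at 0.02 ML/s, 8.5 Å of GaAs at 0.5 ML/s, plus the 10 s interruption). A sketch, using literature lattice constants for the (001) monolayer heights:

```python
# Consistency check for sample C (sketch; lattice constants are literature
# values, the rates and thicknesses are the nominal ones given in the text).
a_InAs, a_GaAs = 6.058, 5.653               # lattice constants, Angstrom
ml_InAs, ml_GaAs = a_InAs / 2, a_GaAs / 2   # (001) monolayer heights

n_InAs = 2.2 / ml_InAs                      # ML of InAs per period
n_GaAs = 8.5 / ml_GaAs                      # ML of GaAs per period
t_period = n_InAs / 0.02 + n_GaAs / 0.5 + 10.0   # s, incl. 10 s interruption

x_In = n_InAs / (n_InAs + n_GaAs)           # average In fraction of the superlattice
rate = (n_InAs + n_GaAs) / t_period         # average growth rate, ML/s
print(f"average In content ~ {x_In:.2f}, average rate ~ {rate:.3f} ML/s")
```

This reproduces an average In composition of roughly 20% and an average growth rate of about 0.07 ML/s, the figures invoked in the discussion that follows.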
The onset of the relaxation depends on the growth rate, taking place earlier for slow growth rates. This is roughly the situation found in sample C, where the average In composition is also around 20% and the average growth rate is 0.07 ML/s. On the other hand, the reduction of the accumulated stress during In growth could also be related to a relaxation process, but one of a more local nature. The fabrication of QPs relies on the effective migration of In adatoms towards the top of buried QDs, where their elastic energy is smaller. The accumulation of these In atoms could partly relieve the stress of the InAs beneath them by locally increasing the lattice parameter of the structure. The growth of the GaAs capping suppresses this effect, increasing the stress by incorporating the InAs on the surface into the GaAs lattice structure. The comparison of samples C and D also indicates that strain balancing is not only necessary for extremely large QPs, but may also be desirable for average-sized nanostructures of more than 8 or 10 periods, to avoid the relaxation processes described above. Conclusions =========== In this work we have presented the implementation of a very compact in-situ accumulated stress measurement setup based on the use of a high-resolution CCD camera to monitor the bending of the substrate during growth. The system outperforms previous designs in resolution and simplicity. We have used this system to study the strain balancing of QDs and QPs. We have found that it is possible to achieve perfect strain compensation in QD stacks by calibrating separately the stress introduced by the QDs and by the compensating layer. This process depends strongly on the substrate temperature and on the incorporation of In atoms into the sample. Finally, we have shown a reduction of 57% in the stress accumulated during the growth of QPs by incorporating P into the matrix.
The strain balance technique is found to suppress the relaxation processes that take place in the first stages of the growth of these nanostructures. In summary, these experiments show the capability of the strain balance technique to improve the quality of quantum nanostructures and the importance of growth kinetics in determining the optimum strain balance conditions. Acknowledgements ================ We acknowledge the financial support of MICINN (TEC2008-06756-C03-01/03, ENE2009-14481-C02-02, CSD2006-0004, CSD2006-0019), CAM (S2009ESP-1503, S2009/ENE-1477) and CSIC (PIF 200950I154).
--- abstract: 'UK researchers have made major contributions to the technical ideas underpinning formal approaches to the specification and development of computer systems. Perhaps as a consequence of this, some of the significant attempts to deploy theoretical ideas into practical environments have taken place in the UK. The authors of this paper have been involved in formal methods for many years and both have tracked a significant proportion of the whole story. This paper both lists key ideas and indicates where attempts were made to use the ideas in practice. Not all of these deployment stories have been a complete success and an attempt is made to tease out lessons that influence the probability of long-term impact.' author: - 'Cliff B. Jones, Martyn Thomas' bibliography: - 'master.bib' title: The development and deployment of formal methods in the UK --- [**This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible**]{}
--- abstract: 'We study the convex hull of the first $n$ steps of a planar random walk, and present large-$n$ asymptotic results on its perimeter length $L_n$, diameter $D_n$, and shape. In the case where the walk has a non-zero mean drift, we show that $L_n / D_n \to 2$ a.s., and give distributional limit theorems and variance asymptotics for $D_n$, and in the zero-drift case we show that the convex hull is infinitely often arbitrarily well-approximated in shape by any unit-diameter compact convex set containing the origin, and then $\liminf_{n \to \infty} L_n/D_n =2$ and $\limsup_{n \to \infty} L_n /D_n = \pi$, a.s. Among the tools that we use is a zero-one law for convex hulls of random walks.' author: - James McRedmond - 'Andrew R. Wade' title: | The convex hull of a planar random walk:\ perimeter, diameter, and shape --- [*Key words:*]{} Random walk; convex hull; perimeter length; diameter; shape; zero-one law. [*AMS Subject Classification:*]{} 60G50 (Primary) 60D05; 60F05; 60F15; 60F20 (Secondary) Model and main results ====================== Introduction and notation ------------------------- The geometry of random walks in Euclidean space is a topic of persistent interest (see e.g. [@rg]). The *convex hull* of a random walk is a classical geometrical characteristic of the walk [@sw; @ss] that has received renewed attention recently [@kvz1; @kvz2; @ty; @wx1; @wx2; @vz; @x]; see [@mcr] for a recent survey, including motivation in terms of modelling the home range of roaming animals. The present paper studies the asymptotic behaviour of the convex hull of a random walk in ${{\mathbb R}}^2$, focusing on its *shape*, its *perimeter length*, and its *diameter*. Let $Z, Z_1, Z_2, \ldots$ be i.i.d. random variables in ${{\mathbb R}}^2$, and let $S_0, S_1, S_2, \ldots$ be the associated random walk, defined by $S_0 := {{\mathbf{0}}}$ (the origin in ${{\mathbb R}}^2$) and $S_n := \sum_{k=1}^n Z_k$ for $n \in{{\mathbb N}}$. 
Let ${{\mathcal H}}_n := {\mathop \mathrm{hull}}\{ S_0, S_1, \ldots, S_n \}$, where ${\mathop \mathrm{hull}}A$ denotes the convex hull of $A \subseteq {{\mathbb R}}^2$. Write $L_n$ for the perimeter length of ${{\mathcal H}}_n$, and let $$D_n := \operatorname*{diam}\{ S_0, S_1, \ldots, S_n\} = \operatorname*{diam}{{\mathcal H}}_n.$$ (Note that ${{\mathcal H}}_n$, $L_n$, $D_n$ are all random variables on the appropriate spaces: see the comments at the start of Section \[sec:zero-one\].) A striking early result on $L_n$ is the formula of Spitzer and Widom [@sw]: $$\label{s-w} \operatorname{\mathbb{E}}L_n = 2 \sum_{k=1}^n \frac{1}{k} \operatorname{\mathbb{E}}\| S_k \| ,$$ where, and subsequently, $\| \, \cdot \, \|$ is the Euclidean norm on ${{\mathbb R}}^2$. For ${{\mathbf{x}}}\in {{\mathbb R}}^2 \setminus \{ {{\mathbf{0}}}\}$ we set $\hat {{\mathbf{x}}}:= {{\mathbf{x}}}/ \| {{\mathbf{x}}}\|$. If $\operatorname{\mathbb{E}}\| Z \| < \infty$, we write $\mu := \operatorname{\mathbb{E}}Z$. If $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, we write $\sigma^2 := \operatorname{\mathbb{E}}( \| Z - \mu \|^2 )$, and if $\mu \neq {{\mathbf{0}}}$ we write $${\sigma^2_{\mu}}:= \operatorname{\mathbb{E}}[ ( ( Z - \mu ) \cdot \hat \mu )^2 ], \text{ and } {\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}:= \sigma^2 - {\sigma^2_{\mu}}.$$ The results in this paper concern the asymptotics of $L_n$, $D_n$, and the shape of ${{\mathcal H}}_n$. The cases where $\mu ={{\mathbf{0}}}$ and $\mu \neq {{\mathbf{0}}}$ are, as is no surprise, quite different. Rough information about the shape of ${{\mathcal H}}_n$ is given by the ratio $L_n/D_n$; provided that ${{\mathbb P}}( Z = {{\mathbf{0}}}) < 1$, convexity implies that a.s., for all but finitely many $n$, $$\label{l-d-ineq} 2 \leq {L_n}/{D_n} \leq \pi ,$$ the extrema being the line segment and shapes of constant width (such as the disc). 
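As an informal numerical aside (not part of the formal development), the Spitzer–Widom formula can be checked by Monte Carlo: for standard bivariate normal increments, $\operatorname{\mathbb{E}}\| S_k \| = \sqrt{\pi k /2}$, so the right-hand side of the formula is fully explicit. A pure-Python sketch, using Andrew's monotone chain to compute the hull:

```python
import math, random

def hull_perimeter(points):
    """Perimeter of the convex hull of a planar point set (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) < 2:
        return 0.0
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1])
                                   - (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    hull = lower[:-1] + upper[:-1]
    return sum(math.dist(hull[i], hull[(i+1) % len(hull)]) for i in range(len(hull)))

random.seed(1)
n, trials = 50, 2000
mc = 0.0
for _ in range(trials):
    x = y = 0.0
    walk = [(0.0, 0.0)]
    for _ in range(n):
        x += random.gauss(0, 1); y += random.gauss(0, 1)
        walk.append((x, y))
    mc += hull_perimeter(walk)
mc /= trials

# Spitzer-Widom: E L_n = 2 * sum_k E||S_k|| / k, with E||S_k|| = sqrt(pi*k/2)
# for standard bivariate normal increments.
sw = 2 * sum(math.sqrt(math.pi * k / 2) / k for k in range(1, n + 1))
print(mc, sw)   # the two agree to within Monte Carlo error
```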
Laws of large numbers --------------------- Our first result is the following law of large numbers for $L_n$. \[t:ss\_lln\] Suppose that $\operatorname{\mathbb{E}}\| Z \| < \infty$. Then $$\lim_{n \to \infty} n^{-1} L_n = 2 \| \mu \| , \text{ a.s.~and in } L^1 .$$ On the other hand, if $\operatorname{\mathbb{E}}\| Z \| = \infty$ then $\limsup_{n \to \infty} n^{-1} L_n = \infty$, a.s. Snyder and Steele [@ss] obtained the almost-sure convergence result under the stronger condition $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ as a consequence of an upper bound on $\operatorname{\mathbb{V}ar}L_n$ deduced from Steele’s version of the Efron–Stein inequality. (Snyder and Steele state their result for the case $\mu \neq {{\mathbf{0}}}$, but their proof works equally well when $\mu = {{\mathbf{0}}}$.) Similarly, we have a law of large numbers for $D_n$. \[t:diam\_lln\] Suppose that $\operatorname{\mathbb{E}}\| Z \| < \infty$. Then $$\lim_{n \to \infty} n^{-1} D_n = \| \mu \| , \text{ a.s.~and in } L^1 .$$ On the other hand, if $\operatorname{\mathbb{E}}\| Z \| = \infty$ then $\limsup_{n \to \infty} n^{-1} D_n = \infty$, a.s. In the case $\mu \neq {{\mathbf{0}}}$, Theorems \[t:ss\_lln\] and \[t:diam\_lln\] have the following immediate consequence. \[cor:ratio-drift\] Suppose that $\operatorname{\mathbb{E}}\| Z \| < \infty$ and that $\mu \neq {{\mathbf{0}}}$. Then $$\lim_{n \to \infty} L_n / D_n = 2, {\ \text{a.s.}}$$ Zero-drift case --------------- In the zero-drift case, we need the extra condition $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$. Let $\Sigma := \operatorname{\mathbb{E}}( Z Z^{{\scalebox{0.6}{$\top$}}})$, viewing $Z$ as a column vector; note that if $\mu = {{\mathbf{0}}}$ then $\operatorname{tr}\Sigma = \sigma^2$. Our first result is the following shape theorem. Let $\rho_H$ denote the Hausdorff distance between non-empty compact sets; see  below for a definition. 
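Before stating the theorem, a quick simulation illustrates the contrast between the two regimes (an informal check, not part of the proofs): with non-zero drift the ratio $L_n/D_n$ is close to $2$, as in Corollary \[cor:ratio-drift\], while in the zero-drift case it wanders inside $[2,\pi]$, consistent with the bound \[l-d-ineq\]. The diameter is computed by brute force over the hull's vertices, which is cheap since the hull has few vertices:

```python
import math, random

def convex_hull(points):
    """Convex hull vertices, counter-clockwise (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1])
                                   - (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def ratio(drift, n=4000, seed=2):
    """L_n / D_n for a Gaussian walk with the given mean drift vector."""
    rng = random.Random(seed)
    x = y = 0.0
    walk = [(0.0, 0.0)]
    for _ in range(n):
        x += drift[0] + rng.gauss(0, 1)
        y += drift[1] + rng.gauss(0, 1)
        walk.append((x, y))
    hull = convex_hull(walk)
    L = sum(math.dist(hull[i], hull[(i+1) % len(hull)]) for i in range(len(hull)))
    D = max(math.dist(p, q) for p in hull for q in hull)
    return L / D

r_drift = ratio((1.0, 0.0))   # non-zero drift: ratio close to 2
r_zero = ratio((0.0, 0.0))    # zero drift: ratio strictly between 2 and pi
print(r_drift, r_zero)
```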
\[thm:shape\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, $\Sigma$ is positive definite, and $\mu = {{\mathbf{0}}}$. Then, for any compact convex set $K \subset {{\mathbb R}}^2$ with $\operatorname*{diam}K = 1$, $$\liminf_{n\to \infty} \rho_H(D_n^{-1}{{\mathcal H}}_n,K) =0, {\ \text{a.s.}}$$ Note that under the hypotheses of Theorem \[thm:shape\], ${{\mathbb P}}( Z = {{\mathbf{0}}}) < 1$, so that $D_n > 0$ for all but finitely many $n$, a.s. A consequence of Theorem \[thm:shape\] is the following result, which should be contrasted with Corollary \[cor:ratio-drift\]. \[cor:infsupLD\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, $\Sigma$ is positive definite, and $\mu = {{\mathbf{0}}}$. Then, $$2 = \liminf_{n \to \infty} \frac{L_n}{D_n} < \limsup_{n \to \infty} \frac{L_n}{D_n} = \pi, {\ \text{a.s.}}$$ Case with drift --------------- Now we turn to the individual asymptotics for $L_n$ and $D_n$ in the case with non-zero drift. The behaviour of $L_n$ was studied in [@wx1], where it was shown (Theorem 1.3 of [@wx1]) that if $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$, then, as $n \to \infty$, $$\label{wx} n^{-1/2} | L_n - \operatorname{\mathbb{E}}L_n - 2 ( S_n - \operatorname{\mathbb{E}}S_n ) \cdot \hat \mu | \to 0 , \text{ in } L^2 .$$ As shown in [@wx1], this result implies variance asymptotics for $L_n$ as well as a central limit theorem when ${\sigma^2_{\mu}}>0$. We show that  may be recast in the following stronger form. \[thm1\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. Then, as $n \to \infty$, $$n^{-1/2} | L_n - 2 S_n \cdot \hat \mu | \to 0, \text{ in } L^2.$$ The following result is the key additional component in the proof of Theorem \[thm1\], and is of interest in its own right; its proof uses the Spitzer–Widom formula . \[thm:drift-mean\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. 
Then, as $n \to \infty$, $$\operatorname{\mathbb{E}}L_n = 2 \| \mu \| n + \left(\frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{ \| \mu \|} + o(1) \right) \log n .$$ Analogous asymptotic expansions for $\operatorname{\mathbb{E}}L_n$ in the case $\mu = {{\mathbf{0}}}$ have been presented recently in [@glm]. For the diameter $D_n$, we have the following analogue of Theorem \[thm1\]. \[thm2\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. Then, as $n \to \infty$, $$n^{-1/2} | D_n - S_n \cdot \hat \mu | \to 0, \text{ in } L^2.$$ Denote by ${{\mathcal N}}(0,1)$ the standard normal distribution, and by ${\overset{\textup{d}}{\longrightarrow}}$ convergence in distribution. Theorem \[thm2\] yields variance asymptotics and a central limit theorem when ${\sigma^2_{\mu}}>0$, as follows. \[cor:diam-clt\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. Then $\lim_{n \to \infty} n^{-1} \operatorname{\mathbb{V}ar}D_n = {\sigma^2_{\mu}}$. Moreover, if ${\sigma^2_{\mu}}> 0$, for $\zeta \sim {{\mathcal N}}(0,1)$, as $n \to \infty$, $$\frac{ D_n - \operatorname{\mathbb{E}}D_n}{\sqrt{ \operatorname{\mathbb{V}ar}D_n }} {\overset{\textup{d}}{\longrightarrow}}\zeta, \text{ and } \frac{ D_n - n \| \mu \|}{\sqrt{ n {\sigma^2_{\mu}}}} {\overset{\textup{d}}{\longrightarrow}}\zeta .$$ The degenerate case ${\sigma^2_{\mu}}=0$ corresponds to the case where $Z \cdot \hat \mu = \| \mu \|$ a.s., and is of its own interest. It includes, for example, the case where $Z = (1,1)$ or $(1,-1)$, each with probability $1/2$, in which the two-dimensional walk $S_n$ corresponds to the space-time diagram of a one-dimensional simple symmetric random walk. In the case ${\sigma^2_{\mu}}= 0$, Corollary \[cor:diam-clt\] says only that $\operatorname{\mathbb{V}ar}D_n = o(n)$. We prove the following sharper result which requires some additional conditions. 
\[thm3\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^p ) < \infty$ for some $p>2$, $\mu \neq {{\mathbf{0}}}$, and ${\sigma^2_{\mu}}= 0$. Then, $$\label{eqn4} D_n - \| \mu \| n {\overset{\textup{d}}{\longrightarrow}}\frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2}{2 \| \mu \|} ,$$ where $\zeta \sim {{\mathcal N}}(0,1)$. Further, if, in addition, $\operatorname{\mathbb{E}}(\| Z \|^p) < \infty$ for some $p>4$, then $$\label{eqn5} \lim_{n\to \infty} \operatorname{\mathbb{V}ar}D_n = \frac{{\sigma^4_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{2 \| \mu \|^2}.$$ - The higher-moment conditions required in Theorem \[thm3\] are necessary for the proofs that we employ; see also Remark \[p2condition\] below. - The statement  may be written as $$\label{eq8} D_n - S_n \cdot \hat{\mu} {\overset{\textup{d}}{\longrightarrow}}\frac{\sigma_{\mu_{\perp}}^2 \zeta^2}{2\|\mu\|}.$$ It is natural to ask whether  also holds in the case where $\sigma_\mu^2 >0$; if it did, then it would provide an alternative proof of the central limit theorem in Corollary \[cor:diam-clt\]. Simulations suggest that when $\sigma_\mu^2 >0$, equation  holds in some, but not all cases. Open problems and paper outline ------------------------------- When $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, $\mu \neq {{\mathbf{0}}}$, and ${\sigma^2_{\mu}}=0$, Theorem \[thm1\] (see also Theorem 1 in [@wx1]) shows that $\operatorname{\mathbb{V}ar}L_n = o(n)$. It was conjectured in [@wx1] that $\operatorname{\mathbb{V}ar}L_n = O (\log n)$ in this case, which is the subject of ongoing work. We make the following stronger conjecture. Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, $\mu \neq {{\mathbf{0}}}$, ${\sigma^2_{\mu}}= 0$, and ${\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}>0$. Then $$\lim_{n \to \infty} \frac{\operatorname{\mathbb{V}ar}L_n}{\log n} \text{ exists in } (0,\infty).$$ The outline of the remainder of the paper is as follows.
In Section \[sec:lln\] we give the proofs of the laws of large numbers Theorems \[t:ss\_lln\] and \[t:diam\_lln\]. In Section \[sec:zero-one\] we present a zero-one law for the convex hull of random walk (Theorem \[thm:zero-one\]), which we then use to prove Theorem \[thm:shape\] and Corollary \[cor:infsupLD\]. Section \[sec:perim-drift\] presents the proofs of Theorems \[thm1\] and \[thm:drift-mean\]. Sections \[sec:diam-drift\] and \[sec:diam-degen\] give the proofs of Theorems \[thm2\] and \[thm3\] respectively. Finally, rather than interrupting the flow of the main arguments, we present in Appendix \[sec:appendix\] a couple of auxiliary technical results. Laws of large numbers {#sec:lln} ===================== Throughout we use the notation ${{\mathbf{e}}}_\theta = (\cos \theta, \sin \theta)$ for the unit vector in direction $\theta$. We recall (see e.g. equation (2.1) of [@ss]) that *Cauchy’s formula* states that for a finite point set $\{ {{\mathbf{x}}}_0, {{\mathbf{x}}}_1, \ldots, {{\mathbf{x}}}_n \} \subset {{\mathbb R}}^2$, the perimeter length of ${\mathop \mathrm{hull}}\{ {{\mathbf{x}}}_0, {{\mathbf{x}}}_1, \ldots, {{\mathbf{x}}}_n \}$ is given by $$\int_0^{2\pi} \max_{0 \leq k \leq n} ( {{\mathbf{x}}}_k \cdot {{\mathbf{e}}}_\theta ) {{\mathrm d}}\theta .$$ Cauchy’s formula applied to our random walk implies that $$\label{eq:cauchy} L_n = \int_{0}^{2\pi} \max_{0 \leq k \leq n } ( S_k \cdot {{\mathbf{e}}}_\theta ) {{\mathrm d}}\theta .$$ First suppose that $\operatorname{\mathbb{E}}\| Z \| < \infty$. 
Then the strong law of large numbers says that for any ${\varepsilon}>0$ there exists $N_{\varepsilon}$ with ${{\mathbb P}}( N_{\varepsilon}< \infty ) =1$ for which $$\label{eq:lln} \| S_n - n \mu \| < n {\varepsilon}, \text{ for all } n \geq N_{\varepsilon}.$$ Since $S_0 ={{\mathbf{0}}}$, taking $k=0$ and $k=n$ in  and writing $x^+ := x {{\mathbf 1}{\{ x> 0\}}}$, we have $$\begin{aligned} \label{eq:lower-bound} L_n \geq \int_{0}^{2\pi} (S_n \cdot {{\mathbf{e}}}_\theta )^+ {{\mathrm d}}\theta = 2 \| S_n \| ,\end{aligned}$$ by Cauchy’s formula for ${\mathop \mathrm{hull}}\{ {{\mathbf{0}}}, S_n \}$. For $n \geq N_{\varepsilon}$ we have from  that $$\| S_n \| \geq \| n \mu \| - \| S_n - n \mu \| \geq n \| \mu \| - n {\varepsilon}.$$ Since ${\varepsilon}>0$ was arbitrary, it follows that $\liminf_{n \to \infty} n^{-1} L_n \geq 2 \| \mu \|$, a.s. On the other hand, for any ${\varepsilon}>0$, we have from  that $$\begin{aligned} \max_{0 \leq k \leq n} (S_k \cdot {{\mathbf{e}}}_\theta ) & \leq \max_{0 \leq k \leq N_{\varepsilon}} (S_k \cdot {{\mathbf{e}}}_\theta ) + \max_{N_{\varepsilon}\leq k \leq n} (S_k \cdot {{\mathbf{e}}}_\theta ) \\ & \leq \max_{0 \leq k \leq N_{\varepsilon}} \| S_k \| + \max_{0 \leq k \leq n} \left( k (\mu \cdot {{\mathbf{e}}}_\theta + {\varepsilon}) \right) \\ & = \max_{0 \leq k \leq N_{\varepsilon}} \| S_k \| + n ( \mu \cdot {{\mathbf{e}}}_\theta + {\varepsilon})^+ .\end{aligned}$$ Let $A_{\varepsilon}:= \{ \theta \in [0,2\pi] : \mu \cdot {{\mathbf{e}}}_\theta > - {\varepsilon}\}$. 
Then $$\int_0^{2\pi} ( \mu \cdot {{\mathbf{e}}}_\theta + {\varepsilon})^+ {{\mathrm d}}\theta = \int_{A_{\varepsilon}} ( \mu \cdot {{\mathbf{e}}}_\theta + {\varepsilon}) {{\mathrm d}}\theta \leq \int_{A_{\varepsilon}} \mu \cdot {{\mathbf{e}}}_\theta {{\mathrm d}}\theta + 2\pi {\varepsilon}.$$ But $$\begin{aligned} \int_{A_{\varepsilon}} \mu \cdot {{\mathbf{e}}}_\theta {{\mathrm d}}\theta & = \int_{A_0} \mu \cdot {{\mathbf{e}}}_\theta {{\mathrm d}}\theta + \int_{A_{\varepsilon}\setminus A_0} \mu \cdot {{\mathbf{e}}}_\theta {{\mathrm d}}\theta \\ & \leq \int_0^{2\pi} ( \mu \cdot {{\mathbf{e}}}_\theta )^+ {{\mathrm d}}\theta + \| \mu \| | A_{\varepsilon}\setminus A_0 | .\end{aligned}$$ Hence, from  we obtain $$L_n \leq 2 \pi \max_{0 \leq k \leq N_{\varepsilon}} \| S_k \| + n \int_0^{2\pi} ( \mu \cdot {{\mathbf{e}}}_\theta )^+ {{\mathrm d}}\theta + 2 \pi n {\varepsilon}+ n \| \mu \| | A_{\varepsilon}\setminus A_0 |.$$ Since ${{\mathbb P}}( N_{\varepsilon}< \infty ) = 1$, it follows from Cauchy’s formula for ${\mathop \mathrm{hull}}\{ {{\mathbf{0}}}, \mu \}$ that, a.s., $$\limsup_{n \to \infty} n^{-1} L_n \leq 2 \| \mu \| + 2 \pi {\varepsilon}+ \| \mu \| | A_{\varepsilon}\setminus A_0 | .$$ Since ${\varepsilon}>0$ was arbitrary, and $| A_{\varepsilon}\setminus A_0 | \to 0$ as ${\varepsilon}\to 0$, we get $\limsup_{n \to \infty} n^{-1} L_n \leq 2 \| \mu \|$, a.s. Thus the almost sure convergence statement is established. Moreover, from , $$\begin{aligned} L_n & \leq \int_0^{2\pi} \max_{0 \leq k \leq n} \| S_k \| {{\mathrm d}}\theta \\ & \leq 2 \pi \max_{0 \leq k \leq n} \sum_{j=1}^k \| Z_j \| \\ & \leq 2 \pi \sum_{j=1}^n \| Z_j \| .\end{aligned}$$ The strong law shows that, a.s., $n^{-1} \sum_{j=1}^n \| Z_j \| \to \operatorname{\mathbb{E}}\| Z \| < \infty$, while $\operatorname{\mathbb{E}}( n^{-1} \sum_{j=1}^n \| Z_j \|) = \operatorname{\mathbb{E}}\| Z \|$; hence Pratt’s lemma [@gut p. 221] implies that $n^{-1} L_n \to 2 \| \mu \|$ in $L^1$. 
Finally, suppose that $\operatorname{\mathbb{E}}\| Z \| = \infty$. From , it suffices to show that $$\limsup_{n \to\infty} n^{-1} \| S_n \| =\infty, \text{ a.s.}$$ To this end we follow [@gut p. 297]. First (see e.g. [@gut p. 75]) $\operatorname{\mathbb{E}}\| Z \| = \infty$ implies that for any $c >0$, we have $\sum_{n =1}^\infty {{\mathbb P}}( \| Z_n \| \geq c n ) = \infty$, which, by the Borel–Cantelli lemma, implies that ${{\mathbb P}}( \| Z_n \| \geq c n \text{ i.o.} ) =1$. But $\| Z_n \| \leq \| S_n \| + \| S_{n-1} \|$, so it follows that ${{\mathbb P}}( \| S_n \| \geq cn/2 \text{ i.o.} ) = 1$. In other words, $\limsup_{n \to \infty} n^{-1} \| S_n \| \geq c/2$, a.s., and, since $c>0$ was arbitrary, we get the result. Since $\| S_n \| \leq D_n \leq L_n /2$ we can apply the strong law for $S_n$, which implies that $n^{-1} \| S_n \| \to \| \mu \|$, and Theorem \[t:ss\_lln\], to deduce that $n^{-1} D_n \to \| \mu \|$, a.s. Since $n^{-1} D_n \leq n^{-1} L_n /2$ we may again apply Pratt’s lemma [@gut p. 221] to deduce the $L^1$ convergence. Finally, if $\operatorname{\mathbb{E}}\| Z \| = \infty$ we use the bound $D_n \geq L_n / \pi$ and the final statement in Theorem \[t:ss\_lln\] to deduce that $\limsup_{n \to \infty} n^{-1} D_n = \infty$, a.s. A zero-one law for convex hulls {#sec:zero-one} =============================== A key ingredient in the proof of Theorem \[thm:shape\] is a *zero-one law* (Theorem \[thm:zero-one\] below). Before we state the result, we need some notation. Define $\sigma$-algebras ${{\mathcal F}}_0 := \{ \emptyset, \Omega \}$ and ${{\mathcal F}}_n := \sigma (Z_1, \ldots, Z_n)$ for $n \geq 1$; also set ${{\mathcal F}}_\infty := \sigma ( \cup_{n \geq 0} {{\mathcal F}}_n )$. Let $\rho_d$ denote the Euclidean metric on ${{\mathbb R}}^d$, and for $A \subseteq {{\mathbb R}}^d$ and ${{\mathbf{x}}}\in {{\mathbb R}}^d$, let $\rho_d ( {{\mathbf{x}}}, A) := \inf_{{{\mathbf{y}}}\in A} \rho_d ( {{\mathbf{x}}}, {{\mathbf{y}}})$. 
Let ${{\mathcal K}}$ denote the set of compact convex subsets of ${{\mathbb R}}^2$ containing the origin, endowed with the Hausdorff metric $\rho_H$ defined for $K_1, K_2 \in {{\mathcal K}}$ by $$\label{eq:rhoH} \rho_H ( K_1, K_2) = \inf \{ {\varepsilon}\geq 0 : K_1 \subseteq K_2^{\varepsilon}\text{ and } K_2 \subseteq K_1^{\varepsilon}\} ,$$ where $K^{\varepsilon}:= \{ {{\mathbf{x}}}\in {{\mathbb R}}^2 : \rho_2 ( {{\mathbf{x}}}, K) \leq {\varepsilon}\}$. The metric $\rho_H$ generates the associated Borel $\sigma$-algebra ${{\mathcal B}}( {{\mathcal K}})$. Since the function $({{\mathbf{x}}}_0, {{\mathbf{x}}}_1, \ldots, {{\mathbf{x}}}_n ) \mapsto {\mathop \mathrm{hull}}\{ {{\mathbf{x}}}_0, {{\mathbf{x}}}_1, \ldots, {{\mathbf{x}}}_n \}$ (with ${{\mathbf{x}}}_0:={{\mathbf{0}}}$) is continuous from $({{\mathbb R}}^{2(n+1)},\rho_{2(n+1)})$ to $({{\mathcal K}},\rho_H)$, it is measurable from $({{\mathbb R}}^{2(n+1)}, {{\mathcal B}}({{\mathbb R}}^{2(n+1)}) )$ to $({{\mathcal K}}, {{\mathcal B}}({{\mathcal K}}))$; thus ${{\mathcal H}}_n$ is a ${{\mathcal K}}$-valued random variable, and ${{\mathcal H}}_n$ is ${{\mathcal F}}_n$-measurable. For $n \geq 0$, set ${{\mathcal T}}_n := \sigma ( {{\mathcal H}}_n , {{\mathcal H}}_{n+1}, \ldots )$ and define ${{\mathcal T}}:= \cap_{n \geq 0} {{\mathcal T}}_n$. Also, for $n \geq 0$ define $$r_n := \inf \{ \| {{\mathbf{x}}}\| : {{\mathbf{x}}}\in {{\mathbb R}}^2 \setminus {{\mathcal H}}_n \}.$$ Note that $r_n$ is non-decreasing. Here is the zero-one law. \[thm:zero-one\] Suppose that $r_n \to \infty$ a.s. Then if $A \in {{\mathcal T}}$, ${{\mathbb P}}(A) \in \{0,1\}$. Next we give a sufficient condition for $r_n \to \infty$. Recall [@dur p. 190] that $S_n$ is *recurrent* if there is a non-empty set ${{\mathcal R}}$ of points ${{\mathbf{x}}}\in {{\mathbb R}}^2$ (the recurrent values) such that, for any ${\varepsilon}>0$, $\| S_n - {{\mathbf{x}}}\| < {\varepsilon}$ i.o., a.s. 
\[prop:recurrence\] If $S_n$ is genuinely 2-dimensional and recurrent, then $r_n \to \infty$ a.s. One may also have $r_n \to \infty$ a.s. in the case of a transient walk, provided it visits all angles. However, $\lim_{n \to \infty} r_n < \infty$ a.s. may occur if the walk has a limiting direction, such as if there is a finite non-zero drift. Let $B({{\mathbf{x}}};r)$ denote the closed Euclidean ball centred at ${{\mathbf{x}}}\in {{\mathbb R}}^2$ with radius $r$. Since $S_n$ is recurrent, the set ${{\mathcal R}}$ of recurrent values is a closed subgroup of ${{\mathbb R}}^2$ and coincides with the set of *possible values* for the walk: see [@dur p. 190]. Since $S_n$ is genuinely 2-dimensional, it follows from e.g. Theorem 21.2 of [@br p. 225] that ${{\mathcal R}}$ contains a further closed subgroup ${{\mathcal R}}'$ of the form $H {{\mathbb Z}}^2$ where $H$ is a non-singular 2 by 2 matrix. Hence there exists $h >0$ such that for every ${{\mathbf{x}}}\in {{\mathbb R}}^2$ there exists ${{\mathbf{y}}}\in {{\mathcal R}}'$ with $\| {{\mathbf{x}}}- {{\mathbf{y}}}\| < h$. In particular, for any ${{\mathbf{x}}}\in {{\mathbb R}}^2$, ${{\mathbb P}}(S_n \in B({{\mathbf{x}}};h) {\ \text{i.o.}})=1$. Fix $r > h$, and consider $4$ discs, $D_1,D_2,D_3,D_4$, each of radius $h$, centred at $(\pm 2r, \pm 2r )$. Define $T_r$ to be the first time at which the walk has visited all $4$ discs, i.e., $$T_r := \min\{ n \geq 0 : \exists\ i_1,i_2,i_3,i_4 \in [0, n] \text{ with } S_{i_j}\in D_j \text{ for } j =1,2,3,4 \}.$$ The first paragraph of this proof shows that $T_r < \infty$ a.s. By construction, for $n \geq T_r$ we have that ${{\mathcal H}}_n$ contains the square $[-r,r]^2$, and so $n \geq T_r$ implies $r_n \geq r$. Hence, $${{\mathbb P}}\left( \liminf_{m \to \infty} r_m \geq r \right) \geq {{\mathbb P}}( T_r \leq n) \to 1 ,$$ as $n \to \infty$, and so $\liminf_{n \to \infty} r_n \geq r$, a.s. Since $r > h$ was arbitrary, the result follows. 
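Since the hulls ${{\mathcal H}}_n$ are nested, $r_n$ is non-decreasing, as used in the proof above. For illustration only (the helper names are ad hoc, not from the paper), $r_n$ can be computed for a simple random walk as the minimum distance from the origin to the supporting lines of the hull's edges, with the convention $r_n = 0$ when the origin is not interior:

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def r_at_origin(hull):
    """Distance from the origin to the complement of the hull
    (0 if the origin is not in the interior)."""
    if len(hull) < 3:
        return 0.0
    r = float('inf')
    for i in range(len(hull)):
        (x1, y1), (x2, y2) = hull[i], hull[(i+1) % len(hull)]
        d = (x1*y2 - x2*y1) / math.hypot(x2 - x1, y2 - y1)
        if d <= 0:           # origin on or outside this edge's line
            return 0.0
        r = min(r, d)
    return r

random.seed(7)
S, walk, rs = (0, 0), [(0, 0)], []
for step in range(1, 4001):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    S = (S[0] + dx, S[1] + dy)
    walk.append(S)
    if step % 100 == 0:      # record r_n at a few checkpoints
        rs.append(r_at_origin(convex_hull(walk)))
```

Nested hulls force the recorded sequence to be non-decreasing, in line with the remark above; for the recurrent planar walk it drifts slowly upwards.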
The first step in the proof of Theorem \[thm:zero-one\] is the following result, which uses the fact that $r_n \to \infty$ to show that any initial segment of the trajectory is eventually contained in the interior of the convex hull, uniformly over permutations of the initial increments. \[lem:recurrence\] Suppose that $r_n \to \infty$ a.s. Let $k \in {{\mathbb N}}$. Then there exists a random variable $N_k$ with ${{\mathbb P}}(k < N_k < \infty) =1$ such that (i) $N_k$ is invariant under permutations of $Z_1, \ldots, Z_k$, and (ii) ${{\mathcal H}}_n = {\mathop \mathrm{hull}}\{ S_{k+1}, \ldots, S_n \}$ for all $n \geq N_k$. Fix $k \in {{\mathbb N}}$. Let $R_k := \sum_{i=1}^k \|Z_i\|$ and define $N_k := \min \{ n > k : r_n > R_k \}$. Note that since $r_n$ is non-decreasing, $n \geq N_k$ implies $r_n > R_k$. Since $R_k < \infty$ a.s. and $r_n \to \infty$ a.s., we have $N_k < \infty$ a.s. Observe that if $r_n > R_k$ for $n > k$, then $S_0 , S_1, \ldots, S_k$ are all contained in the interior of ${{\mathcal H}}_n$, so that ${{\mathcal H}}_n = {{\mathcal H}}_{n,k} := {\mathop \mathrm{hull}}\{ S_{k+1}, \ldots , S_n \}$. So statement (ii) holds. Moreover, if $r_{n,k} := \inf \{ \| {{\mathbf{x}}}\| : {{\mathbf{x}}}\in {{\mathbb R}}^2 \setminus {{\mathcal H}}_{n,k} \}$ we have that $\{ r_n > R_k \} = \{ r_{n,k} > R_k \}$. But the events $\{ r_{n,k} > R_k \}$, $n > k$, which determine $N_k$, depend only on $R_k$ and $S_{k+1}, S_{k+2}, \ldots$, and so statement (i) holds. Heuristically, Theorem \[thm:zero-one\] is true since any $A \in {{\mathcal T}}$ is determined by ${{\mathcal H}}_{N_k}, {{\mathcal H}}_{N_k+1}, \ldots$, and Lemma \[lem:recurrence\] shows that this sequence is invariant under permutations of $Z_1, \ldots, Z_k$, as required for the Hewitt–Savage zero-one law. The formal proof is as follows. We adapt one of the standard proofs of the Hewitt–Savage zero-one law; see e.g. [@dur pp. 180–181]. Let $A \in {{\mathcal T}}$ and fix ${\varepsilon}>0$.

Recall a fact from measure theory: if ${{\mathcal A}}$ is an algebra and $A \in \sigma ({{\mathcal A}})$, then we can find $A' \in {{\mathcal A}}$ such that ${{\mathbb P}}( A {\mathop\triangle}A' ) < {\varepsilon}$ (see e.g. [@bill p. 179]). Applied to the algebra $\cup_{n \geq 0} {{\mathcal F}}_n$ which generates ${{\mathcal F}}_\infty \supseteq {{\mathcal T}}$, this result implies that we can find $k \geq 0$ and $A_k \in {{\mathcal F}}_k$ such that ${{\mathbb P}}( A {\mathop\triangle}A_k ) < {\varepsilon}$. Fix this $k$, and fix $n$ such that ${{\mathbb P}}( N_{2k} > n ) < {\varepsilon}$, where $N_{2k}$ is as given in Lemma \[lem:recurrence\]. Applied to the algebra ${{\mathcal A}}_n := \cup_{m \geq 0} \sigma ( {{\mathcal H}}_n, {{\mathcal H}}_{n+1}, \ldots, {{\mathcal H}}_{n+m} )$, which has $\sigma ({{\mathcal A}}_n) \supseteq {{\mathcal T}}_n \supseteq {{\mathcal T}}$, the same measure-theoretic result shows that we can find $E_n \in {{\mathcal A}}_n$ such that ${{\mathbb P}}( A {\mathop\triangle}E_n ) < {\varepsilon}$. Now $A_k \in {{\mathcal F}}_k$ can be expressed as $A_k = \{ Z_1 \in C_{k,1}, \ldots, Z_k \in C_{k,k} \}$ for Borel sets $C_{k,1}, \ldots, C_{k,k}$. Set $A'_k := \{ Z_{k+1} \in C_{k,1}, \ldots, Z_{2k} \in C_{k,k} \}$; since the $Z_i$ are i.i.d., ${{\mathbb P}}(A'_k) = {{\mathbb P}}(A_k)$, and $A_k$ and $A_k'$ are independent. We claim that $$\label{eqn:zeroone2} {{\mathbb P}}( ( A'_k {\mathop\triangle}E_n ) \cap \{ N_{2k} \leq n \} ) = {{\mathbb P}}( ( A_k {\mathop\triangle}E_n ) \cap \{ N_{2k} \leq n \} ) \leq 2{\varepsilon}.$$ To see the equality in , observe that Lemma \[lem:recurrence\] shows that $E_n \cap \{ N_{2k} \leq n \}$ is invariant under permutations of $Z_1, \ldots, Z_{2k}$, and the $Z_i$ are i.i.d. 
For the inequality in , we use the fact that ${{\mathbb P}}( A {\mathop\triangle}B ) \leq {{\mathbb P}}( A {\mathop\triangle}C ) + {{\mathbb P}}( B {\mathop\triangle}C)$ to get $$\begin{aligned} {{\mathbb P}}( ( A_k {\mathop\triangle}E_n ) \cap \{ N_{2k} \leq n \} ) & \leq {{\mathbb P}}( A_k {\mathop\triangle}E_n ) \\ & \leq {{\mathbb P}}(A_k {\mathop\triangle}A ) + {{\mathbb P}}(E_n {\mathop\triangle}A ) \leq 2 {\varepsilon}.\end{aligned}$$ Hence the claim  is verified. Since ${{\mathbb P}}( (A {\mathop\triangle}B) \cap D ) \leq {{\mathbb P}}( (A {\mathop\triangle}C) \cap D ) + {{\mathbb P}}( B {\mathop\triangle}C )$, we also get that $$\begin{aligned} {{\mathbb P}}( ( A {\mathop\triangle}A_k' ) \cap \{ N_{2k} \leq n \} ) \leq {{\mathbb P}}( ( A'_k {\mathop\triangle}E_n ) \cap \{ N_{2k} \leq n \} ) + {{\mathbb P}}( A {\mathop\triangle}E_n ) \leq 3{\varepsilon},\end{aligned}$$ by . Hence $$\begin{aligned} {{\mathbb P}}( A {\mathop\triangle}A'_k) & \leq {{\mathbb P}}( N_{2k} > n) + {{\mathbb P}}( (A {\mathop\triangle}A_k' ) \cap \{ N_{2k} \leq n \} ) \leq 4{\varepsilon}.\end{aligned}$$ The final step of the proof is a variation on the standard argument. First note that $$\begin{aligned} |{{\mathbb P}}(A)^2-{{\mathbb P}}(A)| \leq |{{\mathbb P}}(A)^2 - {{\mathbb P}}(A_k \cap A'_k)| + |{{\mathbb P}}(A_k \cap A'_k) - {{\mathbb P}}(A)|.
\label{eqn:zeroone1}\end{aligned}$$ For the first term on the right-hand side of , we use the fact that $A_k$ and $A_k'$ are independent with ${{\mathbb P}}(A_k)= {{\mathbb P}}(A'_k)$, along with the property of the symmetric difference operator that $|{{\mathbb P}}(A)-{{\mathbb P}}(B)|\leq {{\mathbb P}}(A{\mathop\triangle}B)$, to get $$\begin{aligned} |{{\mathbb P}}(A)^2-{{\mathbb P}}(A_k \cap A'_k)| & = |{{\mathbb P}}(A)^2-{{\mathbb P}}(A_k)^2|\\ & \leq |{{\mathbb P}}(A)+{{\mathbb P}}(A_k)||{{\mathbb P}}(A)-{{\mathbb P}}(A_k)|\\ &\leq 2 {{\mathbb P}}(A {\mathop\triangle}A_k) \leq 2{\varepsilon}.\end{aligned}$$ Now considering the second term on the right-hand side of  and using the fact that ${{\mathbb P}}(A {\mathop\triangle}(B \cap C)) \leq {{\mathbb P}}(A {\mathop\triangle}B) + {{\mathbb P}}(A {\mathop\triangle}C)$, we have $$\begin{aligned} |{{\mathbb P}}(A_k \cap A'_k) - {{\mathbb P}}(A)| & \leq {{\mathbb P}}(A {\mathop\triangle}(A_k \cap A'_k))\\ &\leq {{\mathbb P}}(A {\mathop\triangle}A_k) + {{\mathbb P}}( A {\mathop\triangle}A'_k) \leq 5 {\varepsilon}.\end{aligned}$$ Combining these two bounds, we obtain from  that $|{{\mathbb P}}(A)^2-{{\mathbb P}}(A)| \leq 7 {\varepsilon}$. Since ${\varepsilon}>0$ was arbitrary, we get the result. The strategy of the proof of Theorem \[thm:shape\], carried out in the remainder of this section, is as follows. We use Donsker’s theorem and the mapping theorem to show that $D_n^{-1} {{\mathcal H}}_n$ converges weakly to the convex hull of an appropriate Brownian motion, scaled to have unit diameter (Lemma \[lem:shape-weak\]). This limiting set has positive probability of being an arbitrarily good approximation to any given unit-diameter convex compact set $K$. An application of the zero-one law (Theorem \[thm:zero-one\]) then completes the proof. For $K \in {{\mathcal K}}$ let ${{\mathcal D}}( K) := \operatorname*{diam}K$. 
The next result shows that the map $K \mapsto {{\mathcal D}}(K)$ is continuous from $({{\mathcal K}},\rho_H)$ to $({{\mathbb R}}_+,\rho_1)$. \[lem:D-cont\] For $K_1, K_2 \in {{\mathcal K}}$, $| {{\mathcal D}}( K_1) - {{\mathcal D}}(K_2 ) | \leq 2 \rho_H (K_1, K_2 )$. Let $\rho_H ( K_1, K_2 ) = r$. From  we have that, for any $s > r$ and any ${{\mathbf{x}}}_1, {{\mathbf{x}}}_2 \in K_1$, there exist ${{\mathbf{y}}}_1, {{\mathbf{y}}}_2 \in K_2$ such that $\| {{\mathbf{x}}}_i - {{\mathbf{y}}}_i \| \leq s$. Then, $$\| {{\mathbf{x}}}_1 - {{\mathbf{x}}}_2 \| \leq \| {{\mathbf{x}}}_1 - {{\mathbf{y}}}_1 \| + \| {{\mathbf{y}}}_1 - {{\mathbf{y}}}_2 \| + \| {{\mathbf{y}}}_2 - {{\mathbf{x}}}_2 \| \leq 2 s + {{\mathcal D}}( K_2 ) .$$ Hence ${{\mathcal D}}(K_1) \leq 2s + {{\mathcal D}}( K_2)$, and since $s >r$ was arbitrary we get ${{\mathcal D}}( K_1) - {{\mathcal D}}( K_2) \leq 2r$. A symmetric argument gives ${{\mathcal D}}( K_2) - {{\mathcal D}}( K_1) \leq 2r$. For $K \in {{\mathcal K}}$ and ${{\mathbf{x}}}\in {{\mathbb S}}:= \{ {{\mathbf{y}}}\in {{\mathbb R}}^2 : \| {{\mathbf{y}}}\| = 1 \}$, define $h_K ({{\mathbf{x}}}) := \sup_{{{\mathbf{y}}}\in K} ({{\mathbf{y}}}\cdot {{\mathbf{x}}})$. Equivalent to  for $K_1, K_2 \in {{\mathcal K}}$ is the formula [@gruber p. 84] $$\label{eq:rhoH2} \rho_H ( K_1, K_2 ) = \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} | h_{K_1} ({{\mathbf{x}}}) - h_{K_2} ({{\mathbf{x}}}) | .$$ Let ${{\mathcal K}}^\star := \{ K \in {{\mathcal K}}: {{\mathcal D}}( K) > 0 \} = {{\mathcal K}}\setminus \{ \{ {{\mathbf{0}}}\} \}$. \[lem:scale-cont\] Suppose that $K_1, K_2 \in {{\mathcal K}}^\star$. Then $$\label{eq:scale-cont} \rho_H ( K_1 / {{\mathcal D}}(K_1) , K_2 / {{\mathcal D}}(K_2 )) \leq \frac{3 \rho_H (K_1, K_2)}{{{\mathcal D}}(K_1 ) } .$$ In particular, the map $K \mapsto K/{{\mathcal D}}(K)$ is continuous from $({{\mathcal K}}^\star ,\rho_H)$ to $({{\mathcal K}}^\star ,\rho_H)$.
We first claim that for $K_1, K_2 \in {{\mathcal K}}$ and $\alpha_1, \alpha_2 >0$, $$\label{eq:rhoH-scale} \rho_H ( \alpha_1 K_1 , \alpha_2 K_2 ) \leq \alpha_1 \rho_H ( K_1, K_2 ) + | \alpha_1 - \alpha_2 | {{\mathcal D}}( K_2 ) .$$ Suppose that $K_1, K_2 \in {{\mathcal K}}^\star$. Applying  with $\alpha_i = 1/{{\mathcal D}}(K_i)$, we get $$\begin{aligned} \rho_H ( K_1 / {{\mathcal D}}(K_1) , K_2 / {{\mathcal D}}(K_2 )) & \leq \frac{\rho_H ( K_1, K_2)}{{{\mathcal D}}(K_1)} + \frac{| {{\mathcal D}}( K_1 ) - {{\mathcal D}}(K_2)|}{{{\mathcal D}}( K_1 ) } ,\end{aligned}$$ from which  follows by Lemma \[lem:D-cont\]. This gives the desired continuity. It remains to verify the claim . From , with the observation that, for $\alpha >0$, $h_{\alpha K} ({{\mathbf{x}}}) = \alpha h_K ({{\mathbf{x}}})$, it follows that $$\begin{aligned} \rho_H ( \alpha_1 K_1 , \alpha_2 K_2 ) & = \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} | \alpha_1 h_{K_1} ({{\mathbf{x}}}) - \alpha_1 h_{K_2} ({{\mathbf{x}}}) + (\alpha_1 - \alpha_2) h_{K_2} ({{\mathbf{x}}}) |\\ & \leq \alpha_1 \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} | h_{K_1} ({{\mathbf{x}}}) - h_{K_2} ({{\mathbf{x}}}) | + | \alpha_1 - \alpha_2 | \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} h_{K_2} ({{\mathbf{x}}}) ,\end{aligned}$$ from which the claim  follows. Suppose that $\Sigma := \operatorname{\mathbb{E}}( Z Z^{{\scalebox{0.6}{$\top$}}})$ is positive definite. Let $(b(t), t \geq 0)$ be standard Brownian motion in ${{\mathbb R}}^2$. Let $h_1 := {\mathop \mathrm{hull}}b [0,1]$, the convex hull of Brownian motion run for unit time. Let $\Sigma^{1/2}$ denote the (unique) positive-definite symmetric matrix such that $\Sigma^{1/2} \Sigma^{1/2} = \Sigma$. The map ${{\mathbf{x}}}\mapsto \Sigma^{1/2} {{\mathbf{x}}}$ is an affine transformation of ${{\mathbb R}}^2$, such that $\Sigma^{1/2} b$ is Brownian motion with covariance matrix $\Sigma$, and $\Sigma^{1/2} h_1 = {\mathop \mathrm{hull}}\Sigma^{1/2} b [0,1]$ is the corresponding convex hull. 
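The support-function formula for $\rho_H$ and the two continuity estimates above (Lemma \[lem:D-cont\] and Lemma \[lem:scale-cont\]) are easy to sanity-check numerically. The following sketch (our own illustration; it plays no role in the proofs) represents each convex body by a vertex list and evaluates the supremum over directions on a fine grid:

```python
import math

def support(K, th):
    # support function h_K evaluated in direction theta
    c, s = math.cos(th), math.sin(th)
    return max(x*c + y*s for (x, y) in K)

def hausdorff(K1, K2, m=20000):
    # rho_H(K1, K2) = sup_theta |h_{K1} - h_{K2}|, approximated on a grid
    return max(abs(support(K1, 2*math.pi*j/m) - support(K2, 2*math.pi*j/m))
               for j in range(m))

def diam(K):
    return max(math.dist(p, q) for p in K for q in K)

# two convex polygons, each given by its vertices and containing the origin
K1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
K2 = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]

rho = hausdorff(K1, K2)
d1, d2 = diam(K1), diam(K2)
K1s = [(x/d1, y/d1) for (x, y) in K1]     # K1 / D(K1)
K2s = [(x/d2, y/d2) for (x, y) in K2]     # K2 / D(K2)
print(abs(d1 - d2), 2*rho)                # Lemma D-cont: first <= second
print(hausdorff(K1s, K2s), 3*rho/d1)      # scaling bound: first <= second
```

Both inequalities hold with ample slack for these polygons, as they must.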
\[lem:shape-weak\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, $\mu = {{\mathbf{0}}}$, and $\Sigma$ is positive definite. Then $$D_n^{-1}{{\mathcal H}}_n \Rightarrow \frac{\Sigma^{1/2}h_1}{{{\mathcal D}}(\Sigma^{1/2}h_1)},$$ in the sense of weak convergence on $({{\mathcal K}}, \rho_H)$. The convergence $n^{-1/2} {{\mathcal H}}_n \Rightarrow \Sigma^{1/2} h_1$ is given in Theorem 2.5 of [@wx2]. Since (by Lemma \[lem:scale-cont\]) $K \mapsto K/{{\mathcal D}}(K)$ is continuous on ${{\mathcal K}}^\star$, and ${{\mathbb P}}( \Sigma^{1/2} h_1 \in {{\mathcal K}}^\star ) = 1$, we may apply the mapping theorem [@billc p. 21] to deduce the result. Fix $K \in {{\mathcal K}}$ with ${{\mathcal D}}( K) =1$. We claim that, for any ${\varepsilon}>0$, $$\label{shape-claim} {{\mathbb P}}\left( \liminf_{n \to \infty} \rho_H \left(D_n^{-1}{{\mathcal H}}_n,K\right) \leq {\varepsilon}\right) >0.$$ Under the conditions of the theorem, $S_n$ is genuinely 2-dimensional and recurrent [@dur p. 195], and so, by Proposition \[prop:recurrence\], $r_n \to \infty$ a.s. Since the event in  is in ${{\mathcal T}}$, the zero-one law (Theorem \[thm:zero-one\]) shows that the probability in  must be equal to 1. Since ${\varepsilon}>0$ was arbitrary, the statement of the theorem follows. Thus it remains to prove the claim . 
To this end, observe that, for any ${\varepsilon}>0$, $$\begin{aligned} {{\mathbb P}}\left( \liminf_{n \to \infty} \rho_H \left(D_n^{-1}{{\mathcal H}}_n,K\right) \leq {\varepsilon}\right) & \geq {{\mathbb P}}\left(\rho_H\left(D_n^{-1}{{\mathcal H}}_n,K\right)<{\varepsilon}{\ \text{i.o.}}\right) \\ &={{\mathbb P}}\left( \bigcap_{n=1}^\infty \bigcup_{m\geq n} \left\{\rho_H (D_m^{-1}{{\mathcal H}}_m,K)<{\varepsilon}\right\} \right)\\ &=\lim_{n\to\infty} {{\mathbb P}}\left( \bigcup_{m\geq n} \left\{ \rho_H (D_m^{-1}{{\mathcal H}}_m,K)<{\varepsilon}\right\}\right)\\ &\geq \lim_{n\to \infty} {{\mathbb P}}\left(\rho_H(D_n^{-1}{{\mathcal H}}_n,K)<{\varepsilon}\right).\end{aligned}$$ By the triangle inequality, $| \rho_H ( K , K_1 ) - \rho_H (K , K_2) | \leq \rho_H ( K_1, K_2)$, i.e., for fixed $K$, the function $K_1 \mapsto \rho_H (K, K_1)$ is continuous. Thus by Lemma \[lem:shape-weak\] and the mapping theorem $$\label{eq1} \lim_{n\to \infty} {{\mathbb P}}\left(\rho_H(D_n^{-1}{{\mathcal H}}_n,K)<{\varepsilon}\right) = {{\mathbb P}}\left( \rho_H \left( \frac{\Sigma^{1/2}h_1}{{{\mathcal D}}(\Sigma^{1/2}h_1)},K\right) < {\varepsilon}\right).$$ Let $\delta \in (0, {\varepsilon}/6)$. For convenience, set $A = \Sigma^{1/2} h_1$. First suppose that ${{\mathbf{0}}}$ is in the interior of $K$. Then, it is not hard to see that $K \subseteq A \subseteq (1+\delta ) K$ occurs with positive probability (one can force the Brownian motion to make a ‘loop’ in $( (1+\delta) K ) \setminus K$). 
On this event, we have $h_K ({{\mathbf{x}}}) \leq h_A ({{\mathbf{x}}}) \leq (1+\delta) h_K ({{\mathbf{x}}})$ for all ${{\mathbf{x}}}\in {{\mathbb S}}$, so that, by , $$\rho_H (A, K) = \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} | h_A ({{\mathbf{x}}}) - h_K ({{\mathbf{x}}}) | \leq \delta \sup_{{{\mathbf{x}}}\in {{\mathbb S}}} h_K ({{\mathbf{x}}}) \leq \delta {{\mathcal D}}(K) = \delta .$$ It follows from taking $K_1 = K$ and $K_2 = A$ in  that $$\rho_H ( A / {{\mathcal D}}(A) , K ) \leq 3 \rho_H ( A, K ) \leq 3 \delta < {\varepsilon}/2 .$$ If ${{\mathbf{0}}}$ is not in the interior of $K$, then we can find $K' \in {{\mathcal K}}$ with $K \subset K'$ such that ${{\mathbf{0}}}$ is in the interior of $K'$ and $\rho_H (K, K') < {\varepsilon}/2$. Then $$\rho_H ( A / {{\mathcal D}}(A), K) \leq \rho_H (A / {{\mathcal D}}(A), K') + \rho_H (K, K') < {\varepsilon},$$ on the event $K' \subseteq A \subseteq (1+\delta ) K'$, which has positive probability. Hence, in either case, the probability on the right-hand side of  is strictly positive, establishing . For $K \in {{\mathcal K}}$, let ${{\mathcal L}}(K)$ denote the perimeter length of $K$; then, Lemma 2.4 of [@wx2] shows that $$\label{eq:L-cont} | {{\mathcal L}}(K_1 ) - {{\mathcal L}}(K_2) | \leq 2 \pi \rho_H (K_1, K_2 ), \text{ for any } K_1, K_2 \in {{\mathcal K}}.$$ First, take $K$ to be a unit-length line segment in ${{\mathbb R}}^2$ containing ${{\mathbf{0}}}$. Theorem \[thm:shape\] shows that, for any ${\varepsilon}> 0$, $\rho_H ( D_n^{-1} {{\mathcal H}}_n , K ) < {\varepsilon}$ i.o., a.s. Hence, by , $$L_n / D_n = {{\mathcal L}}( D_n^{-1} {{\mathcal H}}_n ) \leq {{\mathcal L}}( K ) + 2 \pi {\varepsilon}, {\ \text{i.o.}},$$ and ${{\mathcal L}}(K ) = 2$. Since ${\varepsilon}>0$ was arbitrary, we get $\liminf_{n \to \infty} L_n /D_n \leq 2$, and the first inequality in  shows that this latter inequality is in fact an equality. Now take $K$ to be a unit-diameter disc in ${{\mathbb R}}^2$ containing ${{\mathbf{0}}}$. 
Again, Theorem \[thm:shape\] shows that, for any ${\varepsilon}> 0$, $\rho_H ( D_n^{-1} {{\mathcal H}}_n , K ) < {\varepsilon}$ i.o., a.s. Hence, by , $$L_n / D_n = {{\mathcal L}}( D_n^{-1} {{\mathcal H}}_n ) \geq {{\mathcal L}}( K ) - 2 \pi {\varepsilon}, {\ \text{i.o.}},$$ and since now ${{\mathcal L}}(K) = \pi$ we get $\limsup_{n \to \infty} L_n /D_n \geq \pi$, which combined with the second inequality in  completes the proof. Perimeter in the case with drift {#sec:perim-drift} ================================ Suppose that $\operatorname{\mathbb{E}}( \| Z\|^2 )< \infty$ and $\mu \neq {{\mathbf{0}}}$. We work towards the proof of Theorem \[thm:drift-mean\]. Write $X_n := S_n \cdot \hat \mu$ and $Y_n := S_n \cdot \hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$, where $\hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$ is any fixed unit vector orthogonal to $\mu$. Then $X_n$ and $Y_n$ are one-dimensional random walks with increment distributions $Z \cdot \hat \mu$ and $Z \cdot \hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$ respectively; note that $\operatorname{\mathbb{E}}( Z \cdot \hat \mu ) = \| \mu \|$, $\operatorname{\mathbb{E}}( Z \cdot \hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}) = 0$, $\operatorname{\mathbb{V}ar}( Z \cdot \hat \mu ) = {\sigma^2_{\mu}}$, and $$\begin{aligned} \operatorname{\mathbb{V}ar}( Z \cdot \hat\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}) = \operatorname{\mathbb{E}}[ ( ( Z -\mu) \cdot \hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}})^2 ] & = \operatorname{\mathbb{E}}[ \| Z - \mu \|^2 ] - \operatorname{\mathbb{E}}[ ( ( Z-\mu ) \cdot \hat \mu )^2 ] \\ & = \sigma^2 - {\sigma^2_{\mu}}= {\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}.\end{aligned}$$ The first step towards the proof of Theorem \[thm:drift-mean\] is the following result. \[lem:ui\] Suppose that $\operatorname{\mathbb{E}}( \| Z\|^2 )< \infty$ and $\mu \neq {{\mathbf{0}}}$. Then $\|S_n\| - |S_n \cdot \hat \mu|$ is uniformly integrable. 
The central limit theorem shows that $n^{-1} Y_n^2 {\overset{\textup{d}}{\longrightarrow}}{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2$ where $\zeta \sim {{\mathcal N}}(0,1)$. Also, since $\operatorname{\mathbb{E}}[ Y_n^2 ] = n {\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}$, $n^{-1} \operatorname{\mathbb{E}}( Y_n^2 ) \to {\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}= \operatorname{\mathbb{E}}( {\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2 )$. It is a fact that if $\theta, \theta_1, \theta_2, \ldots$ are ${{\mathbb R}_+}$-valued random variables with $\theta_n {\overset{\textup{d}}{\longrightarrow}}\theta$, then $\operatorname{\mathbb{E}}\theta_n \to \operatorname{\mathbb{E}}\theta < \infty$ if and only if $\theta_n$ is uniformly integrable: see [@kall Lemma 4.11]. Hence we conclude that $$\label{ui1} n^{-1} Y_n^2 \text{ is uniformly integrable}.$$ Fix ${\varepsilon}>0$. Let $\delta \in (0,\| \mu \|)$ to be chosen later. For ease of notation, write $T_n = \| S_n \| - | X_n |$. 
Then since $T_n \leq \| S_n \|$ and $| X_n | \leq \| S_n \|$, we have $$\begin{aligned} \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} {{\mathbf 1}{\{ \| S_n \| \leq \delta n \}}} \right] & \leq \delta n {{\mathbb P}}( \| S_n \| \leq \delta n ) \\ & \leq \delta n {{\mathbb P}}( | X_n | \leq \delta n ) \\ & \leq \delta n {{\mathbb P}}( | X_n - \| \mu \| n | > ( \|\mu \| - \delta ) n ) .\end{aligned}$$ Since $\operatorname{\mathbb{E}}X_n = n \| \mu\|$ and $\operatorname{\mathbb{V}ar}X_n = n {\sigma^2_{\mu}}$, Chebyshev’s inequality then yields $$\begin{aligned} \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} {{\mathbf 1}{\{ \| S_n \| \leq \delta n \}}} \right] & \leq \delta n \frac{n {\sigma^2_{\mu}}}{ ( \| \mu \| - \delta)^2 n^2} .\end{aligned}$$ It follows that, for suitable choice of $\delta$ (not depending on $M$) and any $M \in (0,\infty)$, $$\begin{aligned} \sup_n \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} {{\mathbf 1}{\{ \| S_n \| \leq \delta n \}}} \right] \leq {\varepsilon}. 
\end{aligned}$$ On the other hand, we use the fact that $$\label{Ysq} 0 \leq \| S_n \| - | X_n | = T_n = \frac{ \| S_n \|^2 - X_n^2}{\| S_n \| + | X_n |} = \frac{Y_n^2}{\| S_n \| + | X_n |} .$$ Hence $$\begin{aligned} \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} {{\mathbf 1}{\{ \| S_n \| > \delta n \}}} \right] & = \operatorname{\mathbb{E}}\left[ \tfrac{Y_n^2}{\| S_n \| + | X_n |} \mathbf{1} \left\{ \tfrac{Y_n^2}{\| S_n \| + | X_n |} > M \right\} {{\mathbf 1}{\{ \| S_n \| > \delta n \}}} \right]\\ & \leq \frac{1}{\delta n} \operatorname{\mathbb{E}}\left[ Y_n^2 {{\mathbf 1}{\{ Y_n^2 > M \delta n \}}} \right] .\end{aligned}$$ It follows that $$\sup_n \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} {{\mathbf 1}{\{ \| S_n \| > \delta n \}}} \right] \leq \frac{1}{\delta} \sup_n \operatorname{\mathbb{E}}\left[ n^{-1} Y_n^2 {{\mathbf 1}{\{ n^{-1} Y_n^2 > M \delta \}}} \right] ,$$ which, for fixed $\delta$, tends to $0$ as $M \to \infty$ by . Thus for any ${\varepsilon}>0$ we have that $\sup_n \operatorname{\mathbb{E}}\left[ T_n {{\mathbf 1}{\{ T_n > M \}}} \right] \leq {\varepsilon}$, for all $M$ sufficiently large, which completes the proof. The next result is of some independent interest, and may be known, although we could find no reference. \[lem2\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. Then $$0 \leq \| S_n \| - S_n \cdot \hat \mu \to \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2}{2 \| \mu \|}, \text{ in } L^1, \text{ as } n \to \infty,$$ for $\zeta \sim {{\mathcal N}}(0,1)$. In particular, $$0 \leq \operatorname{\mathbb{E}}\| S_n \| - \| \mu \| n = \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{2 \| \mu \|} + o(1), \text{ as } n \to \infty .$$ As above, for $x \in {{\mathbb R}}$ set $x^+ := x {{\mathbf 1}{\{ x >0\}}}$, and also set $x^- = -x {{\mathbf 1}{\{ x < 0 \}}}$. 
Then $x = x^+ - x^-$ and $|x| = x^+ + x^-$, so $x = |x| -2x^-$; thus $| X_n | - 2 X_n^- = X_n \leq | X_n |$, and $$\label{X-bounds} 0 \leq \| S_n \| - | X_n| \leq \| S_n \| - X_n = \| S_n \| - |X_n| + 2 X_n^- ;$$ in particular $\operatorname{\mathbb{E}}\| S_n \| \geq \operatorname{\mathbb{E}}X_n = \| \mu \| n$. Now, we have from  that $$\| S_n \| - |X_n | = \frac{Y_n^2}{\| S_n \| + | X_n |} = \frac{n^{-1} Y_n^2}{ n^{-1} \| S_n \| + n^{-1} | X_n |} ,$$ where $n^{-1} Y_n^2 {\overset{\textup{d}}{\longrightarrow}}{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2$ for $\zeta \sim {{\mathcal N}}(0,1)$, and, by the strong law of large numbers, both $n^{-1} \| S_n\|$ and $n^{-1} | X_n |$ tend to $\| \mu \|$ a.s. Hence $0 \leq \| S_n \| - |X_n| {\overset{\textup{d}}{\longrightarrow}}\frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2}{2 \| \mu \|}$, and by Lemma \[lem:ui\] we can conclude that $\| S_n\| - | X_n | \to \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2}{2 \| \mu \|}$ in $L^1$. Moreover, Lemma \[lem:negative-part\] shows that $X_n^- \to 0$ in $L^1$. Thus the result follows from . We can now complete the proof of Theorem \[thm:drift-mean\] and then the proof of Theorem \[thm1\]. From the Spitzer–Widom formula  and Lemma \[lem2\], we have $$\begin{aligned} \operatorname{\mathbb{E}}L_n = 2 \sum_{k=1}^n \frac{1}{k} \left( \| \mu \| k + \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{2 \| \mu \|} + o(1) \right) = 2 \|\mu \| n + \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{\| \mu \|} \log n + o (\log n) ,\end{aligned}$$ as claimed. 
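The final display rests on the harmonic-sum asymptotics $\sum_{k=1}^n k^{-1} = \log n + O(1)$. As a quick numerical check of the resulting expansion (illustrative only; the values standing in for $\| \mu \|$ and ${\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}$ below are arbitrary, and the $o(1)$ terms are ignored):

```python
import math

mu, c = 1.5, 0.8       # stand-ins for ||mu|| and sigma^2 in the perpendicular
                       # direction; arbitrary illustrative values
diffs = []
for n in (10**3, 10**5, 10**6):
    # 2 * sum_{k<=n} (1/k) * (mu*k + c/(2*mu)): the Spitzer-Widom sum with
    # E||S_k|| replaced by its two-term expansion
    exact = 2*sum((mu*k + c/(2*mu))/k for k in range(1, n + 1))
    approx = 2*mu*n + (c/mu)*math.log(n)
    diffs.append(exact - approx)

print(diffs)           # stabilises near (c/mu) * Euler-Mascheroni constant
```

The gap between the exact sum and $2\|\mu\| n + (\sigma^2_{\mu_\perp}/\|\mu\|) \log n$ stays bounded (it converges to a constant multiple of Euler's constant), consistent with the $o(\log n)$ error term claimed above.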
Theorem \[thm:drift-mean\] shows that $$\label{eq2} n^{-1/2} | \operatorname{\mathbb{E}}L_n - 2 \operatorname{\mathbb{E}}S_n \cdot \hat \mu | \to 0 .$$ Then by the triangle inequality $$n^{-1/2} | L_n - 2 S_n \cdot \hat \mu | \leq n^{-1/2} | L_n - \operatorname{\mathbb{E}}L_n - 2 (S_n - \operatorname{\mathbb{E}}S_n ) \cdot \hat \mu | + n^{-1/2} | \operatorname{\mathbb{E}}L_n - 2 \operatorname{\mathbb{E}}S_n \cdot \hat \mu | ,$$ which tends to $0$ in $L^2$ by  and . Diameter in the case with drift {#sec:diam-drift} =============================== Now we turn to the diameter $D_n$. The main aim of this section is to establish the following result, from which we will deduce Theorem \[thm2\]. \[thm:DnL2\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. Then, as $n\to\infty$, $$\label{eqn:DnL2conv} n^{-1/2}\left| D_n - \operatorname{\mathbb{E}}D_n - (S_n-\operatorname{\mathbb{E}}S_n )\cdot \hat \mu \right| \rightarrow 0, \text{ in } L^2.$$ Theorem \[thm:DnL2\] is the analogue for $D_n$ of the result  for $L_n$, established in Theorem 1.3 of [@wx1]. Our approach to proving Theorem \[thm:DnL2\] is similar in outline to that in [@wx1], where a martingale difference idea (which we explain below in the present context) was combined with Cauchy’s formula for the perimeter length. Here, the place of Cauchy’s formula is taken by the formula $$\label{diam-formula} \operatorname*{diam}A = \sup_{0 \leq \theta \leq \pi} \rho_A (\theta ),$$ where $A \subset {{\mathbb R}}^d$ is a non-empty compact set, and $\rho_A(\theta) := \sup_{{{\mathbf{x}}}\in A} ( {{\mathbf{x}}}\cdot {{\mathbf{e}}}_\theta ) - \inf_{{{\mathbf{x}}}\in A} ({{\mathbf{x}}}\cdot {{\mathbf{e}}}_\theta)$; see Lemma 6 of [@mx] for a derivation of . Before embarking on the proof of Theorem \[thm:DnL2\], we observe the following result. \[lem:diam-expec\] Suppose that $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$ and $\mu \neq {{\mathbf{0}}}$. 
There exists $C < \infty$ such that $$0 \leq \operatorname{\mathbb{E}}D_n - \| \mu \| n \leq C( 1 + \log n), \text{ for all } n \geq 1.$$ The lower bound follows from Lemma \[lem2\] and the fact that $D_n \geq \| S_n \|$. The upper bound follows from the fact that $D_n \leq L_n/2$ and the fact that, by Theorem \[thm:drift-mean\], $\operatorname{\mathbb{E}}L_n \leq 2\|\mu \| n + C(1+ \log n)$. Now we describe the martingale difference construction, which is standard. Recall that ${{\mathcal F}}_0 := \{ \emptyset, \Omega \}$ and ${{\mathcal F}}_n := \sigma(Z_1,\ldots,Z_n)$ for $n \geq 1$. Let $Z_1',Z_2',\ldots$ be an independent copy of the sequence $Z_1,Z_2,\ldots$. Fix $n \in {{\mathbb N}}$. For $i \in \{1,\ldots,n\}$, define $$S_j^{(i)} := \begin{cases} S_j &\text{if }\ j<i,\\ S_j-Z_i+Z_i' &\text{if }\ j\geq i; \end{cases}$$ then $(S_j^{(i)};0\leq j \leq n)$ is the random walk $(S_j;0\leq j \leq n)$ but with $Z_i$ ‘resampled’ and replaced by $Z_i'$. For $i \in \{1,\ldots,n\}$, define $$D_n^{(i)} := \operatorname*{diam}\{ S_0^{(i)},\ldots,S_n^{(i)}\}, \text{ and } \Delta_{n,i}:= \operatorname{\mathbb{E}}( D_n-D_n^{(i)} \mid {{\mathcal F}}_i ). \label{eqn:Dni}$$ Observe that we also have the representation $\Delta_{n,i} = \operatorname{\mathbb{E}}( D_n \mid {{\mathcal F}}_i ) - \operatorname{\mathbb{E}}( D_n \mid {{\mathcal F}}_{i-1} )$ and hence $\Delta_{n,i}$ is a martingale difference sequence, i.e., $\Delta_{n,i}$ is ${{\mathcal F}}_i$-measurable with $\operatorname{\mathbb{E}}( \Delta_{n,i} \mid {{\mathcal F}}_{i-1} ) = 0$. The utility of this construction is the following result (see e.g. Lemma 2.1 of [@wx1]). \[lemma:VarGn\] Let $n\in {{\mathbb N}}$. Then $D_n - \operatorname{\mathbb{E}}D_n = \sum_{i=1}^n \Delta_{n,i}$, and $\operatorname{\mathbb{V}ar}D_n = \sum_{i=1}^n\operatorname{\mathbb{E}}( \Delta_{n,i}^2 )$. Recall that ${\mathbf{e}_{\theta}}$ denotes the unit vector in direction $\theta$. 
For $\theta \in [0,\pi]$, define $$M_n(\theta):= \max_{0\leq j\leq n} (S_j\cdot{\mathbf{e}_{\theta}}), \text{ and } m_n(\theta):= \min_{0\leq j\leq n} (S_j\cdot {\mathbf{e}_{\theta}}) ,$$ and define $R_n(\theta) := M_n(\theta)-m_n(\theta)$. Note that since $S_0={{\mathbf{0}}}$, we have $M_n(\theta)\geq 0$ and $m_n(\theta)\leq 0$, a.s. It follows from  that $D_n = \sup_{0\leq \theta \leq \pi} R_n(\theta)$. Similarly, when the $i$th increment is resampled, $D_n^{(i)} = \sup_{0\leq \theta \leq \pi} R_n^{(i)}(\theta)$, where $R_n^{(i)}(\theta) := M_n^{(i)}(\theta)-m_n^{(i)}(\theta)$, with $$M_n^{(i)}(\theta):= \max_{0\leq j\leq n}(S_j^{(i)}\cdot{\mathbf{e}_{\theta}}), \text{ and } m_n^{(i)}(\theta):= \min_{0\leq j\leq n}(S_j^{(i)}\cdot {\mathbf{e}_{\theta}}).$$ Thus to study $\Delta_{n,i}$ as defined at , we are interested in $$\label{eq:d-difference} D_n - D_n^{(i)} = \sup_{0\leq \theta \leq \pi} R_n(\theta) - \sup_{0\leq \theta \leq \pi} R_n^{(i)}(\theta) .$$ For the remainder of this section we suppose, without loss of generality, that $\mu=\|\mu\| {{\mathbf{e}}}_{\pi/2}$ with $\|\mu\| \in (0,\infty)$. An important observation is that the diameter does not deviate far from the direction of the drift. For $\delta \in (0,\pi/2)$ and $i \in \{1,\ldots,n\}$, define the event $$A_{n,i} (\delta):= \left\{ \left| \frac{\pi}{2} - \operatorname*{arg \mkern 1mu \max}_{0\leq \theta \leq \pi} R_n(\theta) \right| < \delta \right\} \cap \left\{ \left| \frac{\pi}{2} - \operatorname*{arg \mkern 1mu \max}_{0\leq \theta \leq \pi} R^{(i)}_n(\theta) \right| < \delta \right\} .$$ \[lemma:Dthetabound\] Suppose that $\operatorname{\mathbb{E}}\|Z \|<\infty$ and $\mu = \|\mu\|{{\mathbf{e}}}_{\pi/2} \neq {{\mathbf{0}}}$. Then for any $\delta \in (0,\pi/2)$, $\lim_{n\rightarrow \infty} \min_{1 \leq i \leq n} {{\mathbb P}}(A_{n,i}(\delta))=1$. Fix $\delta \in (0,\pi/2)$. 
Note that $S_j \cdot {{\mathbf{e}}}_0$ is a random walk on ${{\mathbb R}}$ with mean increment $\operatorname{\mathbb{E}}( Z \cdot {\mathbf{e}_{0}}) = \mu \cdot {\mathbf{e}_{0}}= 0$. Hence the strong law of large numbers implies that for any ${\varepsilon}>0$, $$\max_{0 \leq j \leq n} | S_j \cdot {\mathbf{e}_{0}}| \leq {\varepsilon}n,$$ for all $n\geq N_{{\varepsilon}}$ with ${{\mathbb P}}(N_{{\varepsilon}}<\infty)=1$. Similarly, since $S_j \cdot {\mathbf{e}_{\pi/2}}$ is a random walk on ${{\mathbb R}}$ with mean increment $\| \mu \| >0$, there exists $N'$ with ${{\mathbb P}}(N'<\infty)=1$ such that $$S_j \cdot {\mathbf{e}_{\pi/2}}\geq \tfrac{1}{2} \|\mu\| j, \text{ for all } j \geq N'.$$ Let $A'_n({\varepsilon})$ denote the event $$\Bigl\{\max_{0\leq j\leq n} |S_j \cdot {\mathbf{e}_{0}}|\leq {\varepsilon}n\Bigr\} \cap \left\{ S_n \cdot {{\mathbf{e}}}_{\pi/2} \geq \tfrac{1}{2} \|\mu\| n \right\}.$$ Then if $A'_n ({\varepsilon}) $ occurs, any line segment that achieves the diameter has length at least $\frac{1}{2} \|\mu\| n$ and horizontal component at most $2{\varepsilon}n$. Thus if $\theta_n= \operatorname*{arg \mkern 1mu \max}_{0\leq \theta \leq \pi} R_n(\theta)$ we have $$|\cos \theta_n|\leq \frac{4{\varepsilon}}{\|\mu\|}, \text{ on } A'_n ({\varepsilon}).$$ Thus for ${\varepsilon}$ sufficiently small we have that $A'_n({\varepsilon})$ implies $|\theta_n - \pi/2| < \delta$. Hence $${{\mathbb P}}( |\theta_n - \pi/2| < \delta )\geq{{\mathbb P}}(A'_n({\varepsilon}))\geq {{\mathbb P}}\left(n\geq \max\{N_{\varepsilon},N'\}\right)\rightarrow {{\mathbb P}}\left(\max\{N_{\varepsilon},N'\}<\infty\right)=1 .$$ But $\theta^{(i)}_n= \operatorname*{arg \mkern 1mu \max}_{0\leq \theta \leq \pi} R^{(i)}_n(\theta)$ has the same distribution as $\theta_n$, so $$\min_{1 \leq i \leq n} {{\mathbb P}}( \{ |\theta_n - \pi/2| < \delta \} \cap \{ |\theta^{(i)}_n - \pi/2| < \delta \} ) \geq 1 - 2 {{\mathbb P}}( |\theta_n - \pi/2| \geq \delta ) ,$$ and the result follows. 
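As an informal illustration of this argument (not part of the proof), one can simulate a walk with drift $\mu = {{\mathbf{e}}}_{\pi/2}$ and locate the maximising direction of $R_n(\theta)$ on a grid; the noise level, walk length, grid size, and tolerance below are arbitrary choices.

```python
import math
import random

def argmax_direction(S, grid=720):
    # Return theta in [0, pi] maximising the projected range R_n(theta).
    best_th, best_R = 0.0, -1.0
    for k in range(grid + 1):
        th = math.pi * k / grid
        c, s = math.cos(th), math.sin(th)
        proj = [x * c + y * s for (x, y) in S]
        R = max(proj) - min(proj)
        if R > best_R:
            best_th, best_R = th, R
    return best_th

# Walk with drift mu = e_{pi/2} plus small zero-mean Gaussian noise.
rng = random.Random(0)
n = 2000
S = [(0.0, 0.0)]
for _ in range(n):
    x, y = S[-1]
    S.append((x + rng.gauss(0.0, 0.1), y + 1.0 + rng.gauss(0.0, 0.1)))

theta_n = argmax_direction(S)
# The maximising direction concentrates near the drift direction pi/2.
assert abs(theta_n - math.pi / 2) < 0.1
```

The horizontal fluctuations here are of order $\sqrt{n}$ while the vertical extent is of order $n$, so the deviation of $\theta_n$ from $\pi/2$ is far inside the stated tolerance.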
Lemma \[lemma:Dthetabound\] tells us that the key to understanding  is to understand what is happening with $R_n(\theta)$ and $R_n^{(i)} (\theta)$ for $\theta \approx \pi/2$. The next important observation is that for $\theta \in (0,\pi)$, the one-dimensional random walk $S_j \cdot {{\mathbf{e}}}_\theta$ has drift $\mu \cdot {{\mathbf{e}}}_\theta = \mu \sin \theta >0$, so, with very high probability $M_n (\theta)$ is attained somewhere near the end of the walk, and $m_n (\theta)$ somewhere near the start. To formalize this statement, and its consequence for $R_n(\theta) - R_n^{(i)}(\theta)$, define $$\begin{aligned} \bar{J}_n(\theta) & := \operatorname*{arg \mkern 1mu \max}_{0\leq j\leq n} (S_j\cdot {\mathbf{e}_{\theta}}), \text{ and } {\underaccent{\bar}{J}}_n(\theta):= \operatorname*{arg \mkern 1mu \min}_{0\leq j\leq n} (S_j\cdot {\mathbf{e}_{\theta}});\\ \bar{J}_n^{(i)}(\theta) & := \operatorname*{arg \mkern 1mu \max}_{0\leq j\leq n} (S_j^{(i)}\cdot {\mathbf{e}_{\theta}}), \text{ and } {\underaccent{\bar}{J}}_n^{(i)}(\theta):= \operatorname*{arg \mkern 1mu \min}_{0\leq j\leq n} (S_j^{(i)}\cdot {\mathbf{e}_{\theta}}). \end{aligned}$$ For $\gamma \in (0,1/2)$ (a constant that will be chosen to be suitably small later in our argument), we denote by $E_{n,i}(\gamma)$ the event that the following occur: - for all $\theta \in [\pi/4,3\pi/4]$, ${\underaccent{\bar}{J}}_n(\theta)<\gamma n$ and $\bar{J}_n(\theta)>(1-\gamma) n$; - for all $\theta \in [\pi/4,3\pi/4]$, ${\underaccent{\bar}{J}}_n^{(i)}(\theta)<\gamma n$ and $\bar{J}_n^{(i)}(\theta)>(1-\gamma) n$; note that the choice of interval $[\pi/4, 3 \pi/4]$ could be replaced by any other interval containing $\pi/2$ and bounded away from $0$ and $\pi$. Define $I_{n,\gamma}:= \{1, \ldots ,n\}\cap [\gamma n, (1-\gamma)n]$. The next result is contained in Lemma 4.1 of [@wx1]. \[lem:Eprob\] For any $\gamma \in (0,1/2)$ the following hold. 
- If $i \in I_{n,\gamma}$, then, on the event $E_{n,i} (\gamma)$, $$R_n(\theta) - R_n^{(i)}(\theta) = (Z_i - Z'_i)\cdot {\mathbf{e}_{\theta}}, \text{ for any } \theta \in [\pi/4, 3\pi/4]. \label{eqn:rangeEquation}$$ - If $\operatorname{\mathbb{E}}\| Z \| < \infty$ and $\mu\neq {{\mathbf{0}}}$ then $\lim_{n \to \infty} \min_{1\leq i \leq n} {{\mathbb P}}( E_{n,i}( \gamma) ) = 1$. In light of Lemma \[lemma:Dthetabound\], the key to estimating  is provided by the following. \[lem:diambound\] Let $\gamma \in (0,1/2)$. Then for any $\delta \in (0, \pi/4)$ and any $i\in I_{n,\gamma}$, on $E_{n,i}(\gamma)$, $$\left|\sup_{|\theta-\pi/2|\leq \delta}R_n(\theta)-\sup_{|\theta-\pi/2|\leq \delta}R_n^{(i)}(\theta)-(Z_i-Z'_i)\cdot {\mathbf{e}_{\pi/2}}\right|\leq \delta\|Z_i-Z'_i\|.$$ Before proving Lemma \[lem:diambound\], we need a simple geometrical lemma. \[lemma:RLipschitz\] For any ${{\mathbf{x}}}\in {{\mathbb R}}^2$ and $\theta_1, \theta_2 \in {{\mathbb R}}$, $$|{{\mathbf{x}}}\cdot {{\mathbf{e}}}_{\theta_1}-{{\mathbf{x}}}\cdot{{\mathbf{e}}}_{\theta_2}|\leq \|{{\mathbf{x}}}\||\theta_1-\theta_2|.$$ We have $$\begin{aligned} {{\mathbf{e}}}_{\theta_1}-{{\mathbf{e}}}_{\theta_2}&=(\cos \theta_1 - \cos \theta_2, \sin \theta_1 - \sin \theta_2 )\\ &=\left(-2\sin\left( \frac{\theta_1 - \theta_2}{2}\right)\sin\left( \frac{\theta_1 + \theta_2}{2}\right),2\sin\left( \frac{\theta_1 - \theta_2}{2}\right)\cos\left( \frac{\theta_1 + \theta_2}{2}\right)\right),\end{aligned}$$ so that $\|{{\mathbf{e}}}_{\theta_1}-{{\mathbf{e}}}_{\theta_2}\|^2 = 4 \sin^2 \left( \frac{\theta_1 - \theta_2}{2}\right)$, and hence $\|{{\mathbf{e}}}_{\theta_1}-{{\mathbf{e}}}_{\theta_2}\| = 2 \left| \sin \left( \frac{\theta_1 - \theta_2}{2}\right)\right|$. Now use the inequality $|\sin x|\leq |x|$ (valid for all $x\in {{\mathbb R}}$) to get $$\|{{\mathbf{e}}}_{\theta_1}-{{\mathbf{e}}}_{\theta_2}\|\leq |\theta_1-\theta_2|,$$ and the result follows. 
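The bound in Lemma \[lemma:RLipschitz\] is elementary and easy to check numerically; the grid of angles and the test vector in the following sketch are arbitrary choices.

```python
import math

def e(theta):
    # Unit vector in direction theta.
    return (math.cos(theta), math.sin(theta))

x = (1.5, -2.0)  # arbitrary test vector
for a in range(60):
    for b in range(60):
        t1, t2 = 0.13 * a, 0.17 * b
        d = math.dist(e(t1), e(t2))
        # ||e_{t1} - e_{t2}|| = 2|sin((t1 - t2)/2)| <= |t1 - t2|.
        assert abs(d - 2 * abs(math.sin((t1 - t2) / 2))) < 1e-9
        assert d <= abs(t1 - t2) + 1e-9
        # Hence |x . e_{t1} - x . e_{t2}| <= ||x|| |t1 - t2|.
        proj_gap = abs(x[0] * (e(t1)[0] - e(t2)[0])
                       + x[1] * (e(t1)[1] - e(t2)[1]))
        assert proj_gap <= math.hypot(*x) * abs(t1 - t2) + 1e-9
```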
We claim that with $i \in I_{n,\gamma}$, for any $\theta_1,\theta_2 \in [\pi/4,3\pi/4]$, on the event $E_{n,i}(\gamma)$, it holds that $$\label{eq:two-theta} \inf_{\theta_1\leq \theta \leq \theta_2}(Z_i-Z'_i)\cdot {\mathbf{e}_{\theta}}\leq\sup_{\theta_1\leq \theta \leq \theta_2} R_n(\theta) - \sup_{\theta_1\leq \theta \leq \theta_2} R_n^{(i)}(\theta) \leq \sup_{\theta_1\leq \theta \leq \theta_2}(Z_i-Z'_i)\cdot {\mathbf{e}_{\theta}}.$$ Given the claim , and that, as follows from Lemma \[lemma:RLipschitz\], $$\begin{aligned} \sup_{|\theta-\pi/2|\leq \delta}(Z_i-Z'_i)\cdot {\mathbf{e}_{\theta}}& \leq (Z_i-Z'_i)\cdot {\mathbf{e}_{\pi/2}}+ \delta \|Z_i-Z'_i\|, \text{ and} \\ \inf_{|\theta-\pi/2|\leq \delta}(Z_i-Z'_i)\cdot {\mathbf{e}_{\theta}}& \geq (Z_i-Z'_i)\cdot {\mathbf{e}_{\pi/2}}- \delta \|Z_i-Z'_i\|, \end{aligned}$$ the statement in the lemma follows on taking $\theta_1 = \pi/2 - \delta$ and $\theta_2 = \pi/2 + \delta$. It remains to establish the claim . First we note that for $f, g : {{\mathbb R}}\to {{\mathbb R}}$ with $\sup_{ \theta \in I} |f (\theta) | < \infty$ and $\sup_{ \theta \in I} |g (\theta) | < \infty$, $$\label{eqn:supABineq} \inf_{\theta \in I} (f(\theta)-g(\theta)) \leq \sup_{\theta \in I} f(\theta)-\sup_{\theta \in I} g(\theta)\leq \sup_{\theta \in I} (f(\theta)-g(\theta)).$$ In particular, taking $I = [\theta_1, \theta_2]$, with $\theta_1,\theta_2 \in [\pi/4,3\pi/4]$, we have $$\begin{aligned} \inf_{\theta_1 \leq \theta \leq \theta_2} \left( R_n (\theta) - R_n^{(i)} (\theta) \right) \leq \sup_{\theta_1\leq \theta \leq \theta_2} R_n(\theta) - \sup_{\theta_1\leq \theta \leq \theta_2} R_n^{(i)}(\theta) & \leq \sup_{\theta_1 \leq \theta \leq \theta_2} \left( R_n (\theta) - R_n^{(i)} (\theta) \right) ,\end{aligned}$$ and, on the event $E_{n,i}(\gamma)$, we have from  that $$R_n (\theta) - R_n^{(i)} (\theta) = (Z_i-Z'_i) \cdot {\mathbf{e}_{\theta}}, \text{ for all } \theta \in [\theta_1, \theta_2 ] ,$$ which establishes the claim .
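The elementary sandwich inequality for suprema used in this proof holds for any bounded functions on a common index set; here is a quick numerical check on a finite grid, with arbitrary choices of $f$ and $g$.

```python
import math

# Check inf_I (f - g) <= sup_I f - sup_I g <= sup_I (f - g)
# on a finite grid, for two arbitrary bounded functions.
thetas = [k * math.pi / 200 for k in range(201)]
f = [math.sin(3 * t) + 0.5 * t for t in thetas]
g = [math.cos(2 * t) - 0.1 * t for t in thetas]
diff = [fi - gi for fi, gi in zip(f, g)]
assert min(diff) <= max(f) - max(g) <= max(diff)
```

The inequality follows since $\sup f$ is attained at some $\theta^*$ with $f(\theta^*) \leq g(\theta^*) + \sup (f-g) \leq \sup g + \sup(f-g)$, and symmetrically for the lower bound.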
To obtain rough estimates when the events $A_{n,i} (\delta)$ and $E_{n,i} (\gamma)$ do not occur, we need the following bound. \[lemma:ACDeltaBound\] For any $i \in \{ 1,2,\ldots,n\}$, a.s., $$|D_n^{(i)}-D_n| \leq 2\|Z_i\|+2\|Z'_i\|.$$ Lemma 3.1 from [@wx1] states that, for any $i \in \{ 1,2,\ldots,n\}$, a.s., $$\sup_{0 \leq \theta \leq \pi} \left| R_n(\theta)- R_n^{(i)}(\theta) \right| \leq 2\|Z_i\| + 2\|Z'_i\|.$$ Now from  and  we obtain the result. Now define the event $B_{n,i} ( \gamma, \delta):= E_{n,i}( \gamma) \cap A_{n,i}(\delta)$. Let $B_{n,i}^{{\mathrm{c}}}(\gamma,\delta)$ denote the complementary event. The preceding results in this section can now be combined to obtain the following approximation lemma for $\Delta_{n,i}$ as given by . \[lem:approx\] Suppose that $\operatorname{\mathbb{E}}\|Z \|<\infty$ and $\mu \neq {{\mathbf{0}}}$. For any $\gamma \in (0,1/2)$, $\delta \in (0,\pi/4)$, and $i \in I_{n,\gamma}$, we have, a.s., $$\begin{aligned} \left| \Delta_{n,i} - (Z_i-\mu)\cdot \hat{\mu}\right| \leq&~ 3 \|Z_i\| {{\mathbb P}}( B_{n,i}^{{\mathrm{c}}}(\gamma,\delta) \mid {{\mathcal F}}_i) + 3\operatorname{\mathbb{E}}[\|Z'_i\|{{\mathbf 1}{( B_{n,i}^{{\mathrm{c}}}(\gamma,\delta) )}} \mid {{\mathcal F}}_i ]\nonumber\\ & + \delta \left(\|Z_i\|+\operatorname{\mathbb{E}}\|Z \| \right).\end{aligned}$$ First observe that, since $Z_i$ is ${{\mathcal F}}_i$-measurable and $Z_i'$ is independent of ${{\mathcal F}}_i$, $$\Delta_{n,i} - (Z_i - \mu ) \cdot \hat \mu = \operatorname{\mathbb{E}}[ D_n - D_n^{(i)} - (Z_i - Z_i' ) \cdot \hat \mu \mid {{\mathcal F}}_i ] .$$ Hence, by the triangle inequality, $$\begin{aligned} |\Delta_{n,i}-(Z_i-\mu)\cdot \hat{\mu}| &\leq\operatorname{\mathbb{E}}\left[ \left| D_n-D_n^{(i)}-(Z_i-Z'_i)\cdot \hat{\mu}\right| {{\mathbf 1}{( B_{n,i}(\gamma,\delta) )}} \mid {{\mathcal F}}_i\right]\\ &\qquad {} + {} \operatorname{\mathbb{E}}\left[ \left| D_n-D_n^{(i)}-(Z_i-Z'_i)\cdot \hat{\mu}\right| {{\mathbf 1}{( 
B^{{\mathrm{c}}}_{n,i}(\gamma,\delta) )}} \mid {{\mathcal F}}_i\right] .\end{aligned}$$ Here, by Lemma \[lemma:ACDeltaBound\], we have that $$\begin{aligned} \operatorname{\mathbb{E}}\left[\left| D_n-D_n^{(i)}-(Z_i-Z'_i)\cdot \hat{\mu}\right| {{\mathbf 1}{( B^{{\mathrm{c}}}_{n,i}(\gamma,\delta) )}} \mid {{\mathcal F}}_i\right] & \leq 3 \operatorname{\mathbb{E}}\left[ ( \| Z_i \| + \| Z_i ' \| ) {{\mathbf 1}{( B^{{\mathrm{c}}}_{n,i}(\gamma,\delta) )}} \mid {{\mathcal F}}_i\right] .\end{aligned}$$ Now, on $A_{n,i} (\delta)$ we have that $$D_n = \sup_{| \theta - \pi/2 | \leq \delta} R_n (\theta), \text{ and } D_n^{(i)} =\sup_{| \theta - \pi/2 | \leq \delta} R^{(i)}_n (\theta) ,$$ and hence, by Lemma \[lem:diambound\], on $A_{n,i} (\delta) \cap E_{n,i} (\gamma)$, $$| D_n - D_n^{(i)} - (Z_i - Z_i') \cdot \hat \mu | \leq \delta \| Z_i - Z_i' \| .$$ Hence $$\operatorname{\mathbb{E}}\left[ \left| D_n-D_n^{(i)}-(Z_i-Z'_i)\cdot \hat{\mu}\right| {{\mathbf 1}{( B_{n,i}(\gamma,\delta) )}} \mid {{\mathcal F}}_i\right] \leq \delta \operatorname{\mathbb{E}}[ \| Z_i \| + \| Z_i' \| \mid {{\mathcal F}}_i ] .$$ Combining these bounds, and using the fact that $Z_i$ is ${{\mathcal F}}_i$-measurable and $Z_i'$ is independent of ${{\mathcal F}}_i$, we obtain the result. We are now almost ready to complete the proof of Theorem \[thm:DnL2\]. To do so, we present an analogue of Lemma 6.1 from [@wx1]; we set $V_i := (Z_i-\mu)\cdot \hat{\mu}$, and $W_{n,i}:= \Delta_{n,i}- V_i$. \[lem:ExpWsqconv\] Suppose that $\operatorname{\mathbb{E}}( \|Z \|^2 ) <\infty$ and $\mu\neq{{\mathbf{0}}}$. Then $$\lim_{n\rightarrow \infty} n^{-1} \sum_{i=1}^n \operatorname{\mathbb{E}}( W_{n,i}^2 )=0 .$$ The proof is similar to that of Lemma 6.1 of [@wx1]. Fix ${\varepsilon}\in (0,1)$. Take $\gamma \in (0,1/2)$ and $\delta \in (0,\pi/4)$, to be specified later. 
Note that from Lemma \[lemma:ACDeltaBound\] we have $| W_{n,i} | \leq 3(\|Z_i\|+\operatorname{\mathbb{E}}\|Z \|)$, so that, provided $\operatorname{\mathbb{E}}( \|Z\|^2 ) < \infty$, we have $\operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq C_0$ for all $n$ and all $i$, for some constant $C_0<\infty$, depending only on the distribution of $Z$. Hence $$\frac{1}{n}\sum_{i\not\in I_{n,\gamma}}\operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 2\gamma C_0 .$$ From now on choose and fix $\gamma >0$ small enough so that $2\gamma C_0< {\varepsilon}$. Now consider $i\in I_{n,\gamma}$. For such $i$, Lemma \[lem:approx\] yields an upper bound for $|W_{n,i}|$. Note that, for any $C_1 < \infty$, since $Z_i'$ is independent of ${{\mathcal F}}_i$, $$\operatorname{\mathbb{E}}[\|Z'_i\|{{\mathbf 1}{(B_{n,i}^{{\mathrm{c}}}(\gamma,\delta))}} \mid {{\mathcal F}}_i] \leq \operatorname{\mathbb{E}}[ \|Z \|{{\mathbf 1}{\{ \|Z\| \geq C_1 \}}} ] + C_1 {{\mathbb P}}[ B_{n,i}^{{\mathrm{c}}}(\gamma,\delta ) \mid {{\mathcal F}}_i].$$ Given ${\varepsilon}\in (0,1)$ we can take $C_1 = C_1 ({\varepsilon})$ large enough such that $\operatorname{\mathbb{E}}[ \|Z \|{{\mathbf 1}{\{ \|Z\| \geq C_1 \}}} ] \leq {\varepsilon}$, by dominated convergence; for convenience we take $C_1 > 1$ and $C_1 > \operatorname{\mathbb{E}}\| Z \|$. 
Hence from Lemma \[lem:approx\] we obtain $$\begin{aligned} |W_{n,i}|&\leq 3 (\|Z_i\|+C_1){{\mathbb P}}[B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) \mid {{\mathcal F}}_i] + 3 {\varepsilon}+ \delta \left(\|Z_i\|+\operatorname{\mathbb{E}}\|Z\| \right) .\end{aligned}$$ Using the fact that ${{\mathbb P}}[B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) \mid {{\mathcal F}}_i] \leq 1$, ${\varepsilon}\leq 1$, $\delta \leq 1$, and $C_1 > 1$, $C_1 > \operatorname{\mathbb{E}}\| Z \|$, we can square both sides of the last display and collect terms to obtain $$\begin{aligned} W_{n,i}^2 \leq 27 C_1^2 ( 1 + \| Z_i \| )^2 {{\mathbb P}}[B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) \mid {{\mathcal F}}_i] + 9 {\varepsilon}+ 13 C_1^2 \delta \left(1 + \|Z_i\|\right)^2 .\end{aligned}$$ Since $\operatorname{\mathbb{E}}( \| Z \|^2 ) < \infty$, it follows that, given ${\varepsilon}$ and hence $C_1$, we can choose $\delta \in (0,\pi/4)$ sufficiently small so that $13 C_1^2 \delta \operatorname{\mathbb{E}}[ \left(1 + \|Z_i\|\right)^2 ] < {\varepsilon}$; fix such a $\delta$ from now on. Then $$\operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 27 C_1^2 \operatorname{\mathbb{E}}[ ( 1 + \| Z_i \| )^2 {{\mathbb P}}[B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) \mid {{\mathcal F}}_i] ] + 10 {\varepsilon}.$$ Here we have that, for any $C_2 > 0$, $$\operatorname{\mathbb{E}}[ ( 1 + \| Z_i \| )^2 {{\mathbb P}}[B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) \mid {{\mathcal F}}_i] ] \leq (1 + C_2 )^2 {{\mathbb P}}( B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) ) + \operatorname{\mathbb{E}}[ ( 1 + \| Z \| )^2 {{\mathbf 1}{\{ \| Z \| \geq C_2 \}}} ] ,$$ where dominated convergence shows that we may choose $C_2$ large enough so that the last term is less than ${\varepsilon}/ C_1^2$, say. 
Then, $$\operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 37 {\varepsilon}+ 27 C_1^2 (1 + C_2)^2 {{\mathbb P}}( B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) ) .$$ Finally, we see from Lemmas \[lemma:Dthetabound\] and \[lem:Eprob\] that $\max_{1 \leq i \leq n} {{\mathbb P}}( B_{n,i}^{{\mathrm{c}}}(\gamma, \delta) ) \to 0$, so that, for given ${\varepsilon}>0$ (and hence $C_1$ and $C_2$), we may choose $n_0$ such that $\max_{i \in I_{n,\gamma}} \operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 38 {\varepsilon}$ for all $n \geq n_0$. Hence $$\frac{1}{n} \sum_{i\in I_{n,\gamma}} \operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 38 {\varepsilon},$$ for all $n\geq n_0$. Combining this result with the estimate for $i\not\in I_{n,\gamma}$, we see that $$\frac{1}{n}\sum_{i=1}^n \operatorname{\mathbb{E}}( W_{n,i}^2 ) \leq 39 {\varepsilon},$$ for all $n\geq n_0$. Since ${\varepsilon}>0$ was arbitrary, the result follows. First note that $W_{n,i}$ is ${{\mathcal F}}_i$-measurable with $\operatorname{\mathbb{E}}( W_{n,i} \mid {{\mathcal F}}_{i-1} ) = \operatorname{\mathbb{E}}( \Delta_{n,i} \mid {{\mathcal F}}_{i-1} ) - \operatorname{\mathbb{E}}V_i = 0$, so that $W_{n,i}$ is a martingale difference sequence. Therefore by orthogonality, $n^{-1} \operatorname{\mathbb{E}}[ (\sum_{i=1}^n W_{n,i})^2 ] = n^{-1}\sum_{i=1}^n \operatorname{\mathbb{E}}( W_{n,i}^2 ) \rightarrow 0$ as $n\rightarrow \infty$, by Lemma \[lem:ExpWsqconv\]. In other words, $n^{-1/2}\sum_{i=1}^n W_{n,i} \rightarrow 0$ in $L^2$. But, by Lemma \[lemma:VarGn\], $$\sum_{i=1}^n W_{n,i} = \sum_{i=1}^n \Delta_{n,i} - \sum_{i=1}^n ( Z_i - \mu) \cdot \hat \mu = D_n - \operatorname{\mathbb{E}}D_n - ( S_n - \operatorname{\mathbb{E}}S_n ) \cdot \hat \mu .$$ This yields the statement in the theorem. Finally we can give the proof of Theorem \[thm2\].
Lemma \[lem:diam-expec\] shows that $$\label{eq4} n^{-1/2} | \operatorname{\mathbb{E}}D_n - \operatorname{\mathbb{E}}S_n \cdot \hat \mu | \to 0 .$$ Then by the triangle inequality $$n^{-1/2} | D_n - S_n \cdot \hat \mu | \leq n^{-1/2} | D_n - \operatorname{\mathbb{E}}D_n - (S_n - \operatorname{\mathbb{E}}S_n ) \cdot \hat \mu | + n^{-1/2} | \operatorname{\mathbb{E}}D_n - \operatorname{\mathbb{E}}S_n \cdot \hat \mu | ,$$ which tends to $0$ in $L^2$ by  and . Corollary \[cor:diam-clt\] is deduced from Theorem \[thm2\] in a very similar manner to how Theorems 1.1 and 1.2 in [@wx1] were deduced from Theorem 1.3 there, so we omit the details. Diameter in the degenerate case {#sec:diam-degen} =============================== The aim of this section is to prove Theorem \[thm3\]; thus we assume $\mu \neq {{\mathbf{0}}}$. First we state a result that will enable us to obtain the second statement in Theorem \[thm3\] from the first. \[DnUnifIntLemma\] Suppose that $\operatorname{\mathbb{E}}( \|Z\|^p ) < \infty$ for some $p>4$, $\mu \neq {{\mathbf{0}}}$, and ${\sigma^2_{\mu}}=0$. Then $(D_n-\|\mu\|n)^2$ is uniformly integrable. As in Section \[sec:perim-drift\], we write $X_n := S_n \cdot \hat \mu$ and $Y_n := S_n \cdot \hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$, where $\hat \mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}$ is any fixed unit vector orthogonal to $\mu$. Note that if ${\sigma^2_{\mu}}= 0$, then $X_n = n \| \mu \|$ is deterministic. For $i \leq j$, we have $\| S_j - S_i \|^2 = (Y_j - Y_i)^2 + (X_j- X_i)^2$, so that $$\begin{aligned} (D_n - \|\mu\|n)^2 &= \left( \max_{0\leq i \leq j \leq n} \left( (Y_j - Y_i)^2 + \|\mu\|^2 (j-i)^2\right)^{1/2} -\|\mu\|n \right) ^2\\ &\leq \left( \|\mu\|n \max_{0\leq i \leq j \leq n} \left( 1 + \frac{(Y_j-Y_i)^2}{\|\mu\|^2 n^2} \right) ^{1/2} - \|\mu\|n\right)^2. 
\end{aligned}$$ Since $(1 + y)^{1/2} \leq 1 + (y/2)$ for $y \geq 0$, and $(a-b)^2 \leq 2 (a^2 + b^2)$ for $a, b \in {{\mathbb R}}$, we obtain $$\begin{aligned} (D_n - \|\mu\|n)^2 \leq \left( \|\mu\|n \max_{0\leq i \leq j \leq n} \frac{(Y_j-Y_i)^2}{2 \|\mu\|^2 n^2} \right)^2 \leq \frac{4}{\| \mu \|^{2}} \max_{1\leq i\leq n} \frac{Y_i^4}{n^2} . \end{aligned}$$ Now, $|Y_n|$ is a non-negative submartingale, so Doob’s $L^p$ inequality [@gut p. 505] yields $$\begin{aligned} \operatorname{\mathbb{E}}\left[ \left( \max_{1\leq i\leq n} \frac{Y_i^4}{n^2} \right)^{p/4} \right] = n^{-p/2} \operatorname{\mathbb{E}}\left( \max_{1\leq i\leq n} |Y_i|^p \right) \leq C_p n^{-p/2} \operatorname{\mathbb{E}}( |Y_n|^p ), \end{aligned}$$ for any $p>1$ and some constant $C_p < \infty$. Assuming that $\operatorname{\mathbb{E}}( \| Z \|^p ) < \infty$ for $p >4$, $Y_n$ is a random walk on ${{\mathbb R}}$ whose increments have zero mean and finite $p$th moments, so, by the Marcinkiewicz–Zygmund inequality [@gut p. 151], $\operatorname{\mathbb{E}}(|Y_n|^p ) \leq C n^{p/2}$. Hence $$\sup_{n \geq 0} \operatorname{\mathbb{E}}\left[ \left( (D_n - \|\mu\|n)^2 \right)^{p/4} \right] < \infty ,$$ which, since $p/4 > 1$, establishes uniform integrability. Next we show that, under the conditions of Theorem \[thm3\], the diameter must be attained by a point ‘close to’ the start and one ‘close to’ the end of the walk. \[lem:Dattained\] Suppose that $\operatorname{\mathbb{E}}( \|Z\|^2 ) < \infty$, $\mu \neq {{\mathbf{0}}}$, and ${\sigma^2_{\mu}}=0$. Let $\beta \in (0,1)$. Then, a.s., for all but finitely many $n$, $$D_n = \max_{\substack{0\leq i \leq n^{\beta} \\ n-n^\beta \leq j \leq n}}\|S_j-S_i\| .$$ Fix $\beta\in (0,1)$. 
Since $D_n = \max_{0\leq i,j \leq n}\|S_j-S_i\|$, we have $$\begin{aligned} D_n = \max \left\{ \max_{\substack{0\leq i \leq n^{\beta} \\ n-n^\beta \leq j \leq n}}\|S_j-S_i\|, \max_{\substack{0\leq i \leq n^{\beta} \\ 0 \leq j \leq n-n^\beta}}\|S_j-S_i\|, \max_{n^{\beta}\leq i,j \leq n}\|S_j-S_i\|\right\}. \label{eqn1} \end{aligned}$$ It is clear that $$\max_{\substack{0\leq i \leq n^{\beta} \\ n-n^\beta \leq j \leq n}}\|S_j-S_i\| \geq \|S_n\| \geq | X_n | = \|\mu\|n .$$ We aim to show that the other two terms on the right-hand side of  are strictly less than $\|\mu\|n$ for all but finitely many $n$. A consequence of the law of the iterated logarithm is that, for any ${\varepsilon}>0$, a.s., for all but finitely many $n$, $\max_{0\leq i \leq n} Y_i^2 \leq n^{1+{\varepsilon}}$; see e.g. [@gut p. 384]. Take ${\varepsilon}\in (0,\beta)$. Then, $$\begin{aligned} \max_{\substack{0\leq i \leq n^{\beta} \\ 0 \leq j \leq n-n^\beta}}\|S_j-S_i\|^2 & \leq \max_{\substack{0\leq i \leq n^{\beta} \\ 0 \leq j \leq n-n^\beta}}|X_j-X_i|^2 +\max_{\substack{0\leq i \leq n^{\beta} \\ 0 \leq j \leq n-n^\beta}}|Y_j - Y_i|^2 \\ &\leq \|\mu\|^2 (n-n^\beta)^2 +\max_{0 \leq j \leq n-n^\beta } Y_j^2 +\max_{0\leq i \leq n^\beta} Y_i^2 + 2\max_{\substack{0\leq i \leq n^{\beta} \\ 0 \leq j \leq n-n^\beta}}|Y_j| |Y_i| \\ &\leq \|\mu\|^2 n^2 - 2\|\mu\|^2 n^{1+\beta} + \|\mu\|^2 n^{2\beta}+ n^{1+{\varepsilon}} ,\end{aligned}$$ for all but finitely many $n$. Since ${\varepsilon}< \beta < 1$, this last expression is strictly less than $\|\mu\|^2 n^2$ for all $n$ sufficiently large. 
Similarly, $$\begin{aligned} \max_{n^\beta \leq i,j \leq n}\|S_j-S_i\|^2 & \leq \|\mu\|^2 (n-n^\beta)^2 +\max_{n^\beta \leq j \leq n } Y_j^2 +\max_{n^\beta \leq i \leq n} Y_i^2 + 2\max_{n^\beta \leq i,j \leq n}|Y_j| |Y_i| \\ &\leq \|\mu\|^2 n^2 - 2\|\mu\|^2 n^{1+\beta} + \|\mu\|^2 n^{2\beta}+ n^{1+{\varepsilon}} ,\end{aligned}$$ for all but finitely many $n$, and, as before, this is strictly less than $\|\mu\|^2 n^2$ for all $n$ sufficiently large. Then  yields the result. The main remaining step in the proof of Theorem \[thm3\] is the following result. \[lem:Dconverge\] Suppose that $\operatorname{\mathbb{E}}( \|Z\|^p ) < \infty$ for some $p>2$, $\mu \neq {{\mathbf{0}}}$, and ${\sigma^2_{\mu}}=0$. Then, as $n \to \infty$, $D_n - \| S_n \| \to 0$, a.s. Using the fact that $\| S_n \|^2 = \| \mu \|^2 n^2 + Y_n^2$, we have that, for $j \leq n$, $$\begin{aligned} \|S_j-S_i\|^2 & = \| \mu \|^2 (j - i)^2 + (Y_j - Y_i )^2 \\ & = \| S_n \|^2 + \|\mu\|^2i^2 +\|\mu\|^2j^2 -2\|\mu\|^2ij -\|\mu\|^2n^2 + Y_i^2 +Y_j^2 - 2Y_i Y_j -Y_n^2 \\ &\leq \|S_n\|^2+ \|\mu\|^2i^2 -(Y_n-Y_j)(Y_n+Y_j)+2Y_i(Y_n-Y_j)-2Y_iY_n + Y_i^2 . \end{aligned}$$ Here we have that, for any ${\varepsilon}>0$, $\max_{0 \leq i \leq n^\beta} | Y_i Y_n | \leq n^{\frac{1+\beta}{2} + {\varepsilon}}$ and $\max_{0 \leq i \leq n^\beta} Y_i^2 \leq n^{\beta + {\varepsilon}}$ for all but finitely many $n$. For the terms involving $Y_j$, Lemma \[LILStyleLem\] shows that we may choose $\beta \in (0,1/2)$ such that, for any sufficiently small ${\varepsilon}>0$, $$\max_{n-n^\beta \leq j \leq n} |Y_n-Y_j| \leq n^{\frac{1}{2}-{\varepsilon}} , \text{ and } \max_{n-n^\beta \leq j \leq n} |Y_n-Y_j| |Y_n+Y_j| \leq n^{1-{\varepsilon}} ,$$ for all but finitely many $n$. 
With this choice of $\beta$ and sufficiently small ${\varepsilon}$, we combine these bounds to obtain $$\max_{\substack{0\leq i \leq n^{\beta} \\ n-n^\beta \leq j \leq n}} \|S_j-S_i\|^2 \leq \| S_n \|^2 + \| \mu \|^2 n^{2\beta} +n^{1-{\varepsilon}} + n^{\frac{1+\beta}{2} + {\varepsilon}} + n^{\beta + {\varepsilon}} ,$$ for all but finitely many $n$. Since $\beta \in (0,1/2)$, we may apply Lemma \[lem:Dattained\] and choose ${\varepsilon}>0$ sufficiently small to see that $D_n^2 \leq \| S_n \|^2 + n^{1-{\varepsilon}}$, for all but finitely many $n$. Hence $$\begin{aligned} D_n \leq \|S_n\| \left(1 + \|S_n\|^{-2} n^{1-{\varepsilon}} \right)^{1/2} \leq \|S_n\| \left(1+ \|\mu\|^{-2} n^{-1-{\varepsilon}} \right)^{1/2} ,\end{aligned}$$ since $\|S_n\|\geq n\|\mu\|$. Using the fact that $(1+ x)^{1/2} \leq 1 + (x/2)$ for $x \geq 0$, we get $$D_n \leq \|S_n\| \left(1+ \tfrac{1}{2} \|\mu\|^{-2} n^{-1-{\varepsilon}}\right) \leq \|S_n\| + \|\mu\|^{-1} n^{-{\varepsilon}} ,$$ for all but finitely many $n$, since, by the strong law of large numbers, $\|S_n\|\leq 2\|\mu\|n$ for all but finitely many $n$. Combined with the bound $D_n \geq \| S_n \|$, this completes the proof. Combining Lemmas \[lem:Dconverge\] and \[lem2\] with Slutsky’s theorem [@gut p. 249] and the fact that, in this case, $X_n = \| \mu \| n$, we obtain . From Lemma \[DnUnifIntLemma\] we have that, if $\operatorname{\mathbb{E}}( \|Z\|^p ) < \infty$ for $p>4$, both $D_n - \| \mu \| n$ and $(D_n-\|\mu\|n)^2$ are uniformly integrable.
Thus from  we obtain $$\begin{aligned} \lim_{n \to \infty} \operatorname{\mathbb{E}}( D_n-\|\mu\|n ) & = \operatorname{\mathbb{E}}\left[\frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}\zeta^2}{2\|\mu\|} \right] = \frac{{\sigma^2_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}}}{2\|\mu\|}, \text{ and }\\ \lim_{n \to \infty} \operatorname{\mathbb{E}}[(D_n-\|\mu\|n)^2] & = \operatorname{\mathbb{E}}\left[ \frac{\sigma_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}^4\zeta^4}{4\|\mu\|^2}\right]=\frac{3\sigma_{\mu_{{\mkern -1mu \scalebox{0.5}{$\perp$}}}}^4}{4\|\mu\|^2}.\end{aligned}$$ Using the fact that $$\begin{aligned} \operatorname{\mathbb{V}ar}D_n = \operatorname{\mathbb{V}ar}(D_n -\|\mu\|n) = \operatorname{\mathbb{E}}[(D_n - \|\mu\|n)^2]-\operatorname{\mathbb{E}}[D_n - \|\mu\|n]^2 ,\end{aligned}$$ we obtain  on letting $n \to \infty$. Auxiliary results {#sec:appendix} ================= In this appendix we present two technical results on sums of i.i.d. random variables that are needed in the body of the paper. The first is used in the proof of Lemma \[lem2\]. \[lem:negative-part\] Let $\xi, \xi_1, \xi_2, \ldots$ be i.i.d. random variables with $\operatorname{\mathbb{E}}( \xi^2 ) < \infty$ and $\operatorname{\mathbb{E}}\xi >0$. Let $X_n = \sum_{k=1}^n \xi_k$. Then $\lim_{n \to \infty} \operatorname{\mathbb{E}}X_n^- =0$. Let $\operatorname{\mathbb{E}}\xi = m >0$ and $\operatorname{\mathbb{V}ar}\xi = s^2 < \infty$. Fix ${\varepsilon}>0$. 
Note that $$\operatorname{\mathbb{E}}X_n^- = \int_0^\infty {{\mathbb P}}( X_n^- > r ) {{\mathrm d}}r = \int_0^{{\varepsilon}n} {{\mathbb P}}( X_n^- > r ) {{\mathrm d}}r + \int_{{\varepsilon}n}^\infty {{\mathbb P}}( X_n^- > r ) {{\mathrm d}}r .$$ Here we have that, by Chebyshev’s inequality, $${{\mathbb P}}(X_n^- > r ) \leq {{\mathbb P}}( | X_n - m n | > m n + r ) \leq \frac{\operatorname{\mathbb{V}ar}X_n}{(m n + r)^2} = \frac{s^2 n}{(m n + r)^2} .$$ It follows that $$\label{eq20} \int_0^{{\varepsilon}n} {{\mathbb P}}( X_n^- > r ) {{\mathrm d}}r \leq s^2 n \int_0^{{\varepsilon}n} \frac{{{\mathrm d}}r}{(m n + r)^2} \leq \frac{s^2 {\varepsilon}}{m^2} .$$ For $B \in (0,\infty)$ let $\xi'_k := \xi_k {{\mathbf 1}{\{ | \xi_k | \leq B \}}}$ and $\xi''_k := \xi_k {{\mathbf 1}{\{ | \xi_k | > B \}}}$. Set $X_n' := \sum_{k=1}^n \xi'_k$ and $X_n'' := \sum_{k=1}^n \xi''_k$. By dominated convergence, we have that as $B \to \infty$, $\operatorname{\mathbb{E}}\xi'_1 \to m$, $\operatorname{\mathbb{V}ar}\xi'_1 \to s^2$, $\operatorname{\mathbb{E}}| \xi''_1 | \to 0$, and $\operatorname{\mathbb{V}ar}\xi''_1 \to 0$, so in particular we may (and do) choose $B$ large enough so that $\operatorname{\mathbb{E}}\xi'_1 > m /2$, $\operatorname{\mathbb{E}}| \xi''_1 | < {\varepsilon}/4$, and $\operatorname{\mathbb{V}ar}\xi''_1 < {\varepsilon}^2$. Since $X_n = X_n' + X_n''$, for any $r >0$ we have $$\label{eq21} {{\mathbb P}}( X_n < - r ) \leq {{\mathbb P}}( X_n' < -r/2) + {{\mathbb P}}(X_n'' < - r/2 ) .$$ Here since $\operatorname{\mathbb{E}}( (\xi'_k)^4 ) \leq B^4 < \infty$ it follows from Markov’s inequality and the Marcinkiewicz–Zygmund inequality [@gut p. 
151] that for some constant $C < \infty$ (depending on $B$), $${{\mathbb P}}( X_n ' < - r ) \leq {{\mathbb P}}( | X_n' - \operatorname{\mathbb{E}}X_n'|^4 > ( \operatorname{\mathbb{E}}X_n' + r)^4 ) \leq \frac{C n^2}{((m/2) n + r )^4 } .$$ So $$\label{eq22} \int_{{\varepsilon}n}^\infty {{\mathbb P}}( X_n ' < - r/2 ) {{\mathrm d}}r \leq 16 C n^2 \int_0^\infty \frac{{{\mathrm d}}r}{(m n+r )^4} = O (1/n) .$$ On the other hand, by Chebyshev’s inequality, for $r > ({\varepsilon}/4) n$, $${{\mathbb P}}( X_n'' < - r ) \leq {{\mathbb P}}( | X_n'' - \operatorname{\mathbb{E}}X_n'' | > \operatorname{\mathbb{E}}X_n'' + r ) \leq \frac{\operatorname{\mathbb{V}ar}X_n''}{(r - ({\varepsilon}/4) n)^2} \leq \frac{{\varepsilon}^2 n}{(r - ({\varepsilon}/4) n)^2} .$$ Hence $$\label{eq23} \int_{{\varepsilon}n}^\infty {{\mathbb P}}( X_n'' < - r/2 ) {{\mathrm d}}r \leq 4 {\varepsilon}^2 n \int_{{\varepsilon}n}^\infty \frac{ {{\mathrm d}}r}{(r - ({\varepsilon}/2) n )^2} = 8 {\varepsilon}.$$ So from  with  and , we have $$\limsup_{n \to \infty} \int_{{\varepsilon}n}^\infty {{\mathbb P}}( X_n < - r ) {{\mathrm d}}r \leq 8 {\varepsilon},$$ which combined with  implies that $$\limsup_{n \to \infty} \operatorname{\mathbb{E}}X_n^- \leq \frac{s^2 {\varepsilon}}{m^2} + 8 {\varepsilon}.$$ Since ${\varepsilon}>0$ was arbitrary, the result follows. The next result is used in the proof of Lemma \[lem:Dconverge\]. \[LILStyleLem\] Let $\xi, \xi_1, \xi_2, \ldots$ be i.i.d. random variables with $\operatorname{\mathbb{E}}( |\xi|^p ) < \infty$ for some $p>2$, and $\operatorname{\mathbb{E}}\xi =0$. For $0 \leq j \leq n$, let $T_{n,j}:=\sum_{i=n-j}^{n} \xi_i$.
Then there exist $\beta_0 \in (0,1/2)$ and ${\varepsilon}_0 \in (0,1/2)$ such that for any $\beta \in (0,\beta_0)$ and any ${\varepsilon}\in (0,{\varepsilon}_0)$, $$\lim_{n\rightarrow \infty}\max_{0\leq j \leq n^\beta} \frac{|T_{n,j}|}{n^{(1/2) - {\varepsilon}}}=0 , {\ \text{a.s.}}$$ \[p2condition\] At first sight, since there are $O(n^\beta)$ terms in the sum $T_{n, j}$, one might be tempted to conclude that $T_{n,j}$ should be only of size about $n^{\beta/2}$. However, note that assuming only $\operatorname{\mathbb{E}}( \xi^2 ) < \infty$, $\max_{0 \leq i \leq n} \xi_i$ can be essentially as big as $n^{1/2}$, and with probability at least $1/n$ this maximal value is a member of $T_{n,j}$; so it seems reasonable to expect that $T_{n,j}$ is as big as $n^{1/2}$ infinitely often. Thus our moments condition $p>2$ seems to be necessary. Let $\xi'_i = \xi_i {{\mathbf 1}{\{|\xi_i|\leq i^{1/2-\delta}\}}}$ and $\xi''_i = \xi_i {{\mathbf 1}{\{|\xi_i| > i^{1/2-\delta}\}}}$ for some $\delta \in (0,1/2)$ to be chosen later. Then we use the subadditivity of the supremum, the triangle inequality, and the condition ${\varepsilon}\in (0,{\varepsilon}_0)$ to get $$\max_{0\leq j \leq n^\beta} \frac{|T_{n,j}|}{n^{1/2-{\varepsilon}}}\leq \max_{0\leq j \leq n^\beta} \frac{|\sum_{i=n-j}^{n}(\xi'_i-\operatorname{\mathbb{E}}\xi'_i)|}{n^{1/2-{\varepsilon}}}+ \frac{\sum_{i=n- n^\beta}^{n}|\operatorname{\mathbb{E}}\xi'_i|}{n^{1/2-{\varepsilon}_0}} + \frac{\sum_{i=n- n^\beta}^{n}|\xi''_i|}{n^{1/2-{\varepsilon}_0}}, \label{eqn:1}$$ where, and for the rest of this proof, if $n^\beta$ appears in the index of a sum, we understand it to be shorthand for $\lfloor n^\beta \rfloor$.
By Markov’s inequality, since $\operatorname{\mathbb{E}}( |\xi|^p )<\infty$ for $p>2$ we have $${{\mathbb P}}\left( |\xi_i|>i^{1/2-\delta}\right)\leq \frac{\operatorname{\mathbb{E}}( |\xi|^p )}{i^{(1/2-\delta)p}} = O(i^{\delta p-p/2}) .$$ Suppose that $\delta \in (0 , (p-2)/2p )$, so that $\delta p-p/2<-1$, and thus the Borel–Cantelli lemma implies that $\xi''_i=0$ for all but finitely many $i$. Thus, for any $\beta, {\varepsilon}_0 \in (0,1/2)$, $$\lim_{n\rightarrow \infty} \frac{\sum_{i=n-n^\beta}^{n}|\xi''_i|}{n^{1/2-{\varepsilon}_0}}=0 , {\ \text{a.s.}}$$ For the second term on the right-hand side of , $\operatorname{\mathbb{E}}\xi =0$ implies $\left|\operatorname{\mathbb{E}}\xi'_i\right| = \left|\operatorname{\mathbb{E}}\xi''_i\right|$, so $$\sum_{i=n-n^\beta}^{n} \left|\operatorname{\mathbb{E}}\xi'_i\right| = \sum_{i=n-n^\beta}^{n} \left|\operatorname{\mathbb{E}}\xi''_i\right| \leq ( n^\beta + 1) \operatorname{\mathbb{E}}\left(|\xi|{{\mathbf 1}{\{|\xi|> (n/2)^{1/2-\delta}\}}}\right),$$ for all $n$ large enough so that $n - n^\beta > n/2$. Here $$\operatorname{\mathbb{E}}\left(|\xi|{{\mathbf 1}{\{|\xi|> (n/2)^{1/2-\delta}\}}}\right) = \operatorname{\mathbb{E}}\left(|\xi|^{2} | \xi |^{-1} {{\mathbf 1}{\{|\xi|> (n/2)^{1/2-\delta}\}}}\right) \leq C n^{\delta - 1/2} ,$$ for some constant $C$ depending only on $\operatorname{\mathbb{E}}( \xi^2)$. Suppose that $\delta \leq 1/4$. Then we get $\sum_{i=n-n^\beta}^{n} \left|\operatorname{\mathbb{E}}\xi'_i\right| = O ( n^{\beta -1/4} )$, so that, for any $\beta \in (0,1/2)$ and ${\varepsilon}_0 \in (0,1/4)$, $$\lim_{n \rightarrow \infty} \frac{\sum_{i=n-n^\beta}^{n}|\operatorname{\mathbb{E}}\xi'_i|}{n^{1/2-{\varepsilon}_0}} = 0, {\ \text{a.s.}}$$ Finally, we consider the first term on the right-hand side of , with the truncated, centralised sum, which we denote as $T'_{n,j} := \sum_{i=n-j}^n (\xi'_i-\operatorname{\mathbb{E}}\xi'_i)$. 
The $\xi'_i - \operatorname{\mathbb{E}}\xi_i'$ are independent, zero-mean random variables with $| \xi'_i - \operatorname{\mathbb{E}}\xi_i' | \leq 2 n^{1/2-\delta}$ for $i \leq n$, so we may apply the Azuma–Hoeffding inequality [@penrose p. 33] to obtain, for any $t \geq 0$, $${{\mathbb P}}\left(|T'_{n,j}|\geq t \right)\leq 2 \exp\left( - \frac{t^2}{8 (j+1) n^{1-2\delta}}\right).$$ In particular, taking $t = n^{1/2 -{\varepsilon}_0}$ we obtain $$\begin{aligned} \label{eq:ah} {{\mathbb P}}\left( \max_{0\leq j \leq n^{\beta}}|T'_{n,j}|\geq n^{1/2-{\varepsilon}_0}\right) & \leq (n^\beta +1)\max_{0\leq j \leq n^\beta} {{\mathbb P}}\left(|T'_{n,j}|\geq n^{1/2-{\varepsilon}_0} \right) \nonumber\\ &\leq 2 (n^\beta +1) \exp\left( - \frac{n^{1-2{\varepsilon}_0}}{16 n^{1+\beta-2\delta}}\right), \end{aligned}$$ for all $n$ sufficiently large. Now choose and fix $\delta = \delta (p) := \min \{ 1/4, (p-2)/4p \}$, so $\delta >0$ satisfies the bounds earlier in this proof, and then choose $\beta < \beta_0 := \delta$ such that $$\frac{n^{1-2{\varepsilon}_0}}{n^{1+\beta-2\delta}} = n^{2\delta - 2 {\varepsilon}_0 - \beta} \geq n^{\delta - 2{\varepsilon}_0} .$$ So choosing ${\varepsilon}_0 = \delta /4$ we have that the probability bound in  is summable. Thus by the Borel–Cantelli lemma, we have that $\max_{0\leq j \leq n^\beta} |T'_{n,j}| \leq n^{1/2-{\varepsilon}_0}$ for all but finitely many $n$, a.s. It follows that, for any ${\varepsilon}\in (0,{\varepsilon}_0)$, $$\lim_{n\rightarrow \infty} \frac{|\sum_{i=n-n^\beta}^{n}(\xi'_i-\operatorname{\mathbb{E}}\xi'_i)|}{n^{1/2-{\varepsilon}}}=0, {\ \text{a.s.}},$$ which completes the proof. Acknowledgements {#acknowledgements .unnumbered} ================ The authors are grateful to Ostap Hryniv, for numerous helpful discussions on the subject of this paper, and to Wilfrid Kendall for a question that prompted us to formulate Theorem \[thm:shape\]. The first author is supported by an EPSRC studentship (EP/M507854/1). [99]{} R.N. 
Bhattacharya and R.R. Rao, *Normal Approximation and Asymptotic Expansions*, updated reprint of the 1986 edition, SIAM, Philadelphia, 2010. P. Billingsley, *Convergence of Probability Measures*, 2nd ed., Wiley, New York, 1999. P. Billingsley, *Probability and Measure*, Anniversary ed., Wiley, New York, 2012. R. Durrett, *Probability: Theory and Examples*, 4th ed., Cambridge University Press, Cambridge, 2010. D.S. Grebenkov, Y. Lanoiselée, and S.N. Majumdar, Mean perimeter and mean area of the convex hull over planar random walks, *J. Statist. Mech. Theor. Exp.* (2017) 103203. P.M. Gruber, *Convex and Discrete Geometry*, Springer, Berlin, 2007. A. Gut, *Probability: A Graduate Course*, Springer, Berlin, 2005. Z. Kabluchko, V. Vysotsky, and D. Zaporozhets, Convex hulls of random walks, hyperplane arrangements, and Weyl chambers, [*Geom. Funct. Anal.*]{} [**27**]{} (2017) 880–918. Z. Kabluchko, V. Vysotsky, and D. Zaporozhets, Convex hulls of random walks: Expected number of faces and face probabilities, [*Adv. Math.*]{} [**320**]{} (2017) 595–629. O. Kallenberg, *Foundations of Modern Probability*, 2nd ed., Springer, 2002. S.N. Majumdar, A. Comtet, and J. Randon-Furling, Random convex hulls and extreme value statistics, [*J. Stat. Phys.*]{} [**138**]{} (2010) 955–1009. J. McRedmond and C. Xu, On the expected diameter of planar Brownian motion, [*Statist. Probab. Lett.*]{} [**130**]{} (2017) 1–4. M. Penrose, *Random Geometric Graphs*, Oxford University Press, Oxford, 2003. J. Rudnick and G. Gaspari, The shapes of random walks, [*Science*]{} [**236**]{} (1987) 384–389. T.L. Snyder and J.M. Steele, Convex hulls of random walks, [*Proc. Amer. Math. Soc.*]{} [**117**]{} (1993) 1165–1173. F. Spitzer and H. Widom, The circumference of a convex polygon, [*Proc. Amer. Math. Soc.*]{} [**12**]{} (1961) 506–509. K. Tikhomirov and P. Youssef, When does a discrete-time random walk in $\mathbb{R}^n$ absorb the origin into its convex hull? [*Ann. 
Probab.*]{} [**45**]{} (2017) 965–1002. V. Vysotsky and D. Zaporozhets, Convex hulls of multidimensional random walks, arXiv:1506.07827. A.R. Wade and C. Xu, Convex hulls of planar random walks with drift, [*Proc. Amer. Math. Soc.*]{} [**143**]{} (2015) 433–445. A.R. Wade and C. Xu, Convex hulls of random walks and their scaling limits, [*Stochastic Process. Appl.*]{} [**125**]{} (2015) 4300–4320. C. Xu, *Convex Hulls of Planar Random Walks*, PhD thesis, University of Strathclyde, 2017, arXiv:1704.01377.
--- abstract: 'We propose a scheme by which two parties can secretly and simultaneously exchange messages. The scheme requires the two parties to share entanglement and both to perform Bell-state measurements. Only two out of the four Bell states are required to be distinguished in the Bell-state measurements, and thus the scheme is experimentally feasible using only linear optical means. Generalizations of the scheme to high-dimensional systems and to multipartite entanglement are considered. We also show that the proposed scheme works even if the two parties do not possess shared reference frames.' address: 'Department of Physics, Korea Advanced Institute of Science and Technology Daejeon, 305-701, Korea' author: - Sung Soon Jang - 'Hai-Woong Lee' title: 'Quantum Message Exchange Based on Entanglement and Bell-State Measurements' --- Introduction ============ Entanglement is an essential resource for many applications in quantum information science such as quantum superdense coding[@BW92; @MWKZ96], quantum teleportation[@BBCJPW93; @BBMHP98; @BPMEWZ97; @FSBFKP98; @LK01; @LSPM02], quantum cryptography[@E91; @ECrypt00; @LLCLK03], and quantum computing[@RB01; @Ni04]. From an information-theoretic point of view, two parties sharing entanglement can be regarded as already possessing a certain amount of information distributed between them: one e-bit per shared maximally entangled pair of qubits. Thus, for example, in superdense coding two bits of information can be sent from one party to another by manipulating only one of two maximally entangled qubits. In quantum teleportation a quantum state of a qubit can be completely transferred by sending only two bits of classical information, if the two parties, the sender and the receiver, share a maximally entangled pair. In this work we explore yet another situation in which two (or more) parties can make use of entanglement they share to their advantage.
We consider a situation in which two parties, Alice and Bob, share a maximally entangled pair A and B of qubits. Alice makes a Bell-state measurement upon the qubit $A$ and another qubit $\alpha$ she prepared in a state about which only she has the information. Bob also makes a Bell-state measurement upon the qubit $B$ and another qubit $\beta$ he prepared in a state about which only he has the information. We are interested in the probability for each of the sixteen possible joint measurement outcomes, which in general depends upon the states of the qubits $\alpha$ and $\beta$ in a way characteristic of the shared entanglement. If Alice keeps the information on the state of qubit $\alpha$ to herself and Bob keeps the information on the state of qubit $\beta$ to himself, they have partial advance knowledge of the probabilities that others do not. We suggest that this advantage can be exploited to devise a method by which Alice and Bob secretly and simultaneously exchange messages. Generalizations of the method to higher-dimensional systems (“qudits”) and to multipartite entanglement are also discussed. The Basic Scheme ================ Let us suppose that two parties, Alice and Bob, share a maximally entangled pair $A$ and $B$ of qubits.
The qubits A and B can be in any of the four Bell states $$|\Phi _{ij}\rangle_{AB} = \frac{1}{\sqrt 2}\sum\limits_{q=0}^{1} {(-1) ^{jq} } |q\rangle_A |q + i\rangle_B; \quad i,j=0,1 ,$$ but for the sake of concreteness of argument, we take it as $$|\Phi _{00}\rangle_{AB} = \frac{1}{{\sqrt 2 }}\left( {\left| 0 \right\rangle_A \left| 0 \right\rangle_B + \left| 1 \right\rangle_A \left| 1 \right\rangle_B } \right) .$$ Alice has in her possession another qubit $\alpha$ which she prepared in state $$|\psi\rangle_{\alpha} = a_0 |0 \rangle_{\alpha}+ a_1 |1\rangle_{\alpha} .$$ Similarly, Bob has in his possession yet another qubit $\beta$ which he prepared in state $$|\psi\rangle_{\beta} = b_0 |0 \rangle_{\beta}+ b_1 |1\rangle_{\beta} .$$ Now Alice performs a Bell-state measurement on the pair $\alpha$ and $A$, and Bob performs a Bell-state measurement on the pair $\beta$ and $B$. The experimental scheme is depicted schematically in Fig. 1. ![Experimental Scheme. The EPR (Einstein-Podolsky-Rosen) source emits an entangled pair in state $|\Phi_{00}\rangle_{AB}$. Alice performs a Bell-state measurement on the qubit pair $\alpha$ and A, and Bob performs a Bell-state measurement on the qubit pair $\beta$ and B.
BSM stands for Bell-state measurement.](efig){width="14cm"} The probability $P_{i_1 j_1 i_2 j_2} (i_1,j_1,i_2,j_2 = 0,1)$ that Alice’s Bell-state measurement yields $|\Phi_{i_1 j_1}\rangle_{\alpha A}$ and Bob’s Bell-state measurement yields $|\Phi_{i_2 j_2}\rangle_{\beta B}$ can be obtained by expanding the total wave function $|\psi\rangle_{\alpha \beta A B} = |\psi\rangle_\alpha |\psi\rangle_\beta |\Phi_{00}\rangle_{AB}$ as $$\left| \psi \right\rangle _{\alpha \beta AB} = \sum\limits_{i_1,j_1,i_2,j_2 = 0}^1 {\left| {\Phi _{i_1 j_1} } \right\rangle _{\alpha A} \left| {\Phi _{i_2 j_2} } \right\rangle _{\beta B} V_{i_1 j_1 i_2 j_2} }$$ Straightforward algebra yields $$V_{i_1 j_1 i_2 j_2}=\frac{1}{2\sqrt 2}(-1)^{\left ( i_1 j_1 + i_2 j_2 \right )} \left [ a_{i_1} b_{i_2} + (-1)^{\left (j_1 + j_2\right )} a_{i_1 + 1} b_{i_2 + 1} \right ],$$ where all indices are evaluated modulo 2. The probabilities $P_{i_1 j_1 i_2 j_2}$’s are given by $P_{i_1 j_1 i_2 j_2} = |V_{i_1 j_1 i_2 j_2}|^2$. From Eq.(6) we immediately obtain $$\begin{aligned} P_{0000} = P_{0101} = P_{1010} = P_{1111} = \frac{1}{8}\left | a_0 b_0 + a_1 b_1 \right |^2 \\ P_{0001} = P_{0100} = P_{1011} = P_{1110} = \frac{1}{8}\left | a_0 b_0 - a_1 b_1 \right |^2 \\ P_{0010} = P_{0111} = P_{1000} = P_{1101} = \frac{1}{8}\left | a_0 b_1 + a_1 b_0 \right |^2 \\ P_{0011} = P_{0110} = P_{1001} = P_{1100} = \frac{1}{8}\left | a_0 b_1 - a_1 b_0 \right |^2\end{aligned}$$ We note that, since Alice prepared the state of qubit $\alpha$ and thus knows what $a_0$ and $a_1$ are, and similarly since Bob prepared the state of qubit $\beta$ and thus knows what $b_0$ and $b_1$ are, Alice and Bob have partial prior knowledge of the probabilities $P_{i_1 j_1 i_2 j_2}$’s. We suggest that they can take advantage of this knowledge to secretly exchange messages. The scheme we propose goes as follows. We suppose that Alice and Bob share a large number N($\gg 1$) of maximally entangled pairs A’s and B’s.
We further suppose that Alice has an equally large number N of qubits $\alpha$’s, each of which she prepared in state (3), and that Bob has N qubits $\beta$’s, each of which he prepared in state (4). Alice keeps the information on the state of qubits $\alpha$’s to herself and Bob keeps the information on the state of qubits $\beta$’s to himself. Alice and Bob then perform a series of N Bell-state measurements on each pair $\alpha$ and $A$, and $\beta$ and $B$, respectively. They announce publicly their measurement results only when the outcome is either $\Phi_{10}$ or $\Phi_{11}$ (this considerably eases the burden on the Bell-state measurements, because only these two Bell states can be unambiguously distinguished with linear optical means [@MWKZ96; @LK01; @LCS99]). By counting the number $N_{1 j_1 1 j_2}$ of occurrences for the joint outcome $\left |\Phi_{1 j_1} \right \rangle_{\alpha A} \left | \Phi_{1 j_2} \right \rangle_{\beta B}$, the probabilities $P_{1 j_1 1 j_2}$’s $(j_1 , j_2 = 0,1)$ can be determined experimentally as $$P^{exp}_{1 j_1 1 j_2} = \frac{N_{1 j_1 1 j_2}}{N}$$ When the four experimentally determined probabilities $P^{exp}_{1 j_1 1 j_2}$’s ($P^{exp}_{1010}$, $P^{exp}_{1111}$, $P^{exp}_{1011}$, $P^{exp}_{1110}$) are substituted for the probabilities $P_{1 j_1 1 j_2}$’s of Eqs. (7a) and (7b), we obtain two equations that relate the four constants $a_0 , a_1 , b_0$ and $b_1$. Since Alice knows the values of $a_0$ and $a_1$, there are only two unknowns $b_0$ and $b_1$ \[ the constants $b_0$ and $b_1$ are complex numbers, but they are subject to normalization and the overall phase can be ignored\] to her. Likewise, there are only two unknowns $a_0$ and $a_1$ to Bob. To any third party, however, the number of unknowns is four. We can thus conclude that only Alice and Bob can completely determine the four constants $a_0, a_1, b_0$ and $b_1$. Let us suppose that Alice and Bob have secret messages they wish to send to each other. 
If they prepare their messages in the form of two constants, the scheme described above can be used for them to achieve secret two-way communication. We mention that a different scheme, which likewise allows two parties to exchange their messages simultaneously, has recently been proposed[@Ng04]. Efficiency ========== Let us now estimate the efficiency of the scheme described in the previous section. When a sufficiently large number $N\gg 1$ of Bell-state measurements are made by Alice and Bob each, the number $N_{i_1 j_1 i_2 j_2}$ of times the joint outcome $|\Phi_{i_1 j_1}\rangle_{\alpha A}|\Phi_{i_2 j_2}\rangle_{\beta B}$ is counted lies within the range defined as[@Reif] $$\begin{array}{l} NP_{i_1 j_1 i_2 j_2} - \sqrt {2NP_{i_1 j_1 i_2 j_2} \left( {1 - P_{i_1 j_1 i_2 j_2} } \right)} \lesssim N_{i_1 j_1 i_2 j_2}^{exp} \lesssim NP_{i_1 j_1 i_2 j_2} + \sqrt {2NP_{i_1 j_1 i_2 j_2} \left( {1 - P_{i_1 j_1 i_2 j_2} } \right)} \end{array}$$ where $P_{i_1 j_1 i_2 j_2}$’s are the exact theoretical probabilities given by Eqs.(7). Thus, the accuracy of the experimentally determined probabilities $P^{exp}_{i_1 j_1 i_2 j_2} = N^{exp}_{i_1 j_1 i_2 j_2}/N$ is limited by $$\left | P^{exp}_{i_1 j_1 i_2 j_2} - P_{i_1 j_1 i_2 j_2} \right | \lesssim \frac{\sqrt{2NP_{i_1 j_1 i_2 j_2} \left( {1 - P_{i_1 j_1 i_2 j_2} } \right)}}{N}\cong \frac{1}{\sqrt N}$$ Eq.(10) is valid as long as Alice and Bob perform each of their Bell-state measurements individually, which we assume here. \[If a collective approach is adopted, for example, if Alice performs all N Bell-state measurements before Bob makes any of his measurements and sends the message containing the outcome of all her measurements to Bob, and if Bob, upon receiving Alice’s message, performs his Bell-state measurements in an appropriate collective way, it may be possible to obtain a higher accuracy for $P^{exp}_{i_1 j_1 i_2 j_2}$.
Further research is needed on the collective approach.\] Eq.(10) indicates that, in order to obtain $P^{exp}_{i_1 j_1 i_2 j_2}$ accurate to one decimal place, which would allow Alice and Bob to share four real values $\cos\theta_A, \cos\phi_A, \cos\theta_B, \cos\phi_B$ ($a_0 = \cos\theta_A, a_1 = \sin\theta_A e^{i\phi_A}, b_0 = \cos\theta_B, b_1 = \sin\theta_B e^{i\phi_B}$) accurate to one decimal place each, Alice and Bob should perform $\sim 100$ (perhaps a few hundred) Bell-state measurements each. We therefore conclude that Alice and Bob gain 4 secret digits (or equivalently 13$\sim$14 secret bits) at the expense of $\sim 100$ (a few hundred) entangled pairs, i.e., the number of secret bits gained per use of an entangled pair is roughly 0.1 or less. The efficiency of the proposed scheme is thus somewhat lower than that of the entanglement-based cryptographic scheme (E91)[@E91]. We note that the proposed protocol can be used for Alice and Bob to send secretly to each other directions in space and consequently to possess two private shared reference frames. They of course need to store the information on their Euler angles in the constants $a_0$, $a_1$ and $b_0$, $b_1$, respectively. Shared reference frames are a resource for quantum communications [@Enk04; @RG03; @BRS04]. A standard protocol[@RG03] to send directions in space when Alice and Bob share entangled pairs, say in $\left | \Phi_{00} \right \rangle_{AB}$, requires Alice to perform a projection measurement in her $|0\rangle-|1\rangle$ basis on the particle A of each entangled pair and announce publicly the outcome of each measurement. Bob would then follow with his projection measurement in his $|0\rangle-|1\rangle$ basis on the corresponding particle B of each entangled pair.
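The counting statistics of this standard direction-sharing protocol are easy to simulate. The sketch below is our own illustration (not part of the original paper); the misalignment angle and sample size are arbitrary placeholder values. It draws Alice's outcomes for a batch of shared $|\Phi_{00}\rangle$ pairs and samples Bob's outcomes from the Born rule in his rotated basis, so the empirical agreement fraction $N_s/N$ reflects the relative orientation of the two frames.

```python
import numpy as np

# Illustrative Monte Carlo for the standard direction-sharing protocol
# (hypothetical parameter values; not from the paper).
rng = np.random.default_rng(2)
theta = 0.4      # misalignment angle between Alice's and Bob's axes (radians)
N = 200_000      # number of shared |Phi_00> pairs

# Alice measures qubit A in her {|0>,|1>} basis: each outcome occurs with
# probability 1/2 and collapses B onto the same basis state in Alice's frame.
alice = rng.integers(0, 2, size=N)

# Bob's basis vectors expressed in Alice's frame (polarized-photon convention):
# |0'> = cos(theta)|0> + sin(theta)|1>,  |1'> = -sin(theta)|0> + cos(theta)|1>.
amp_bob0 = np.array([np.cos(theta), np.sin(theta)])  # <0'|0>, <0'|1>

# Born rule: given B collapsed to |a>, Bob reads 0' with probability |<0'|a>|^2.
p_bob0 = amp_bob0[alice] ** 2
bob = (rng.random(N) >= p_bob0).astype(int)

ratio = np.mean(alice == bob)   # empirical N_s / N
print(ratio)                    # fluctuates around cos^2(theta) at the 1/sqrt(N) level
```

Inverting the empirical ratio via $\hat\theta = \arccos\sqrt{N_s/N}$ then recovers the misalignment angle up to a statistical error of order $1/\sqrt{N}$, in line with the accuracy estimate of Eq.(10).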
If a sufficiently large number $N \gg 1$ of projection measurements are performed by Alice and Bob, the number $N_s$ of times that Alice and Bob obtain the same measurement outcome will be given by $\frac{N_s}{N} = \cos^2\theta$, where $\theta$ is the angle between Alice’s and Bob’s axes. \[We assume that qubits are polarized photons; if we take spins for qubits, then $\frac{N_s}{N} = \cos^2\frac{\theta}{2}$.\] Provided that Alice and Bob perform each of their projection measurements individually, essentially the same statistical analysis based on Eq.(9) applies here, and thus the efficiency of our proposed protocol is of the same order of magnitude as that of this standard entanglement-based protocol. \[If, however, we allow Bob to make collective measurements in the standard protocol, the efficiency can be higher. See, for example, Ref.[@SharingReference].\] Our protocol, however, requires both Alice and Bob to perform Bell-state measurements, which are in general more difficult to perform than von Neumann projection measurements. One advantage of our protocol is that it is symmetric with respect to Alice and Bob and allows both Alice and Bob to gain information, whereas in standard schemes the information usually flows one way. Generalization to higher-dimensional systems ============================================ We now consider a generalization of the above scheme to higher-dimensional systems, i.e., to “qudits”. The generalized Bell states for a qudit can be defined as [@ADGJ00; @E03] $$|\Phi _{ij}\rangle_{AB} = \frac{1}{\sqrt d }\sum\limits_{q=0}^{d-1} {\omega ^{jq} } |q\rangle_A |q + i\rangle_B;~i,j=0,1,\dots,d-1$$ where $\omega=e^{i\frac{2 \pi}{d}}$. Let us assume that Alice and Bob share a large number N($\gg 1$) of entangled pairs $A$’s and $B$’s in the generalized Bell state $\left | \Phi_{00} \right \rangle_{AB}$.
Alice performs a series of N Bell-state measurements on pairs of qudit $A$ and another qudit $\alpha$ she prepared in state $|\psi\rangle_{\alpha} = \sum\limits_{i=0}^{d-1} a_i |i \rangle_{\alpha}$, while Bob performs a series of N Bell-state measurements on pairs of qudit $B$ and yet another qudit $\beta$ he prepared in state $|\psi\rangle_{\beta}=\sum\limits_{i=0}^{d-1}b_i|i\rangle_{\beta}$. As in the qubit case, the total wave function $|\psi\rangle_{\alpha\beta A B}$ can be expanded in terms of the products of the generalized Bell states $|\Phi_{i_1 j_1}\rangle_{\alpha A} |\Phi_{i_2 j_2}\rangle_{\beta B}$ as $$|\psi\rangle_{\alpha \beta A B} = \sum\limits_{i_1 , j_1 , i_2 , j_2 = 0}^{d-1} |\Phi_{i_1 j_1}\rangle_{\alpha A} |\Phi_{i_2 j_2}\rangle_{\beta B} V_{i_1 j_1 i_2 j_2}$$ where $V_{i_1 j_1 i_2 j_2}$ is given by $$V_{i_1 j_1 i_2 j_2}=\frac{1}{d \sqrt d}~\omega^{\left ( i_1 j_1 + i_2 j_2 \right ) } \sum\limits_{m=0}^{d-1}\omega^{-\left (j_1 + j_2 \right ) m} a_{m-i_1} b_{m-i_2}$$ where all indices are now evaluated modulo d. Eq.(13) indicates that the probabilities $P_{i_1 j_1 i_2 j_2}=\left | V_{i_1 j_1 i_2 j_2} \right|^2$ take on the same value if $i_1 - i_2$ (mod d) is the same and if $j_1 + j_2$ (mod d) is the same. Thus, there are $d^2$ different values of the probabilities $P_{i_1 j_1 i_2 j_2}$’s. Eq.(13) then provides $(d^2-1)$ independent equations that relate the constants, $a_0, a_1,\dots,a_{d-1}$; $b_0,b_1,\dots,b_{d-1}$. To any third party other than Alice and Bob, the number of unknowns contained in these constants is $(4d-4)$. There are, however, only $(2d-2)$ unknowns, as far as Alice or Bob is concerned.
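Eq.(13) can be checked numerically. The sketch below is our own illustration (not part of the paper); the random amplitudes are placeholders for the parties' secret constants. For $d=3$ it projects the four-qudit state onto products of generalized Bell states and compares each squared overlap with the closed form.

```python
import numpy as np

# Numerical check of Eq.(13) for d = 3 (illustrative amplitudes, not from the paper)
d = 3
w = np.exp(2j * np.pi / d)
rng = np.random.default_rng(0)

def bell(i, j):
    """Generalized Bell state |Phi_ij> of Eq.(11) as a d x d amplitude matrix."""
    M = np.zeros((d, d), dtype=complex)
    for q in range(d):
        M[q, (q + i) % d] = w ** (j * q) / np.sqrt(d)
    return M

def rand_state():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

a, b = rand_state(), rand_state()   # Alice's qudit alpha and Bob's qudit beta

# total state |psi>_alpha |psi>_beta |Phi_00>_AB, axis order (alpha, beta, A, B)
psi = np.einsum('a,b,AB->abAB', a, b, bell(0, 0))

P = np.zeros((d, d, d, d))
for i1, j1, i2, j2 in np.ndindex(d, d, d, d):
    # squared overlap with |Phi_{i1 j1}>_{alpha A} |Phi_{i2 j2}>_{beta B}
    overlap = np.einsum('aA,bB,abAB->', bell(i1, j1).conj(), bell(i2, j2).conj(), psi)
    # closed form of Eq.(13); the overall phase drops out of the modulus
    V = sum(w ** (-(j1 + j2) * m) * a[(m - i1) % d] * b[(m - i2) % d]
            for m in range(d)) / (d * np.sqrt(d))
    assert np.isclose(abs(overlap) ** 2, abs(V) ** 2)
    P[i1, j1, i2, j2] = abs(V) ** 2

assert np.isclose(P.sum(), 1.0)   # the d^4 joint probabilities are complete
```

One can additionally group the computed probabilities by $i_1 - i_2$ and $j_1 + j_2$ (mod $d$) to observe the $d^2$ distinct values noted above.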
By agreeing to publicly announce their measurement results only when the outcome is among judiciously chosen generalized Bell states, Alice and Bob can limit the number of probabilities that can be determined experimentally in such a way that the number of equations that relate the experimentally determined probabilities with the constants $a_i$’s and $b_i$’s is greater than or equal to $(2d-2)$ but less than $(4d-4)$. This way, Alice and Bob can send secret messages in the form of $(2d-2)$ constants to each other and as a result secretly share $(4d-4)$ constants between them. As an example, consider the case $d=3$. If Alice and Bob announce results of the Bell-state measurements only when they measure either $\Phi_{00}$ or $\Phi_{21}$, they can determine experimentally the four probabilities $P^{exp}_{0000}, P^{exp}_{2121}, P^{exp}_{0021}$ and $P^{exp}_{2100}$, which are given by $$\begin{aligned} P_{0000} & = & \frac{1}{27}\left |a_0 b_0 + a_1 b_1 + a_2 b_2 \right|^2 \\ P_{2121} & = & \frac{1}{27}\left |a_0 b_0 + a_1 b_1 \omega + a_2 b_2\omega^2 \right |^2 \\ P_{0021} & = & \frac{1}{27}\left |a_2 b_0 + a_0 b_1 \omega^2 + a_1 b_2 \omega \right |^2 \\ P_{2100} & = & \frac{1}{27}\left |a_1 b_0 \omega^2 + a_2 b_1 \omega + a_0 b_2 \right |^2\end{aligned}$$ where $\omega=e^{i \frac{2 \pi}{3}}$. Eqs. (14a)-(14d) are sufficient for Alice and Bob to determine their four unknowns, allowing them to exchange messages in the form of four constants each. Generalization to multipartite entanglement =========================================== Another possible generalization of the proposed scheme is to the case of multiparty communications.
Let us consider the case when N($>2$) parties share an N-qubit entangled state of Greenberger-Horne-Zeilinger (GHZ) type [@GHZ] given by $$|\Phi\rangle_{AB\dots N}=\frac{1}{\sqrt 2}\left ( |0\rangle_A |0\rangle_B \dots |0\rangle_N + |1\rangle_A |1\rangle_B \dots |1\rangle_N \right )$$ where the letter N (also the small letter $n$ and the Greek letter $\nu$) refers to the Nth party or Nth qubit. Each party has, in addition to the qubit K of Eq.(15) \[ the letter K (and also the small letter k and the Greek letter $\kappa$) denotes the Kth party or Kth qubit, where $1\leq K \leq N$\], another qubit $\kappa$ which she or he prepared in state $$|\psi\rangle_\kappa = k_0 |0\rangle_\kappa + k_1 |1\rangle_\kappa$$ Each party then performs a Bell-state measurement upon the qubits $\kappa$ and K. The total wave function for the 2N qubits $\alpha, \beta,\dots \nu, A, B, \dots N$ can be expanded as $$|\psi\rangle_{\alpha, \beta,\dots \nu, A, B, \dots N} = \sum \limits_{i_1, j_1, i_2, j_2, \dots, i_N, j_N = 0}^{1}\left( \Phi_{i_1 j_1} \right )_{\alpha A}\left( \Phi_{i_2 j_2} \right )_{\beta B} \dots \left ( \Phi_{i_N j_N} \right )_{\nu N} V_{i_1 j_1 i_2 j_2 \dots i_N j_N}$$ Straightforward algebra yields $$\begin{array}{l} V_{i_1 j_1 i_2 j_2 \dots i_N j_N} = \\ \displaystyle{\frac{1}{(\sqrt 2)^{N+1}}}(-1)^{i_1 j_1 + i_2 j_2 + \dots + i_N j_N } \left [ a_{i_1} b_{i_2} \dots n_{i_N} + (-1)^{j_1 + j_2 + \dots + j_N} a_{i_1 + 1}b_{i_2 + 1} \dots n_{i_N +1} \right ] \end{array}$$ In Eq.(18), the constants $n_0$ and $n_1$ define the state of the qubit $\nu$, which the Nth party prepared according to $$|\psi\rangle_\nu = n_0 |0\rangle_\nu + n_1 |1\rangle_\nu$$ Eq.(18) indicates that there are $2^N$ different values for the probabilities $P_{i_1 j_1 i_2 j_2 \dots i_N j_N}$ $= \left | V_{i_1 j_1 i_2 j_2 \dots i_N j_N} \right |^2$. To any member of the $N$ parties sharing the entanglement of Eq.(15), the number of unknowns is $(2N-2)$, while it is $(2N)$ to any outsider.
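Eq.(18), and the probabilities of Eqs.(20) derived from it, can also be verified numerically. The sketch below is our own illustration (not part of the paper; the random amplitudes are stand-ins for the parties' secret constants): it builds the six-qubit state for $N=3$ and compares two of the joint Bell probabilities with their closed forms.

```python
import numpy as np

# Numerical check of Eq.(18) for N = 3 parties (illustrative amplitudes,
# not values from the paper).
rng = np.random.default_rng(1)

def bell(i, j):
    """Qubit Bell state |Phi_ij> of Eq.(1) as a 2 x 2 amplitude matrix."""
    M = np.zeros((2, 2), dtype=complex)
    for q in range(2):
        M[q, (q + i) % 2] = (-1) ** (j * q) / np.sqrt(2)
    return M

def rand_qubit():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

a, b, c = rand_qubit(), rand_qubit(), rand_qubit()  # alpha, beta, gamma states

# GHZ state of Eq.(15) for N = 3 on qubits A, B, C
ghz = np.zeros((2, 2, 2), dtype=complex)
ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)

# total state, qubit order (alpha, beta, gamma, A, B, C)
psi = np.einsum('a,b,c,ABC->abcABC', a, b, c, ghz)

P = {}
for idx in np.ndindex(2, 2, 2, 2, 2, 2):
    i1, j1, i2, j2, i3, j3 = idx
    overlap = np.einsum('aA,bB,cC,abcABC->', bell(i1, j1).conj(),
                        bell(i2, j2).conj(), bell(i3, j3).conj(), psi)
    P[idx] = abs(overlap) ** 2

# spot-check two of the closed forms in Eqs.(20)
assert np.isclose(P[1, 0, 1, 0, 1, 0], abs(a[0]*b[0]*c[0] + a[1]*b[1]*c[1])**2 / 16)
assert np.isclose(P[1, 0, 1, 1, 0, 0], abs(a[0]*b[0]*c[1] - a[1]*b[1]*c[0])**2 / 16)
assert np.isclose(sum(P.values()), 1.0)
```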
As before, by agreeing to publicly announce the measurement results only when the outcome is among judiciously chosen Bell states, each of the N parties can secretly send his or her message in the form of two constants to all others of the N parties, so that the N parties can secretly share the messages in the form of $2N$ constants. As an example, consider the case $N=3$. Let us assume that the three parties, Alice, Bob and Charlie, agree that Charlie is the last one to make an announcement each time, that Alice and Bob announce the measurement results only when they obtain either $\Phi_{10}$ or $\Phi_{11}$, and that Charlie announces his measurement result only when both Alice and Bob announce their measurement results and he (Charlie) obtains either $\Phi_{10}$ or $\Phi_{11}$ or $\Phi_{00}$. The probabilities that can be determined experimentally are then the following: $$\begin{aligned} P^{exp}_{101010} &=& P^{exp}_{101111} = P^{exp}_{111110} = P^{exp}_{111011} = \frac{1}{16}\left | a_0 b_0 c_0 + a_1 b_1 c_1 \right |^2 \\ P^{exp}_{111010} &=& P^{exp}_{101110} = P^{exp}_{101011} = P^{exp}_{111111} = \frac{1}{16}\left | a_0 b_0 c_0 - a_1 b_1 c_1 \right |^2 \\ P^{exp}_{101100} &=& P^{exp}_{111000} = \frac{1}{16}\left | a_0 b_0 c_1 - a_1 b_1 c_0 \right |^2 \\ P^{exp}_{101000} &=& P^{exp}_{111100} = \frac{1}{16}\left | a_0 b_0 c_1 + a_1 b_1 c_0 \right |^2\end{aligned}$$ Since each of Alice, Bob and Charlie has four unknowns, she or he can use Eqs. (20a)-(20d) to solve for her or his unknowns. This allows Alice, Bob and Charlie to secretly share six constants among them. If Alice, Bob and Charlie are limited to linear-optical Bell-state measurements, they can only determine eight probabilities $P_{1 j_1 1 j_2 1 j_3}\left ( j_1, j_2, j_3 = 0, 1 \right )$ of Eqs. (20a) and (20b) experimentally. In this situation, Alice, Bob and Charlie each need to announce publicly one of the constants, say $a_0, b_0$ and $c_0$.
Each of Alice, Bob and Charlie then has two unknowns for which Eqs.(20a) and (20b) provide sufficient information. In this case, however, the number of constants that the three parties can secretly share is reduced to three. Case of no shared reference frames ================================== So far, we have assumed that Alice and Bob have exactly the same basis for the states $|0\rangle$ and $|1\rangle$. Thus, if the qubits we consider are polarized photons or spins, we assume that Alice and Bob share a spatial reference frame. As far as our proposed protocol is concerned, this shared reference frame does not need to be private, because the privacy of the protocol does not depend upon the privacy of the reference frame. The shared reference frame can, for example, be a specific direction with respect to a fixed star. ![Alice’s reference frame ($|1\rangle$ and $|0\rangle$) and Bob’s reference frame ($|1'\rangle$ and $|0'\rangle$).](efigref){width="5cm"} It may happen, however, that Alice and Bob, for reasons of better security, want to use reference frames of their own choice which may not coincide, or that their reference frames are inadvertently misaligned. As shown below, our proposed protocol still works, even if Alice’s reference frame does not coincide with Bob’s. Let us suppose that Bob’s axis makes an angle $\theta$ with respect to Alice’s, as shown in Fig. 2. The initial state for the four particles $\alpha,\beta,A$ and $B$ is now written as $$\left|\Psi\right\rangle_{\alpha\beta A B} = \left( {a_0 \left| 0 \right\rangle _\alpha + a_1 \left| 1 \right\rangle _\alpha } \right) \frac{1}{\sqrt 2} \left( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \right) \left( {b_0 \left| {0'} \right\rangle _\beta + b_1 \left| {1'} \right\rangle _\beta } \right)$$ where $|0\rangle$ and $|1\rangle$ denote Alice’s basis states and $|0'\rangle$ and $|1'\rangle$ Bob’s basis states.
We assume that Alice prepares the entangled pair $A$ and $B$ in $|\Phi_{00}\rangle_{AB}$, keeps $A$ and sends $B$ to Bob. Now Alice performs her Bell state measurement on the pair $\alpha A$ in her $|0\rangle-|1\rangle$ basis, whereas Bob performs his Bell state measurement on the pair $\beta B$ in his $|0'\rangle-|1'\rangle$ basis. We thus need to express $|0\rangle_B$ and $|1\rangle_B$ in terms of $|0'\rangle_B$ and $|1'\rangle_B$, and expand the wave function $|\Psi\rangle_{\alpha \beta A B}$ as $$\left|\Psi\right\rangle_{\alpha \beta A B} = \sum\limits_{i_1 ,j_1 ,i_2,j_2=0}^1 \left| {\Phi _{i_1 j_1} } \right\rangle_{\alpha A} \left| {\Phi '_{i_2 j_2} } \right \rangle_{\beta B} V_{i_1 j_1 i_2 j_2}$$ where $|\Phi'_{ij}\rangle$ refers to Bell states of Eq. (1) defined in terms of Bob’s basis $|0'\rangle$ and $|1'\rangle$. A straightforward algebra yields, for the probabilities $P_{i_1 j_1 i_2 j_2}=\left| V_{i_1 j_1 i_2 j_2} \right|^2$, $$P_{0000} = P_{1111} = \frac{1}{8}\left| {a_0 b_0 \cos \theta - a_1 b_0 \sin \theta + a_0 b_1 \sin \theta + a_1 b_1 \cos \theta } \right|^2 \equiv \frac{1}{2}P_1$$ $$P_{0001} = P_{1110} = \frac{1}{8}\left| {a_0 b_0 \cos \theta - a_1 b_0 \sin \theta - a_0 b_1 \sin \theta - a_1 b_1 \cos \theta } \right|^2 \equiv \frac{1}{2}P_2$$ $$P_{0010} = P_{1101} = \frac{1}{8}\left| {a_0 b_0 \sin \theta + a_1 b_0 \cos \theta + a_0 b_1 \cos \theta - a_1 b_1 \sin \theta } \right|^2 \equiv \frac{1}{2}P_3$$ $$P_{0011} = P_{1100} = \frac{1}{8}\left| {a_0 b_0 \sin \theta + a_1 b_0 \cos \theta - a_0 b_1 \cos \theta + a_1 b_1 \sin \theta } \right|^2 \equiv \frac{1}{2}P_4$$ $$P_{0100} = P_{1011} = \frac{1}{8}\left| {a_0 b_0 \cos \theta + a_1 b_0 \sin \theta + a_0 b_1 \sin \theta - a_1 b_1 \cos \theta } \right|^2 \equiv \frac{1}{2}P_5$$ $$P_{0101} = P_{1010} = \frac{1}{8}\left| {a_0 b_0 \cos \theta + a_1 b_0 \sin \theta - a_0 b_1 \sin \theta + a_1 b_1 \cos \theta } \right|^2 \equiv \frac{1}{2}P_6$$ $$P_{0110} = P_{1001} = \frac{1}{8}\left| {a_0 b_0 \sin 
\theta - a_1 b_0 \cos \theta + a_0 b_1 \cos \theta + a_1 b_1 \sin \theta } \right|^2 \equiv \frac{1}{2}P_7$$ $$P_{0111} = P_{1000} = \frac{1}{8}\left| {a_0 b_0 \sin \theta - a_1 b_0 \cos \theta - a_0 b_1 \cos \theta - a_1 b_1 \sin \theta } \right|^2 \equiv \frac{1}{2}P_8$$ where the probabilities are further restricted by the identities $$P_1 + P_2 + P_3 + P_4 = \frac{1}{2}$$ $$P_2 + P_3 = P_5 + P_8$$ $$P_1 + P_4 = P_6 + P_7$$ Comparison of Eqs.(23) with Eqs.(7) indicates that the misalignment of Bob’s axis with respect to Alice’s axis partly breaks degeneracies among the probabilities. For example, $P_{1111}$ is no longer equal to $P_{1010}$, and $P_{1110}$ is no longer equal to $P_{1011}$. The difference between these probabilities can thus be considered as a measure of the misalignment. Our scheme for quantum message exchange can proceed exactly as before. We let Alice and Bob announce publicly their measurement results only when the outcome is either $\Phi_{10}$ or $\Phi_{11}$. The four experimentally determined probabilities $P^{exp}_{1010}, P^{exp}_{1111}, P^{exp}_{1011}$, and $P^{exp}_{1110}$ then provide four equations, Eqs. (23a), (23b), (23e) and (23f), that relate the five constants: $a_0, a_1, b_0, b_1$ and the angle $\theta$. To Alice (Bob) there are three unknowns $b_0, b_1$ ($a_0, a_1$) and $\theta$, whereas to any third party the number of unknowns is five. Only Alice and Bob can thus determine the five constants $a_0, a_1, b_0, b_1$ and $\theta$. Our scheme thus allows Alice and Bob not only to secretly share the four constants but also to determine the angle between their reference frames. Summary and discussion ====================== We have analyzed a situation in which each of two (or more) parties sharing entanglement performs a Bell-state measurement upon the entangled particle in his (or her) possession and another particle he (or she) prepared in a specific state.
The probability for a joint measurement outcome corresponding to a given combination of the Bell states depends critically upon the states of the particles involved. Taking advantage of the fact that each person belonging to the parties sharing entanglement, and only he (or she), knows the state of the particle he (or she) prepared, we suggest a scheme by which two (or more) parties sharing entanglement can secretly and simultaneously exchange messages. For the case of two parties sharing entangled qubits, the scheme requires Bell-state measurements that distinguish only two out of the four Bell states, which can be accomplished using only linear optical devices. The scheme thus provides an experimentally feasible means of two-way communication. We should emphasize that, although it may not be apparent at first sight, entanglement plays a critical role in the proposed scheme. It is through entanglement that the joint probabilities appear in an “entangled” form as given in Eqs. (7) and that the separate probabilities for either Alice or Bob to obtain any arbitrary Bell state are evenly distributed regardless of which Bell state we consider. Information on the constants $a_0,a_1,b_0$ and $b_1$ can be obtained only by looking at the joint probabilities. On the other hand, if the qubits A and B were not entangled, Alice’s Bell-state measurements would be completely independent of Bob’s Bell-state measurements, and information on the constants $a_0$ and $a_1$ ($b_0$ and $b_1$) would be obtained by looking only at the results of Alice’s (Bob’s) Bell-state measurements. Bob (Alice) would need as much information as any third party in order to determine $a_0$ and $a_1$ ($b_0$ and $b_1$) from the results of Alice’s (Bob’s) Bell-state measurements. The parties sharing entanglement have advantages only because the joint probabilities for Alice’s and Bob’s Bell-state measurements are “entangled”.
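The probability structure invoked here is easy to check numerically. The following sketch (ours, not part of the paper) draws random normalized qubit amplitudes and a random misalignment angle, evaluates the eight probabilities $P_1,\ldots,P_8$ of Eqs. (23), and verifies the three sum rules quoted above:

```python
# Numerical check of the probabilities of Eqs. (23) and their sum rules.
# Illustrative sketch: assumes |a0|^2+|a1|^2 = |b0|^2+|b1|^2 = 1.
import numpy as np

rng = np.random.default_rng(0)

def rand_qubit():
    """Random normalized pair of complex amplitudes."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

(a0, a1), (b0, b1) = rand_qubit(), rand_qubit()
th = rng.uniform(0.0, 2.0 * np.pi)
C, S = np.cos(th), np.sin(th)

amps = [a0*b0*C - a1*b0*S + a0*b1*S + a1*b1*C,   # P1
        a0*b0*C - a1*b0*S - a0*b1*S - a1*b1*C,   # P2
        a0*b0*S + a1*b0*C + a0*b1*C - a1*b1*S,   # P3
        a0*b0*S + a1*b0*C - a0*b1*C + a1*b1*S,   # P4
        a0*b0*C + a1*b0*S + a0*b1*S - a1*b1*C,   # P5
        a0*b0*C + a1*b0*S - a0*b1*S + a1*b1*C,   # P6
        a0*b0*S - a1*b0*C + a0*b1*C + a1*b1*S,   # P7
        a0*b0*S - a1*b0*C - a0*b1*C - a1*b1*S]   # P8
# P_{i1 j1 i2 j2} = |amp|^2 / 8 = P_k / 2, so P_k = |amp|^2 / 4.
P = [abs(z) ** 2 / 4.0 for z in amps]

print(np.isclose(P[0] + P[1] + P[2] + P[3], 0.5))   # P1+P2+P3+P4 = 1/2
print(np.isclose(P[1] + P[2], P[4] + P[7]))         # P2+P3 = P5+P8
print(np.isclose(P[0] + P[3], P[5] + P[6]))         # P1+P4 = P6+P7
```

The sum rules hold for any amplitudes and any angle, which is why measuring the four public probabilities still leaves a third party with more unknowns than equations.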
Of course, the maximal advantage is provided by the maximal entanglement which we have assumed. In general, as the degree of shared entanglement is decreased, the joint probabilities exhibit a lesser degree of entanglement, and as a result the parties sharing entanglement have less of an advantage over a third party. This research was supported by grants from the Korea Science and Engineering Foundation (KOSEF), through the Korea-China International Cooperative Research Program, and from the Ministry of Science and Technology (MOST) of Korea. The authors thank Professor Gui Lu Long of Tsinghua University of China for helpful discussions. [99]{} C. H. Bennett and S. J. Wiesner, Phys. Rev. Lett. **69** (1992) 2881. K. Mattle, H. Weinfurter, P. G. Kwiat, and A. Zeilinger, Phys. Rev. Lett. **76** (1996) 4656. C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. **70** (1993) 1895. D. Boschi, S. Branca, F. De Martini, L. Hardy, and S. Popescu, Phys. Rev. Lett. **80** (1998) 1121. D. Bouwmeester, J. W. Pan, K. Mattle, M. Eibl, H. Weinfurter, and A. Zeilinger, Nature (London) **390** (1997) 575. A. Furusawa, J. L. S[ø]{}rensen, S. L. Braunstein, C. A. Fuchs, H. J. Kimble, and E. S. Polzik, Science **282** (1998) 706. H. W. Lee and J. Kim, Phys. Rev. A **63** (2001) 012305. E. Lombardi, F. Sciarrino, S. Popescu, and F. De Martini, Phys. Rev. Lett. **88** (2002) 070402. A. K. Ekert, Phys. Rev. Lett. **67** (1991) 661. T. Jennewein, C. Simon, G. Weihs, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. **84** (2000) 4729; D. S. Naik, C. G. Peterson, A. G. White, A. J. Berglund, and P. G. Kwiat, Phys. Rev. Lett. **84** (2000) 4733; W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. **84** (2000) 4737. J. W. Lee, E. K. Lee, Y. W. Chung, H. W. Lee, and J. Kim, Phys. Rev. A **68** (2003) 012324. R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. **86** (2001) 5188. M. A. Nielsen, quant-ph/0402005. N. Lütkenhaus, J. Calsamiglia, and S. A. Suominen, Phys. Rev.
A **59** (1999) 3295. B. A. Nguyen, Phys. Lett. A **328** (2004) 6. See, for example, F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, New York, 1965), Ch. 1. S. J. van Enk, quant-ph/0410083. T. Rudolph and L. Grover, Phys. Rev. Lett. **91** (2003) 217905. S. D. Bartlett, T. Rudolph, and R. W. Spekkens, Phys. Rev. A **70** (2004) 032307. S. Massar and S. Popescu, Phys. Rev. Lett. **74** (1995) 1259; R. Derka, V. Buzek, and A. K. Ekert, Phys. Rev. Lett. **80** (1998) 1571; N. Gisin and S. Popescu, Phys. Rev. Lett. **83** (1999) 432; S. Massar, Phys. Rev. A **62** (2000) 040101(R); A. Peres and P. F. Scudo, Phys. Rev. Lett. **86** (2001) 4160. G. Alber, A. Delgado, N. Gisin, and I. Jex, quant-ph/0008022. S. J. van Enk, Phys. Rev. Lett. **91** (2003) 017902. D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Am. J. Phys. **58** (1990) 1131; N. D. Mermin, Phys. Today **43** (1990) 9.
--- abstract: | This paper illustrates how a Prolog program, using chronological backtracking to find a solution in some search space, can be enhanced to perform intelligent backtracking. The enhancement crucially relies on the impurity of Prolog that allows a program to store information when a dead end is reached. To illustrate the technique, a simple search program is enhanced. To appear in Theory and Practice of Logic Programming. author: - | MAURICE BRUYNOOGHE\ Katholieke Universiteit Leuven, Department of Computer Science\ Celestijnenlaan 200A, B3001 Heverlee, Belgium\ e-mail: Maurice.Bruynooghe@cs.kuleuven.ac.be title: Enhancing a Search Algorithm to Perform Intelligent Backtracking --- \[firstpage\] intelligent backtracking, dependency-directed backtracking, backjumping, conflict-directed backjumping, nogood sets, look-back. Introduction {#sec:intro} ============ The performance of backtracking algorithms for solving finite-domain constraint satisfaction problems can be improved substantially by so called look-back and look-ahead methods [@Dechter02]. Look-back techniques extract information by analyzing failing search paths that are terminated by dead ends and use that information to prune the search tree. Look-ahead techniques use constraint propagation algorithms in an attempt to avoid such dead ends altogether. Constraint propagation can rather easily be isolated from the search itself and can be localized in a constraint store. Following the seminal work of [@Hentbook], look-ahead techniques are available to the logic programmer in a large number of systems. This is not the case for look-back methods. Intelligent backtracking has been explored as a way of improving the backtracking behavior of logic programs [@BP84]. For some time, a lot of effort went into adding intelligent backtracking to Prolog implementations (see references in [@Br91]). 
However, the inherent space and time costs, which must be paid even when no backtracking occurs, impeded its introduction in real implementations. For a long time, look-ahead methods dominated in solving constraint satisfaction problems. However, already in [@RB86] we presented empirical evidence that look-back methods can be useful, and even that it can be worthwhile to combine the two. Starting in the nineties, there has been renewed interest in look-back methods, [*e.g.*]{}, [@ginsberg93], and in combining look-back with look-ahead, [*e.g.*]{}, [@Dechter02]. Look-back turned out to be the most successful of the approaches tried in a research project aiming at detecting unsolvable queries (queries that do not terminate, such as the query $\leftarrow \mathit{ odd}(X), \mathit{ even}(X)$ for a program defining odd and even numbers). The approach was to construct a model of the program over a finite domain in which the query was false. The central part of this model construction was to search for a pre-interpretation leading to the desired model, [*i.e.*]{}, with $D$ the domain, to find an appropriate function $D^n \rightarrow D$ for every $n$-ary functor in the program. A meta-interpreter was built which performed a backtracking search over the solution space. A control strategy was devised which resulted in the early detection of instances of program clauses which showed that the choices made so far could not result in the desired model. This meta-interpreter outperformed dedicated model generators on several problems [@BVWD98]. However, it remained very sensitive to the initial ordering in which the various components of the different functions were assigned. The point was that not all choices made so far necessarily contributed to the evaluation of a clause instance. We experimented with constraint techniques and also investigated the use of intelligent backtracking.
With a small programming effort, we could enhance the meta-interpreter to support a form of intelligent backtracking. As reported in [@BVWD99], this was the most successful approach. As Prolog is a popular tool for prototyping search problems and as look-back methods, though useful, are not available in off-the-shelf Prolog systems, we decided to describe for a wider audience how to enhance a Prolog search program with a form of intelligent backtracking. The technique crucially depends on the impure feature of Prolog (assert/retract) that allows storing information when a dead end is reached. The stored information is used to decide whether a choice point should be skipped when chronological backtracking returns to it. Hence we propose the technique as a black pearl. In the application mentioned above, the meta-interpreter performs a substantial amount of computation after making a choice, whereas the amount of computation added to support intelligent backtracking is comparatively small. This is not always the case. When the amount of computation in between choices is small and solutions are rather easy to find, the overhead of supporting intelligent backtracking may be larger than the savings due to the pruning of the search space. This is the case in toy problems such as n-queens. In the example we develop here, there is a small speed-up. We recall some basics of intelligent backtracking in Section \[sec:IB\]. In Section \[sec:nqueens\], we introduce the example program and in Section \[sec:ib\] we enhance it with intelligent backtracking. We conclude with a discussion in Section \[sec:discussion\]. Intelligent Backtracking {#sec:IB} ======================== Intelligent backtracking as described in [@Br81] is a very general schema. It keeps track of the reason for eliminating each value from a variable’s domain. Upon reaching a dead end, it identifies a culprit for the failure and [*jumps back*]{} to the choice point where the culprit was assigned a value.
Information about the variables assigned in between the culprit and the dead end can be retained if still valid, as in the dynamic backtracking of [@ginsberg93], which can be considered an instance of the schema. More straightforward in a Prolog implementation is to give up that information; this gives the backjumping algorithm (Algorithm 3.3) in [@ginsberg93] (intelligent backtracking with static order in [@RB86]). We follow [@ginsberg93] rather closely in introducing it. A constraint satisfaction problem (CSP) can be identified by a triple $(I,D,C)$ with $I$ a set of variables, $D$ a mapping from variables to domains and $C$ a set of constraints. Each variable $i \in I$ is mapped by $D$ into a domain $D_i$ of possible values. Each constraint $c \in C$ defines a relation $R_c$ over a set $I_c \subseteq I$ of variables and is satisfied for the tuples in that relation. A solution to a CSP consists of a value $v_i$ (an [*assignment*]{}) for each variable $i$ in $I$ such that: (1) for all variables $i$: $v_i \in D_i$ and, (2) for all constraints $c$: with $I_c = \{j_1, \ldots, j_k\} $, it holds that $(v_{j_1}, \ldots, v_{j_k}) \in R_c $. A partial solution to a CSP $(I,D,C)$ is a subset $J \subseteq I$ and an assignment to each variable in $J$. A partial solution $P$ is ordered by the order in which the algorithm that computes it assigns values to the variables and is denoted by a sequence of ordered pairs $(i,v_i)$. A pair $(i,v_i)$ indicates that variable $i$ is assigned value $v_i$; ${\ensuremath{I_P}}= \{i | (i,v_i) \in P\}$ denotes the set of variables assigned values by $P$. Given a partial solution $P$, an [*eliminating explanation*]{} (cause-list in [@Br81]) for a variable $i$ is a pair $(v_i,S)$ where $v_i \in D_i$ and $S \subseteq {\ensuremath{I_P}}$. It expresses that the assignments to the variables of $S$ by the partial solution $P$ cannot be extended into a solution where variable $i$ is assigned value $v_i$.
Contrary to [@ginsberg93], we use an [*elimination mechanism*]{} that tests one value at a time. Hence we assume a function $\mathit{ consistent}(P, i ,v_i)$ that returns true when $P \cup \{(i,v_i)\}$ satisfies all constraints over ${\ensuremath{I_P}}\cup \{i\}$ and a function $\mathit{ elim}(P, i ,v_i)$ that returns an eliminating explanation $(v_i,S)$ when $\neg\mathit{ consistent}(P, i ,v_i)$. Below, we formulate the backjumping algorithm; next we clarify its reasoning. $E_i$ is the set of eliminating explanations for variable $i$. \[alg:1\] Given as input a CSP $(I,D,C)$. 1. Set $P:=\emptyset$. 2. If ${\ensuremath{I_P}}= I$ return $P$. Otherwise select a variable $i \in I \setminus {\ensuremath{I_P}}$, set $S_i := D_i$ and $E_i := \emptyset$. 3. If $S_i$ is empty then go to step 4; otherwise, remove an element $v_i$ from it.\ If $\mathit{ consistent}(P, i ,v_i)$ then extend $P$ with $(i,v_i)$ and go to step 2; otherwise add $\mathit{ elim}(P, i ,v_i)$ to $E_i$ and go to step 3. 4. ($S_i$ is empty and $E_i$ has an eliminating explanation for each value in $D_i$.) Let $C$ be the set of all variables appearing in the explanations of $E_i$. 5. If $C = \emptyset$, return failure. Otherwise, let $(l,v_l)$ be the last pair in $P$ such that $l \in C$. Remove from $P$ this pair and any pair following it. Add $(v_l,C \setminus \{l\})$ to $E_l$, set $i:=l$ and go to step 3. In step 3, when the extension of the partial solution is inconsistent, then $\mathit{ elim}(P, i ,v_i)$ returns a pair $(v_i,\{j_1,\ldots,j_m\})$ such that the partial solution $(j_1,v_{j_1}), \ldots ,(j_m,v_{j_m}),(i,v_i) $ violates the constraints. The inconsistency of this assignment can be expressed by the clause: $\leftarrow j_1 = v_{j_1}, \ldots ,j_m=v_{j_m}, i=v_i $ (The head is false, the body is a conjunction). In step 4, when $S_i$ is empty, we have an eliminating explanation for each value $v_{i_k}$ in the domain $D_i$.
Hence we have a set of clauses of the form $$\label{cl2} \leftarrow j_{k,1} = v_{j_{k,1}}, \ldots, j_{k,m_k} = v_{j_{k,m_k}}, i=v_{i_k}$$ The condition that the variable $i$ must be assigned a value from domain $D_i$ with $n$ elements can be expressed by the clause (the head is a disjunction, the body is true): $$\label{cl1} i =v_{i_1} ,\ldots, i =v_{i_n} \leftarrow$$ Now, one can perform hyperresolution [@Rob65] between clause (\[cl1\]) and the clauses of the form (\[cl2\]) (for $k$ from 1 to $n$). This gives: $$\label{cl4} \leftarrow j_{1,1} = v_{j_{1,1}}, \ldots, j_{1,m_1} = v_{j_{1,m_1}}, \ldots, j_{n,1} = v_{j_{n,1}}, \ldots , j_{n,m_n} = v_{j_{n,m_n}}$$ This expresses a conflict between the current values of the variables in the set $\{j_{1,1}, \ldots, j_{1,m_1}, \ldots, j_{n,1}, \ldots,j_{n,m_n} \} = C$. Hence, with $l$ the last assigned variable in $C$, $C\setminus \{l\}$ is an eliminating explanation for $v_l$. The conflict $C$ is computed in step 4. When empty, the problem has no solution as detected in step 5. Otherwise, step 5 backtracks and adds the eliminating explanation $(v_l, C\setminus \{l\})$ to the set of eliminating explanations of variable $l$. One can observe that the algorithm does not use the individual eliminating explanations in the set $E_i= (v_{i_k}, S_k)$, but only the set $C$ which is the union of the sets $S_k$. As we have no interest in introducing more refined forms of intelligent backtracking, we develop Algorithm \[alg:2\] where $E_i$ holds the union of the sets $S_k$ in the eliminating explanations of variable $i$. To obtain an algorithm that closely corresponds to the Prolog encoding we present in Section \[sec:ib\], we reorganise the code and introduce some more changes. 
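As a concrete illustration of this backjumping scheme with a single (union) conflict set per variable, here is a small executable sketch. It is ours, written in Python rather than the paper's Prolog, and it uses plain n-queens as a stand-in constraint system:

```python
# Backjumping with one conflict set E[i] per variable (the union of the
# eliminating explanations): on a dead end, jump to the most recently
# assigned variable occurring in the conflict.  Illustrative sketch only;
# n-queens stands in for the constraints of the paper's example.

def backjump_queens(n):
    """Return the first n-queens solution (list of columns) or None."""
    P = []                                   # partial solution: (var, value) pairs
    remaining = {i: list(range(n)) for i in range(n)}   # untried values S_i
    E = {i: set() for i in range(n)}         # union of eliminating explanations

    def conflict(P, i, v):
        # earliest assigned variable attacking (i, v), together with i itself
        for j, w in P:
            if w == v or abs(w - v) == abs(j - i):
                return {i, j}
        return None

    i = 0
    while True:
        placed = False
        while remaining[i]:                  # try the untried values for i
            v = remaining[i].pop(0)
            c = conflict(P, i, v)
            if c is None:
                P.append((i, v))
                placed = True
                break
            E[i] |= c - {i}                  # record the eliminating explanation
        if placed:
            if len(P) == n:                  # all variables assigned: a solution
                return [v for _, v in sorted(P)]
            i = len(P)                       # static order: next variable
            remaining[i] = list(range(n))
            E[i] = set()
            continue
        C = E[i]                             # dead end: the conflict set
        if not C:
            return None                      # no culprit: the CSP is unsolvable
        while P and P[-1][0] not in C:
            P.pop()                          # give up assignments not in the conflict
        l, _ = P.pop()                       # the culprit
        E[l] |= C - {l}
        i = l                                # resume with the culprit's values

print(backjump_queens(8))
```

With a static variable order and values tried in increasing order, the sketch returns the same first solution as chronological backtracking; it merely skips choice points whose variable does not occur in the conflict set.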
The function $\mathit{ elim}(P,i,v_i)$ that returns an eliminating explanation $(v_i,S)$ for the current value of variable $i$ is replaced by a function $\mathit{ conflict}(P,i,v_i)$ that returns the set $\{i\} \cup S$ (the variables that participate in a conflict as represented by Equation \[cl2\]). This conflict is stored in a variable $C$ (step 3 of Algorithm \[alg:2\]). It is nonempty, and $i$ is the last assigned variable; hence the value of $i$ remains unchanged in step 4 and, in step 5, the eliminating explanation $C \setminus \{i\}$ is added to $E_i$. This reorganisation of the code has the result that a local conflict (the chosen value for the last assigned variable $i$ is inconsistent with the partial solution) and a deep conflict (all values for variable $i$ have been eliminated) are handled in a uniform manner: upon failure, the algorithm computes a conflict and stores it in variable $C$ (for the local conflict in step 3, for the deep conflict in step 5), backtracks to the variable computed in step 4 (the “culprit”) and resumes in step 5 with updating $E_i$ and trying a next assignment to variable $i$. \[alg:2\] Given as input a CSP $(I,D,C)$. 1. Set $P:=\emptyset$. 2. If ${\ensuremath{I_P}}= I$ return $P$. Otherwise select a variable $i \in I \setminus {\ensuremath{I_P}}$. Select a value $v_i$ from $D_i$. Set $S_i := D_i \setminus \{v_i\}$ and $E_i := \emptyset$. 3. If $\mathit{consistent}(P, i ,v_i)$ then extend $P$ with $(i,v_i)$ and go to step 2; otherwise set $C := \mathit{conflict}(P, i ,v_i)$. 4. If $C=\emptyset$ then return failure; otherwise let $(l,v_l)$ be the last pair in $P$ such that $l \in C$. Remove from $P$ this pair and any pair following it. Set $i:=l$. 5. Add $C \setminus \{i\}$ to $E_i$. If $S_i= \emptyset$ then $C := E_i$ and go to step 4; otherwise select and remove a value $v_i$ from $S_i$ and go to step 3. A search problem {#sec:nqueens} ================ The code below is, apart from the specific constraints, fairly representative of a finite-domain constraint satisfaction problem.
The problem is parameterized with two cardinalities: [VarCard]{}, the number of variables (the first argument of [problem/3]{}) and [ValueCard]{}, the number of values in the domains of the variables (the second argument of [problem/3]{}). The third argument of [ problem/3]{} gives the solution in the form of a list of elements $\mathit{ assign}(i,v_i)$. The main predicate uses [ init\_domain/2]{} to create a domain $[1, 2, \ldots, \mathit{ValueCard}]$ and [init\_pairs/3]{} to initialize $\mathit{Pairlist}$ as a list of pairs $i$-$D_i$ with $D_i$ the domain of variable $i$. The first argument of [extend\_solution/3]{} is a list of pairs $i$-$D_i$ with $i$ an unassigned variable and $D_i$ what remains of its domain; the second argument is the (consistent) partial solution (initialized as the empty list) and the third argument is the solution. The predicate is recursive; each iteration extends the partial solution with an assignment to the first variable on the list of variables to be assigned. The nondeterministic predicate [ my\_assign/2]{} selects the value. If desirable, one could introduce a selection function which dynamically selects the variable to be assigned next. Consistency of the new assignment with the partial solution is tested by the predicates [consistent1/2]{} and [consistent2/2]{}. They create a number of binary constraints. The binary constraints themselves are tested with the predicates [constraint1/2]{} and [ constraint2/2]{}. What they express is not so important. The purpose is to create a problem that is sufficiently difficult so that enhancing the program with intelligent backtracking pays off. For the interested reader, the predicate [consistent2/2]{} creates a very simple constraint that verifies (using [constraint1/2]{}) that the value of the newly assigned variable is different from the value of the previously assigned variable. The predicate [consistent1/2]{} creates a set of more involved constraints. 
The odd-numbered and even-numbered variables each encode the constraints of the n-queens problem. As a result, the solution of [*e.g.,*]{} [problem(16,8,S)]{} contains a solution for the 8-queens problem in the odd-numbered variables and a [*different*]{} (due to the constraints created by [ consistent2/2]{}) solution in the even-numbered variables. Substantial search is required to find a first solution. For example, the first solution for [problem(16,8,S)]{} is found after 32936 assignments (using a similar set-up of constraints, a solution is found for the 8-queens problem after only 876 assignments). Note that the constraint checking between the newly assigned variable and the other assigned variables is done in an order that is in accordance with the order of assigning variables. Hence [ consistent1/2]{} is not tail recursive. The order is not important for the algorithm without intelligent backtracking. However, it is crucial to obtain optimal intelligent backtracking: as with chronological backtracking, constraint checking will stop at the first conflict detected and an eliminating explanation will be derived from it. As an eliminating explanation with an earlier-assigned variable gives more pruning than one with a more recently assigned variable, the creation of constraints requires one to pay attention to the order. It is already done here to minimize the differences between this version and the enhanced version.

```prolog
problem(VarCard,ValueCard,Solution) :-
    init_domain(ValueCard,Domain),
    init_pairs(VarCard,Domain,Pairs),
    extend_solution(Pairs,[],Solution).

init_domain(ValueCard,Domain) :-
    ( ValueCard=0 ->
        Domain=[]
    ; ValueCard>0,
      ValueCard1 is ValueCard-1,
      Domain=[ValueCard|Domain1],
      init_domain(ValueCard1,Domain1)
    ).

init_pairs(VarCard,Domain,Vars) :-
    ( VarCard=0 ->
        Vars = []
    ; VarCard>0,
      VarCard1 is VarCard-1,
      Vars=[VarCard-Domain|Vars1],
      init_pairs(VarCard1,Domain,Vars1)
    ).

extend_solution([],Solution,Solution).
extend_solution([Var-Domain|Pairs],PartialSolution,Solution) :-
    my_assign(Domain,Value),
    consistent1(PartialSolution,assign(Var,Value)),
    consistent2(PartialSolution,assign(Var,Value)),
    extend_solution(Pairs,[assign(Var,Value)|PartialSolution],Solution).

my_assign([Value|_],Value).
my_assign([_|Domain],Value) :-
    my_assign(Domain,Value).

consistent1([],_).
consistent1([_],_).
consistent1([_,Assignment1|PartialSolution],Assignment0) :-
    consistent1(PartialSolution,Assignment0),
    constraint1(Assignment0,Assignment1),
    constraint2(Assignment0,Assignment1).

consistent2([],_).
consistent2([Assignment1|_],Assignment0) :-
    constraint1(Assignment0,Assignment1).

constraint1(assign(_,Value0),assign(_,Value1)) :-
    Value0 \== Value1.

constraint2(assign(Var0,Value0),assign(Var1,Value1)) :-
    D1 is abs(Value0-Value1),
    D2 is abs(Var0-Var1)//2,
    D1 \== D2.
```

Adding intelligent backtracking {#sec:ib}
===============================

Adding intelligent backtracking requires us to maintain eliminating explanations. In Algorithm \[alg:2\], a single eliminating explanation is associated with each variable. The eliminating explanation of a variable $i$ is initialised as empty in step 2, when assigning a first value to the variable. It is updated in step 5, when the last assigned value turns out to be the “culprit” of an inconsistency. This happens just before assigning the next value to variable $i$. This indicates that the right place to store eliminating explanations is as an extra argument in the predicate [ my\_assign/2]{}. In step 4, the algorithm has to identify the “last” variable $l$ of a conflict (the “culprit”), just before updating the eliminating explanation. We will also use the [my\_assign/2]{} predicate to check whether the variable it assigns corresponds to the culprit of the failure. Hence also the identity of the variable should be an argument. These considerations lead to the replacement of the [my\_assign/2]{} predicate by the following [my\_assign/4]{} predicate.
```prolog
my_assign([Value|_],_Var,_Explanation,Value).
my_assign([_|Domain],Var,Explanation0,Value) :-
    get_conflict(Conflict),
    remove(Var,Conflict,Explanation1),
    set_union(Explanation0,Explanation1,Explanation),
    my_assign(Domain,Var,Explanation,Value).
my_assign([],_Var,Explanation,_Value) :-
    save_conflict(Explanation),
    fail.
```

It is called from [extend\_solution/4]{} as [my\_assign(Domain,Var,\[\],Value)]{} (what remains of the domain is the first argument, the second argument is the variable being assigned, the third argument is the initially empty eliminating explanation and the fourth argument returns the assigned value). The initial call together with the base case perform the otherwise branch of step 2. The second clause, entered upon backtracking when the domain is nonempty, checks whether the variable being assigned is the culprit. To do so, it needs the conflict. As this information is computed just before failure occurs, it cannot survive backtracking when using the pure features of Prolog. One has to rely on the impure features for asserting/updating clauses, either [assert/1]{} and [retract/1]{} or more efficient variants provided by specific Prolog systems[^1]. The call to [ get\_conflict(Conflict)]{} picks up the saved conflict[^2]; next, the call [ remove(Var,Conflict,Explanation1)]{} checks whether [Var]{} is part of it. If not, [my\_assign/4]{} fails and backtracking returns to the previous assignment. If [Var]{} is the culprit, then the code performs step 5 of the algorithm: [remove/3]{} returns the eliminating explanation in its third argument, [set\_union/3]{} adds it to the current eliminating explanation and the recursive call checks whether the domain is empty. If not, the base case of [ my\_assign/4]{} assigns a new value. If the domain is empty, then the last clause is selected. The eliminating explanation becomes the conflict and is saved with the call to [ save\_conflict(Explanation)]{} that relies on the impure features[^3], and the clause fails.
Further modifications are in the predicates [constraint1/2]{} and [constraint2/2]{} that perform the constraint checking. If a constraint fails, the variables involved in it make up the conflict and have to be saved so that after re-entering [my\_assign/4]{} the conflict can be picked up and used to compute an eliminating explanation (step 3). As the last assigned variable participates in all constraints, it is part of the conflict. For example, the code for [constraint1/2]{} becomes:

```prolog
constraint1(assign(Var0,Value0),assign(Var1,Value1)) :-
    ( Value0 \== Value1 ->
        true
    ; save_conflict([Var0,Var1]),
      fail
    ).
```

The modification to [constraint2/2]{} is similar. Recall that the order in which constraints are checked determines the amount of pruning that is achieved. Finally, if one is interested in more than one solution, then a conflict also has to be stored when a solution is found. It consists of all variables making up the solution. Using a predicate [allvars/2]{} that extracts the variables from a solution, the desired behavior is obtained as follows:

```prolog
problem(VarCard,ValueCard,Solution) :-
    init_domain(ValueCard,Domain),
    init_pairs(VarCard,Domain,Pairs),
    extend_solution(Pairs,[],Solution),
    initbacktracking(Solution).

initbacktracking(Solution) :-
    allvars(Solution,Conflict),
    save_conflict(Conflict).
```

The enhanced program generates the same solutions as the original, and in the same order. For [problem(16,8,S)]{} the number of assignments goes down from 32936 to 4015 and the execution time from 140ms to 70ms; for [problem(20,10,S)]{}, the reduction is respectively from 75950 to 15813 and from 370ms to 310ms. The achieved pruning more than compensates for the (substantial) overhead of recording and updating conflicts[^4] and of the calls to [remove/3]{} and [set\_union/3]{}. Note that the speed-up decreases with larger instances of this problem. This is likely due to the increasing overhead of the latter two predicates.
Keeping the conflict set sorted (easy here because the variable numbers correspond with the order of assignment) such that the culprit is always the first element could reduce that overhead. Discussion {#sec:discussion} ========== In this black pearl, we have illustrated by a simple example how a chronological backtracking algorithm can be enhanced to perform intelligent backtracking. As argued in the introduction, look-back techniques are useful in solving various search problems. Hence exploring their application can be very worthwhile when building a prototype solution for a problem. The technique presented here illustrates how this can be realized with a small effort when implementing a prototype in Prolog. Interestingly, the crucial feature is the impurity of Prolog that allows the search to transfer information from one point in the search tree (a dead end) to another. It illustrates that Prolog is a multi-faceted language. On the one hand it allows for pure logic programming; on the other hand it is a very flexible tool for rapid prototyping. Note that the savings due to the reduction of the search space could be undone by the overhead of computing and maintaining the extra information, especially when the amount of computation between two choice points is small. The combination of look-back and look-ahead techniques can be useful, and algorithms integrating both can be found in, [*e.g.*]{}, [@Dechter02]. The question arises whether our solution can be extended to incorporate look-ahead. This requires some work; however, much of the design can be preserved. The initialization ([ init\_domains/3]{}) should not only associate variables with their initial finite domain, but also with their eliminating explanations (initially empty). Then the code for the main iteration could be as follows:

```prolog
extend_solution([],Solution,Solution).
extend_solution(Vars,PartialSolution,Solution) :-
    selectbestvar(Vars,var(Var,Values,Explanation),Rest),
    my_assign(Values,Var,Explanation,Value),
    consistent(PartialSolution,assign(Var,Value)),
    propagate([assign(Var,Value)|PartialSolution],
              NewPartialSolution),
    extend_solution(Rest,NewPartialSolution,Solution).
```

The predicate [selectbestvar/3]{} is used to dynamically select the next variable to assign. It returns the identity of the variable ($\mathit{ Var}$), the available values ($\mathit{ Values}$) and the explanation ($\mathit{ Explanation}$) for the eliminated values. When a partial solution is successfully extended, the predicate [ propagate/2]{} has to take care of the constraint propagation: eliminating values from domains and updating the corresponding explanations, after which the next iteration can start. Computing the eliminating explanation for each eliminated value requires great care and depends on the kind of look-ahead technique used. It is pretty straightforward for forward checking but requires careful analysis in the case of, [*e.g.*]{}, arc consistency, as no pruning will occur on backjumping when the elimination is attributed to [*all*]{} already assigned variables. Acknowledgments {#acknowledgments .unnumbered} =============== I am grateful to Bart Demoen, Gerda Janssens and Henk Vandecasteele for useful comments on various drafts of this pearl. I am very grateful to the reviewers. Indeed, as is often the case, their persistence and good advice greatly contributed to the clarity of the exposition. 1981\. Solving combinatorial search problems by intelligent backtracking.  [*12,*]{} 1, 36–39. 1991\. Intelligent backtracking revisited. In [*Computational Logic, Essays in Honor of Alan Robinson*]{}, [J.-L. Lassez]{} [and]{} [G. Plotkin]{}, Eds. MIT Press, 166–177. 1984\. Deduction revision by intelligent backtracking. In [*Implementation of Prolog*]{}, [J. Campbell]{}, Ed. Ellis Horwood, 194–215. , [Vandecasteele, H.]{}, [de Waal, D.
A.]{}, [and]{} [Denecker, M.]{} 1998. Detecting unsolvable queries for definite logic programs. In [*Principles of Declarative Programming, Proc. PLILP’98 and ALP’98*]{}, [C. Palamidessi]{}, [H. Glaser]{}, [and]{} [K. Meinke]{}, Eds. LNCS. Springer, 118–133. , [Vandecasteele, H.]{}, [de Waal, D. A.]{}, [and]{} [Denecker, M.]{} 1999. Detecting unsolvable queries for definite logic programs.  [*1999*]{}, 1–35. 2002\. Backjump-based backtracking for constraint satisfaction problems.  [*136,*]{} 2, 147–188. 1993\. Dynamic backtracking.  [*1*]{}, 25–46. 1965\. Automated deduction with hyper-resolution.  [*1*]{}, 227–234. 1987\. Empirical study of some constraint satisfaction algorithms. In [*Artificial Intelligence II, Methodology, Systems, Applications, Proc. AIMSA’86*]{}, [P. Jorrand]{} [and]{} [V. Sgurev]{}, Eds. North Holland, 173–180. 1989\. . MIT Press. \[lastpage\] [^1]: In our experiments, we made use of SICStus Prolog and employed [ bb\_put/2]{} and [bb\_get/2]{}. [^2]: We implemented it as [get\_conflict(Conflict) :- bb\_get(conflict,Conflict)]{}. [^3]: We implemented it as [save\_conflict(Conflict) :- bb\_put(conflict,Conflict)]{}. [^4]: Using [bb\_get]{} and [bb\_put]{} to count the number of assignments increases execution time of the initial algorithm for [problem(16,8,S)]{} from 140ms to 400ms.
--- author: - 'I. M. Stewart' date: 'Received January 0, 0000; accepted January 0, 0000' title: Matched filters for source detection in the Poissonian noise regime --- Introduction ============ Over the past few years, CCD cameras on satellite observatories such as ROSAT, ASCA, Chandra and most recently XMM-Newton have generated high-resolution digital images of the x-ray sky which are characterized by relatively faint background (for example, background fluxes of less than 1 count per CCD pixel are often seen in typical-duration XMM-Newton exposures). The number of events per pixel follows a Poisson probability distribution which, at such low flux levels, deviates markedly from its Gaussian bright-end limit. It seems likely that a significant part of the (non-instrumental) x-ray background is comprised of point sources too faint to be distinguished from one another by present techniques (Mushotzky et al [@mushotzky], Hasinger et al [@hasinger]). The desire to characterise this population of sources is one reason for attempting to push the sensitivity of x-ray point source detection to the lowest limits allowed by the data. Several source-detection procedures have been applied to x-ray images, eg: sliding-box (DePonte & Primini [@pros]; Dobrzycki et al [@celldetect]; see also task documentation for the exsas task *detect* and the SAS task *eboxdetect*); wavelet (Damiani et al [@damiani]; Starck & Pierre [@starck_and_pierre]; Pierre et al [@lss]; see also task documentation for the CIAO task *wavdetect* and the SAS task *ewavelet*); maximum-likelihood PSF fitting (Cruddace et al [@cruddace]; Boese and Doebereiner [@ml_fitting]); and Voronoi tessellation (Ebeling & Wiedenmann [@voronoi]). There are also occasional references to use of a ‘matched filter’ technique, but in the x-ray sphere at least this appears to consist just of convolution by the Point Spread Function or PSF (Vikhlinin et al [@vikhlinin]; Alexander et al [@chandra_matched]). 
A similar technique has also been applied to ASCA data (Ueda et al [@ueda]). As is shown in the present paper, the PSF approaches a true matched filter in the limit of white Gaussian noise but is sub-optimal at low levels of background. A useful review of source-detection procedures as applied to x-ray images can be found in Valtchanov et al ([@valtchanov]). Many of these techniques include a step in which the raw image is subjected to a convolution, often as a first step in preparing a detection-likelihood map. Even PSF fitting, in simplified form, can be shown to be equivalent to a convolution (Stetson [@stetson]). Voronoi tessellation seems to be the only technique which cannot readily be understood in this form. Extraction of source positions and detection likelihoods from the convolved image is in general not simple, because the characteristic width of the PSF of the mirror system is usually larger than the CCD pixel size. In a (raw) image where the PSF is resolved, neighbouring pixels are not statistically independent - detection of some source flux in one pixel makes it more likely to detect it in neighbouring pixels. However, if the null hypothesis (ie, that there are no sources in the field) is assumed, detected counts are just due to background. But the background (by definition) must be slowly varying over the scale of the PSF, otherwise it would be impossible to separate it from the sources; and in the limit of smooth background the event counts in the raw image *are* statistically independent. On the other hand, as has already been pointed out, a significant fraction of the cosmological x-ray background (perhaps approaching 100% at energies above 1 keV) actually consists of sources. This seems in direct contradiction to the null hypothesis. 
It is shown however in appendix \[confusion\] that the bulk of these background sources must be very faint, and that for x-ray telescopes of presently achievable effective areas, one can model the net contribution of this source population by a relatively sparse population of brighter sources superimposed upon a smooth background. I conclude therefore that it is acceptable to test the null hypothesis, thus in effect to detect sources, on a pixel-by-pixel basis. Some source-detection chains which assume the null hypothesis have adopted the following simplified scheme: 1. Determination of the expectation value of background at all pixels; 2. Convolution of the raw image; 3. Calculation of a null-hypothesis likelihood map, which is just the likelihood, given the assumed background, of each value of the convolved image arising from it by chance; 4. Location of sources by centroiding of troughs in the likelihood map; dealing with confused sources; parameter measurement, etc. The purpose of the present paper is to describe methods of calculating and applying a ‘matched filter’, that is a convolver which is optimized for the detection, on a pixel-by-pixel basis, of sources of a given PSF against a given background. Only steps 2 and 3 of the above sequence are considered - ie, it is assumed that a good estimate of the background is available, and that source centroiding, confusion resolution etc in the likelihood map may be independently optimized. The matched-filter detection scheme is compared with the sliding-box technique, specifically that used in the construction of the 1XMM catalog (Watson et al [@1xmm_paper]). The sliding box technique is arguably the simplest and most widely used of the detection techniques and thus provides a convenient baseline for comparison. 
Linear signal detection ======================= General {#sd_gen} ------- Suppose we have a parent function $C$ which is a function of some independent variables $\vec{x}$ (eg time, position, energy) and which comprises a background $B$ and signal $S$ such that $$\label{equ0} C(\vec{x}) = B(\vec{x}) + \alpha S(\vec{x}-\vec{x}_0)$$ where $$\int_{-\infty}^{\infty} d\vec{x} \ S(\vec{x}) = 1,$$ $\alpha$ is the signal amplitude and $\vec{x}_0$ is a reference point which serves to locate the signal in $\vec{x}$ space. Let each experimental measurement of $C$ return a random variable $c$ which is distributed according to a probability distribution with an expectation value $\langle c \rangle = C$. A common problem in signal detection occurs when one knows (or can estimate) $B(\vec{x})$ and $S(\vec{x})$ *a priori* and one wishes to calculate the most likely values of $\alpha$ and $\vec{x}_0$ from a set of samples $c_i$ of $C$ at different values of $x$.[^1] In the case of point source detection, $\vec{x}$ is a two-coordinate vector which specifies position in the focal plane of a camera, $S$ is the point spread function (PSF) of the camera, and the samples $c_{i,j}$ are measurements of flux made on a pixel grid in the focal plane. As outlined in the introduction, to assess whether there is any signal present in a given channel $i$ one calculates the probability of the parent background $B_i$ alone generating either the observed count $c_i$ or any value higher than this. This is called the probability of the null hypothesis ($P_\mathrm{null}$). A signal is judged to be ‘detected’ in channel $i$ if the null probability of the measured $c_i$ falls below a previously selected cutoff value. It is very often the case in practice that $S$ extends over several channels.
In this case it is usually possible to improve the signal-to-noise ratio in at least one of the channels spanned by the signal, hence the detection sensitivity, by performing a weighted sum of the counts measured over several adjacent channels. In the XMM-Newton case which we are going to consider, $\vec{x}$ extends along spatial dimensions $x$ and $y$ and energy dimension $E$. The weighted sum is to be computed for each spatial pixel $(i,j)$, the eventual aim being to produce a map of the null-hypothesis likelihood at each $(i,j)$. For computational purposes the spatial sum is most conveniently expressed as a convolution; the actual expression employed to calculate the weighted sum for each pixel is therefore $$\label{true_weighted_sum} c^\prime_{i,j} = \sum_{p=-M}^{M} \sum_{q=-M}^{M} \sum_{k=1}^{N_\mathrm{bands}} w_{p,q,k} c_{i-p,j-q,k}.$$ For present purposes however it is unnecessary to retain such complication. As far as the statistical analysis goes, equation \[true\_weighted\_sum\] at any given spatial pixel $(i,j)$ is simply a weighted sum of random variates: $$\label{weighted_sum} c^\prime = \sum_{i=1}^{N} w_i c_i.$$ It is also convenient for the present to assume that the signal is bracketed between $i=1$ and $i=N$, or, in other words, that the sampled parent function $C_i$ is given by $$C_i = B_i + \alpha S_i.$$ This is equivalent to assuming that a source is centred on pixel $(i,j)$ of equation \[true\_weighted\_sum\]. The standard deviation $\sigma^\prime$ of $c^\prime$ as given in equation \[weighted\_sum\] is estimated from the usual error propagation relations to be $$\label{std_dev} \sigma^{\prime 2} = \sum_{i=1}^N w^2_i \sigma^2_i$$ where $\sigma_i$ is the standard deviation of $c_i$. Clearly, whatever weight scheme is adopted, the stronger a signal is (ie, the larger the signal amplitude $\alpha$), the higher the probability that it will be detected (the smaller the value of $P_\mathrm{null}$).
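The error-propagation relation above is easy to check numerically. The following sketch (plain Python; the filter weights and background level are arbitrary illustrative choices, not values from the paper) compares the predicted $\sigma'$ with the empirical standard deviation of simulated weighted sums, using $\sigma_i^2 = B_i$ for Poisson counts.

```python
import math
import random

random.seed(1)

# Arbitrary illustrative filter weights and a flat Poisson background.
w = [0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05]
B = [0.3] * len(w)                 # counts/bin; sigma_i^2 = B_i for Poisson

# Predicted width from the error-propagation relation (eq. std_dev).
sigma_pred = math.sqrt(sum(wi * wi * bi for wi, bi in zip(w, B)))

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF (fine for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

n = 200000
sums = [sum(wi * poisson(bi) for wi, bi in zip(w, B)) for _ in range(n)]
mean = sum(sums) / n
sigma_emp = math.sqrt(sum((s - mean) ** 2 for s in sums) / n)
print(sigma_pred, sigma_emp)   # the two should agree to within ~1%
```

With $2\times10^5$ samples the statistical error on the empirical standard deviation is well below a percent, so any disagreement would point to an error in the propagation formula rather than to noise.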
Ideally we would like to be able to calculate some cutoff value of $\alpha$, above which the signal definitely would be detected, and below which it definitely would not. It is however not possible to do this, for the reason that any non-trivial function $c^\prime$ of the detected counts must itself be a random variable. A given value of $\alpha$ will not always give rise to the same value of $c^\prime$, and any value of $\alpha$ will produce any chosen value of $c^\prime$ within a sufficiently large ensemble. To get around the problem, and so to permit the comparison of different choices of weights, it is convenient to define a ‘counts amplitude’ $\beta$ such that $$\label{beta_def} \beta = \frac{c^\prime - B^\prime}{S^\prime}$$ where the significance of the primes here is that, for any function $f$, $$f^\prime = \sum_{i=1}^N w_i f_i.$$ It is not hard to show that $\langle \beta \rangle = \alpha$. Since any given null-hypothesis probability is associated with a definite value of the weighted sum of counts $c^\prime$, it is also therefore unambiguously associated with a definite value of $\beta$. This allows us to define the detection sensitivity $\beta_\mathrm{det}$ of any convolution as that value of $\beta$ which is associated with the chosen detection-cutoff value of $P_\mathrm{null}$. Clearly it is also the case that, for a given $B_i$ and $S_i$, there will be a set of weights (not necessarily unique) which gives maximum sensitivity, ie the smallest possible value of $\beta_\mathrm{det}$.
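The property $\langle \beta \rangle = \alpha$ can be verified with a short Monte Carlo. In this sketch (plain Python; the signal template, weights, amplitude and background are arbitrary illustrative values, not taken from the paper), counts are drawn with means $B_i + \alpha S_i$ and the average of $\beta = (c^\prime - B^\prime)/S^\prime$ is compared with $\alpha$.

```python
import math
import random

random.seed(2)

# Arbitrary illustrative choices (none of these are values from the paper).
S = [0.1, 0.2, 0.4, 0.2, 0.1]       # normalised signal template, sum = 1
w = list(S)                          # any fixed weights work for this check
B = [0.5] * len(S)                   # background, counts/bin
alpha = 2.0                          # true signal amplitude

Bp = sum(wi * bi for wi, bi in zip(w, B))   # B' = sum_i w_i B_i
Sp = sum(wi * si for wi, si in zip(w, S))   # S' = sum_i w_i S_i

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF (fine for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

n = 100000
total = 0.0
for _ in range(n):
    cp = sum(wi * poisson(bi + alpha * si)
             for wi, si, bi in zip(w, S, B))
    total += (cp - Bp) / Sp          # counts amplitude, eq. (beta_def)

mean_beta = total / n
print(mean_beta)   # ~ alpha = 2.0
```

The unbiasedness holds for any fixed weight choice, since $\langle c^\prime \rangle = B^\prime + \alpha S^\prime$ by linearity; only the scatter of $\beta$ depends on the weights.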
In this case any weighted sum returns $c^\prime$ values which also have a Gaussian distribution, with $$\sigma^{\prime 2} = \sigma^2 \sum_{i=1}^N w^2_i.$$ One can calculate the signal-to-noise ratio $snr$ as follows: $$snr = \frac{c^\prime - B^\prime}{\sigma^\prime}.$$ The null-hypothesis probability is then given by $$P_\mathrm{null}(snr) = 0.5 \left( 1 - \mathrm{erf} \left[ snr/\sqrt{2} \right] \right).$$ By differentiating $snr$ with respect to the weights $w_i$ and equating each of the $N$ resulting derivatives to zero one arrives at the well-known result that the optimum set of weights is proportional to the signal itself: ie $$w_i = k S_i \ \forall \ i,$$ where $k$ is some non-zero constant. Weighted-Poisson case with a single weight ------------------------------------------ In the case where the probability distribution of the observed values $c_i$ is Poissonian, the optimum weights are not so easy to come by, and in general will depend in a non-trivial fashion on the level of background $B_i$. Consider first the simplest case, in which there is only one weight ($N=1$ in equation \[weighted\_sum\]). The solution in this case is trivial, but provides a useful mathematical template for the more difficult case in which $N > 1$. Let $c$ be a random (necessarily integer) variable which has a Poisson probability distribution. A second variable $c^\prime$ which is just $c$ multiplied by a single weight $w$ retains the Poisson probability distribution $$\label{pnull_basic} p(c^\prime) = \frac{\nu^c e^{-\nu}}{c!},$$ but with $\nu$ now given by $$\nu = \langle c \rangle = \frac{\langle c^\prime \rangle }{w}.$$ In the null hypothesis, $\langle c \rangle = B$ and thus $\langle c^\prime \rangle = wB = B^\prime$. 
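The statement that a single scaling weight preserves the Poisson form can be checked directly. The sketch below (plain Python; $w$ and $\nu$ are arbitrary illustrative choices) draws values of $c^\prime = wc$ and compares the empirical frequencies with $\nu^c e^{-\nu}/c!$ evaluated at $c = c^\prime/w$; note that only multiples of $w$ ever occur, which is the origin of the stepped null-probability distribution discussed below.

```python
import math
import random

random.seed(6)

w, nu = 0.4, 2.0     # single weight and Poisson expectation value <c> = nu

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF (fine for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

n = 200000
counts = {}
for _ in range(n):
    cprime = w * poisson(nu)           # c' = w c; only multiples of w occur
    counts[cprime] = counts.get(cprime, 0) + 1

for cprime in sorted(counts)[:6]:
    c = round(cprime / w)              # recover the underlying integer count
    pmf = nu ** c * math.exp(-nu) / math.factorial(c)   # eq. (pnull_basic)
    print(cprime, counts[cprime] / n, pmf)
```

Each printed empirical frequency should match the Poisson probability at the corresponding integer count to within the Monte Carlo noise.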
If we extrapolate from the unscaled Poissonian case it is also clear that the null-hypothesis probability $P_\mathrm{null}$ is given by $$\label{equ1} P_\mathrm{null}(c^\prime) = 1-Q \left(\frac{c^\prime}{w}, \frac{\langle c^\prime \rangle }{w} \right) = 1-Q \left(\frac{c^\prime}{w}, \frac{B^\prime}{w} \right)$$ where $Q$ is the (complementary) incomplete gamma function, defined by $$Q(a,x) = \frac{1}{\Gamma(a)} \int_x^{\infty} dt \, e^{-t} t^{a-1}.$$ $\Gamma$ here represents the gamma function. Weighted-Poisson case with $N>1$ weights {#many_n_theory} ---------------------------------------- Formally speaking, we are now no longer in the Poissonian regime, since a weighted sum of two or more Poissonian variates does not itself in general have a Poissonian probability distribution. I have not been able to find a closed-form expression for the probability density function in this case. However, two empirical approximations are presented in the present subsection. ### The Fay and Feuer approximation {#ff_approx} Fay and Feuer ([@fay_and_feuer]) suggested on heuristic grounds that the null-hypothesis probability in this general case might be given to a good approximation by equation \[equ1\] with $w$ replaced by an appropriate equivalent weight $w_\mathrm{equiv}$. Their prescription for $w_\mathrm{equiv}$, $$w_\mathrm{equiv} = \frac{\sigma^{\prime 2}}{\langle c^\prime \rangle}.$$ where $$\langle c^\prime \rangle = \sum_{i=1}^N w_i \langle c_i \rangle$$ and, from equation \[std\_dev\] $$\sigma^{\prime 2} = \sum_{i=1}^N w_i^2 \langle c_i \rangle ,$$ is obtained essentially by equating respectively the first and second moments of the single-weight and many-weight probability functions. In fact when one compares the integrated $P_\mathrm{null}(c^\prime)$ function derived from the Fay and Feuer approximation against Monte Carlo data (see figure \[nullHypProbs\]), one sees that the Fay and Feuer curve seems to be displaced too far to high $c^\prime$. 
Essentially this is because equation \[equ1\] defines a continuous *envelope* function to the actual discontinuous, stepped $P_\mathrm{null}$ of a single Poisson variate. Consider now a weighted sum of $N$ Poisson variates, but with all the weights having the single value $w$: here the sum itself remains a Poisson variate, in which case the true integrated probability distribution remains coarsely stepped as in figure \[nullHypProbs\], the envelope to this being exactly given by equation \[equ1\], with $w_\mathrm{equiv} = w$. The coarseness is because there are no combinations of $c_i$ which can produce any value of $c^\prime$ other than $c^\prime = jw$ for integer $j$; the steps in the integrated probability distribution $P_\mathrm{null}$ are thus of width $w$. If we now allow the weights to be randomly perturbed by small amounts, the effect is to smear out the steps: ie, a range of values of $c^\prime$ close to $jw$ now becomes possible. Where the weights $w_i$ are allowed to become entirely random and independent (and are sufficiently numerous), the coarse steps disappear entirely[^2], and the probability curve appears to steer a middle course (eg the dotted line in figure \[nullHypProbs\]) through the coarse steps of the single-weighted Poisson distribution of the same equivalent weight. It seems clear then that the approximation formula suggested by Fay and Feuer could be made more closely applicable to the general case by shifting it towards lower $c^\prime$ values by an amount $0.5 w_\mathrm{equiv}$. The resulting formula is $$\label{equ3} P_\mathrm{null}(c^\prime) = 1-Q \left( \frac{c^\prime}{w_\mathrm{equiv}} + \frac{1}{2}, \frac{B^\prime}{w_\mathrm{equiv}} \right).$$ Figure \[nullHypProbs\] shows an example of a $P_\mathrm{null}$ distribution derived from a Monte Carlo exercise (dotted line). The Monte Carlo ensemble consisted of $10^6$ values of the weighted sum $c^\prime$.
25 random weights $w_i$ were chosen before the start of the exercise: these initially had a uniform probability distribution between 0 and 1 but were then normalized so that they summed to 1. (The only effect of this normalization is to change the x-axis scale.) For each member of the ensemble, a Poisson-random integer $c_i$ was generated for each of the 25 bins, using a constant background value $B = 0.3$ counts per bin as the Poisson expectation value. The weighted sum $c^\prime$ of these 25 random values was made and added to the ensemble. The cumulative probability was then formed by summing, from high towards low values of $c^\prime$, a 100-bin, normalized histogram of the ensemble values. The modified Fay and Feuer curve (equation \[equ3\]) has not been plotted, but its path can be easily visualised by mentally shifting the dashed curve to the left by half the width of the coarse steps. Empirical tests with randomly-chosen weights and background suggest that equation \[equ3\] is a reasonable fit to actual distributions of $c^\prime$ for values of null probability greater than about $10^{-2}$. Better fits were observed when the distribution of both weights and background was even, and $B^\prime$ was greater than about 0.1. At values of $P_\mathrm{null}$ smaller than about $10^{-2}$, test data appear to diverge from equation \[equ3\], such that the actual null probability at a given value of $c^\prime$ is larger than the predicted value. The divergence appears to worsen at low values of background, or if the weights are not very homogeneously distributed. Comparison of the respective expressions for the third moments $\mu_3 = \langle (c^\prime)^3 \rangle$ of the single-weight and multiple-weight distributions shows that $\mu_{3,\mathrm{multi}}$ is always greater than $\mu_{3,\mathrm{single}}$ for positive, nonequal $w_i$. It is therefore almost certain that all the higher moments differ as well. 
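The Monte Carlo comparison described above is straightforward to reproduce on a smaller scale. The sketch below (plain Python) regenerates the experiment with 25 normalised random weights and $B = 0.3$ counts/bin, but with a $5\times10^4$ ensemble rather than $10^6$; the regularised incomplete gamma function $Q$ is evaluated with the standard series/continued-fraction recipe. It then compares the empirical tail probability with the shifted Fay and Feuer formula, equation \[equ3\], near $P_\mathrm{null} = 10^{-2}$, where the approximation is expected to be reasonable.

```python
import math
import random

random.seed(4)

# 25 random weights, uniform in (0, 1) and then normalised to sum to 1,
# with a flat background of B = 0.3 counts/bin, as in the text's example.
raw = [random.random() for _ in range(25)]
s = sum(raw)
w = [wi / s for wi in raw]
B = 0.3
Bp = B * sum(w)                               # B' = sum_i w_i B_i
w_equiv = B * sum(wi * wi for wi in w) / Bp   # sigma'^2 / <c'>

def gammq(a, x):
    """Regularised upper incomplete gamma Q(a, x), via the standard
    series (x < a+1) / continued-fraction (x >= a+1) evaluation."""
    if x <= 0.0:
        return 1.0
    if x < a + 1.0:
        ap, term, total = a, 1.0 / a, 1.0 / a
        for _ in range(1000):
            ap += 1.0
            term *= x / ap
            total += term
            if abs(term) < abs(total) * 1e-15:
                break
        return 1.0 - total * math.exp(-x + a * math.log(x) - math.lgamma(a))
    tiny = 1e-300
    b = x + 1.0 - a
    c, d = 1.0 / tiny, 1.0 / b
    h = d
    for i in range(1, 1000):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < 1e-15:
            break
    return h * math.exp(-x + a * math.log(x) - math.lgamma(a))

def p_null_ff(cp):
    """Shifted Fay & Feuer approximation, eq. (equ3)."""
    return 1.0 - gammq(cp / w_equiv + 0.5, Bp / w_equiv)

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

# Monte Carlo ensemble of weighted background sums (null hypothesis).
n = 50000
sums = [sum(wi * poisson(B) for wi in w) for _ in range(n)]

# Find the threshold where the approximation predicts P_null ~ 1e-2 and
# compare with the empirical tail probability there.
t = 0.0
while p_null_ff(t) > 0.01:
    t += 0.005
emp = sum(1 for cp in sums if cp >= t) / n
print(t, p_null_ff(t), emp)   # predicted vs empirical tail probability
```

At this probability level the predicted and empirical tails should agree to within a modest factor; pushing the comparison to much smaller $P_\mathrm{null}$ exposes the divergence discussed in the text.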
If this is the case a divergence at high $c^\prime$ between the many- and single-weight distributions is to be expected. Attempts to fit a function of the form of equation \[equ3\] to test data were not satisfactory - that is, the best fit was still clearly not a good approximation to the parent distribution at low $P_\mathrm{null}$. Since source detection is necessarily concerned with low values of the probability of the null hypothesis, it is desirable to find a better approximation in this region. ### The $\chi^2$ approximation {#chi2_approx} The $\chi^2$-like integrated probability distribution $$\label{chi_for_many_W} P_\mathrm{null}(c^\prime) = Q \left(\frac{B^\prime}{w_\mathrm{equiv}}, \frac{c^\prime}{w_\mathrm{equiv}} \right)$$ proved to be an acceptable approximation down to at least $P_\mathrm{null} = 10^{-5}$ over a wide range of background values. An example is shown in figure \[low\_P\_null\]. The Monte Carlo simulation used to generate the results of figure \[low\_P\_null\] was similar to that of figure \[nullHypProbs\]. It consisted of an ensemble of $10^7$ weighted sums. The weights in this case constituted a matched filter for detection of sources in images with an average background of 0.1 counts/pixel, the source shape being the on-axis point spread function at 1.25 keV of the XMM-Newton EPIC PN camera. It might be possible to improve the fit of equation \[chi\_for\_many\_W\] by choosing a different value of $w_\mathrm{equiv}$, but this possibility has not been explored. It is not at present known why the $\chi^2$ cumulative distribution given in equation \[chi\_for\_many\_W\] seems to provide such a good model of the weighted-sum $P_\mathrm{null}$. ### Consequences of divergence of an approximation {#consequences} Suppose one inverts a formula such as equation \[equ3\] in an attempt to deduce the detection sensitivity at a given value of $P_\mathrm{null}$. 
What in particular goes wrong if the formula is not a good approximation to the true probability distribution? The answer is that the returned sensitivity value is correct, but the assumed $P_\mathrm{null}$, which controls the number of false positives expected, will be incorrect. In the example of figure \[low\_P\_null\]**a**, cutting the source list at an apparent null likelihood of 8 will yield a sensitivity of about 0.31 weighted counts if the Fay and Feuer expression is used. The true null likelihood at that value of $c^\prime$ is that of the Monte Carlo, ie about 6.7. This means that about 3.7 ($=e^{8-6.7}$) times more false positives will be encountered than expected. The $\chi^2$ formula on the other hand diverges from the Monte Carlo data in the other direction. Use of this formula gives a conservative result, being apparently slightly less sensitive at $c^\prime=0.36$, with the true null likelihood cutoff of about 8.6 yielding approximately 0.55 times as many false positives as expected. In order to get something like the desired rate of false detections, one must skew the apparent null likelihood cutoff, to about 10 in the Fay and Feuer case and 7.5 in the $\chi^2$ case. The sensitivity in both cases will then be ‘correct’ at about 0.34 weighted counts. Optimization of the weights {#optimization} --------------------------- As described in section \[sd\_gen\], for a given background $B$ and signal template $S$, any given value of the weighted sum of counts $c^\prime$ is associated with a unique value of the null-hypothesis probability $P_\mathrm{null}$. In the preceding subsection two equations (\[equ3\] and \[chi\_for\_many\_W\]) were given which approximate this relation for the situation in which the observed data are random Poisson variates. In order to calculate optimum weights in this case we must first invert an equation of this form to obtain $c^\prime$, then invert equation \[beta\_def\] to obtain the counts amplitude $\beta$.
Inversion of either equation \[equ3\] or \[chi\_for\_many\_W\] to obtain $c^\prime$ involves an inversion of the incomplete gamma function $Q$. For the sake of practicality let us define the two inverse functions $Q_1^{-1}$ and $Q_2^{-1}$ as follows: if $$P = 1-Q(a,x),$$ then let $$a = Q_1^{-1}(1-P,x)$$ and $$x = Q_2^{-1}(1-P,a).$$ No closed-form expressions for either $Q_1^{-1}$ or $Q_2^{-1}$ seem to be known; for present purposes I have performed the inversions numerically by means of a Ridders-method routine (Ridders [@ridders]) as given in Press et al ([@numerical_recipes]). After insertion of the appropriate inverse gamma function, the counts amplitude $\beta$ is thus given for the modified Fay and Feuer approximation by $$\label{beta_def_ff} \beta = \frac{w_\mathrm{equiv} \left[ Q_1^{-1} \left( 1-P_\mathrm{null}, \frac{B^\prime}{w_\mathrm{equiv}} \right) - \frac{1}{2} \right] - B^\prime}{S^\prime}$$ and for the $\chi^2$ approximation by $$\label{beta_def_chi2} \beta = \frac{w_\mathrm{equiv} \left[ Q_2^{-1} \left( P_\mathrm{null}, \frac{B^\prime}{w_\mathrm{equiv}} \right) \right] - B^\prime}{S^\prime}.$$ The inputs to these formulae are (i) the set of weights, (ii) the value of $P_\mathrm{null}$, and (iii) the background and signal-shape information. Only the first two of these are under our control. Before we can begin to seek the optimal set of weights, we must choose a value of $P_\mathrm{null}$. The value we choose should be governed by the maximum fraction of false detections we are prepared to tolerate. The desired cutoff value $P_\mathrm{det}$ of null probability can be obtained from the maximum acceptable number $n_\mathrm{false}$ of false detections by dividing by the number of ‘beams’ in a typical image, which is just the image solid angle divided by the ‘beam’ solid angle.
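The numerical inversion itself needs only a one-dimensional root finder. The paper uses a Ridders-method routine; the sketch below (plain Python) uses plain bisection instead, and for simplicity restricts $a$ to integer values, where $Q(a,x)$ reduces to the finite Poisson sum $\sum_{k=0}^{a-1} x^k e^{-x}/k!$ — the non-integer case needs a full incomplete-gamma implementation.

```python
import math

def Q_int(a, x):
    """Q(a, x) for integer a >= 1: equals P(Poisson(x) <= a-1)."""
    term = math.exp(-x)
    total = term
    for k in range(1, a):
        term *= x / k
        total += term
    return total

def Q2_inv(q, a, tol=1e-10):
    """Solve Q(a, x) = q for x by bisection; Q(a, x) decreases in x."""
    lo, hi = 0.0, 1.0
    while Q_int(a, hi) > q:      # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Q_int(a, mid) > q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: Q(1, x) = exp(-x), so Q2^{-1}(1-P, 1) should equal -ln(1-P).
P = 0.1
x = Q2_inv(1.0 - P, 1)
print(x, -math.log(1.0 - P))
```

Bisection is slower than Ridders' method but cannot fail once the root is bracketed, which is all that matters inside an outer weight-optimization loop.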
In the present case, in which images are convolved before the null hypothesis is tested, the beam solid angle must be some kind of equivalent solid angle of the appropriately convolved PSF. For XMM-Newton at least, where the shape of the PSF varies across the field of view, its average value is probably best estimated via Monte Carlo trials. If however, for the sake of obtaining at least a rough approximation to the ratio between $P_\mathrm{det}$ and $n_\mathrm{false}$, we use the value $2.34 \times 10^{-5}$ deg$^2$ calculated in appendix \[confusion\] for the equivalent solid angle of the on-axis EPIC PN PSF as a lower limit to the beam $\Omega$, we find that a null-probability cutoff of $\exp(-8.0)$, the value used in the making of the 1XMM catalog, corresponds to at most 2 expected false detections per image. Let us define $\beta_\mathrm{det}$ as the counts amplitude $\beta$ which, for a given set of weights, corresponds to the chosen value of $P_\mathrm{det}$. $\beta_\mathrm{det}$ can be viewed as the amplitude of a source which is just detectable under these conditions. The optimum weights are then clearly those which yield the smallest value of $\beta_\mathrm{det}$. In the present study, Powell’s direction-set method as modified by Press et al ([@numerical_recipes], chapter 10.5) was used to optimize sets of weights by minimizing $\beta_\mathrm{det}$, as defined by either equation \[beta\_def\_ff\] or \[beta\_def\_chi2\], as a function of the weights. Detection of x-ray point sources in XMM-Newton data =================================================== The EPIC x-ray cameras of XMM-Newton are described in Strüder et al ([@strueder]) and Turner et al ([@turner]). There are three cameras: the telescope in each case is similar but two of the CCD detectors comprise 7 chips of MOS type, whereas the third has 12 chips of pn composition.
The ‘good’ area of each of the three cameras occupies about 94% (MOS) and 82% (pn) of a 30$\arcmin$ diameter field of view. CCD pixel dimensions are 1.1$\arcsec$ square (MOS) and 4.1$\arcsec$ square (pn). The source-detection strategy used for the 1XMM catalog {#1xmm_strategy} ------------------------------------------------------- For each exposure, images in sky coordinates were made in five separate energy bands. The images had square pixels of $4 \times 4$ arcsec dimension. Images were made by transforming the position on the detector of each selected x-ray event into sky coordinates, then binning up the events into the image pixels. The position of each event was dithered within the boundaries of the CCD pixel in which it was detected. Variations over time of the spacecraft attitude were also taken into account. Source detection was performed on the five images in parallel. The source detection comprised a convolution and detection stage (steps 1 to 3 of the sequence described in the introduction), followed by a source-parameterisation stage (step 4 in the sequence). The detection stage was performed by the XMM-Newton SAS task *eboxdetect*, the parameterisation by *emldetect*. Both stages involve the calculation of a detection likelihood; since the value calculated by *emldetect* is arguably more sensitive than that of *eboxdetect*, the ideal procedure is to run *eboxdetect* with a deliberately low detection threshold, submit the resulting long list of source candidates to *emldetect* and to then accept as genuine sources only those for which the *emldetect* detection likelihood exceeded a second, more reasonable threshold. The *eboxdetect* threshold should not be so close to the *emldetect* threshold that the two selections interfere. In 1XMM practice there is some doubt as to whether the two detection thresholds were sufficiently far apart.
For this reason, and because I don’t understand enough of the *emldetect* likelihood calculation to be able to replicate it, I have in the present paper only considered the sliding-box stage of the 1XMM detection procedure. The first step of this procedure was to make maps, 1 per energy band, of the estimated background in each pixel. The five images and five background maps were then each convolved with a square, $5 \times 5$ array of unit values.[^3] In mathematical form this processing can be represented as follows: $$c^\prime_{i,j,k} = \sum_{p=-2}^2 \sum_{q=-2}^2 c_{i-p,j-q,k}$$ and $$B^\prime_{i,j,k} = \sum_{p=-2}^2 \sum_{q=-2}^2 B_{i-p,j-q,k}$$ where $i$ and $j$ indicate the position on the image pixel grid and $k$ refers to the energy band. The overall null-hypothesis probability was calculated as follows. Firstly, because all the weights are equal to 1, $c^\prime$ remains a Poissonian variate; equation \[equ1\] is therefore exact at permitted values of $c^\prime$, with $$w = w_\mathrm{equiv} = 1.$$ The probabilities $P_{i,j,k}$ returned by equation \[equ1\] were converted to likelihoods (ie, negative logs of the probabilities). For each image pixel, a summed likelihood $L_{i,j}$ was then generated: $$\label{summed_like} L_{i,j} = -\sum_{k=1}^M \ln(P_{i,j,k}),$$ where $M$ is the number of energy bands (here 5). The probability distribution of $L$ is however open to some question. Bevington and Robinson ([@bevington]) show that a sum of the same form as equation \[summed\_like\] is distributed approximately as $\chi^2/2$ (up to an additive constant) with $\nu = M$ degrees of freedom. Wilks ([@wilks]) as cited in Cash ([@cash]) comes to similar conclusions. 
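The box-sum and band-combination steps above can be sketched in a few lines. Below is a minimal illustration in plain Python (the per-band background values are toy numbers, not the 1XMM band weights): it forms the unit-weight $5\times5$ sums for one pixel in each band, converts each band's summed count into a null probability (exact here, since with $w=1$ the box sum is still a Poisson variate and equation \[equ1\] applies), and combines the bands as $L = -\sum_k \ln P_k$.

```python
import math
import random

random.seed(3)

NBANDS = 5
B_PER_PIX = [0.02, 0.05, 0.04, 0.03, 0.01]   # toy per-band backgrounds

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF (fine for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

def poisson_sf(c, mu):
    """P(X >= c) for X ~ Poisson(mu); equals 1 - Q(c, mu) of eq. (equ1)."""
    if c <= 0:
        return 1.0
    term = math.exp(-mu)
    cdf = term
    for k in range(1, c):
        term *= mu / k
        cdf += term
    return max(1e-300, 1.0 - cdf)

# One 5x5 patch of pure background counts per band (the null hypothesis).
patches = [[poisson(B_PER_PIX[k]) for _ in range(25)] for k in range(NBANDS)]

L = 0.0
for k in range(NBANDS):
    c_prime = sum(patches[k])           # unit-weight 5x5 box sum
    B_prime = 25 * B_PER_PIX[k]
    P_k = poisson_sf(c_prime, B_prime)  # exact: w = 1 keeps c' Poissonian
    L += -math.log(P_k)                 # eq. (summed_like)

print(L)   # summed detection likelihood for this pixel
```

In a full detection run this calculation is repeated for every image pixel via a convolution, producing the likelihood map that is then searched for sources.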
Perhaps for this reason the authors of the XMM-Newton SAS task *eboxdetect*, which performed the calculation of detection likelihoods for the 1XMM catalog, used the following approximate formula for the integrated probability of the null hypothesis across 5 bands: $$\label{chi_null_prob} P_{\mathrm{null},i,j} = Q \left(5, L_{i,j} \right).$$ Note however that the first term represents 1/2 the degrees of freedom, hence should be 2.5 not 5; also, the $P$ in equation \[summed\_like\] are integrated probabilities, not probability densities. Although a full analysis of the 1XMM source-detection technique is beyond the scope of the present paper, a Monte Carlo simulation of the statistical fluctuation of background was performed in order to check the accuracy of equation \[chi\_null\_prob\]. An ensemble of $5\times 10^7$ values of $L$ was accumulated. The procedure for each member of the ensemble was as follows. For each of the 5 bands, a $5\times 5$ array of Poisson-random integers was generated. To calculate the expectation value $B_{i,j,k}$ for pixel $(i,j)$ in the $k$th band, the $k$th normalized background weight for the XMM PN camera, as listed in table \[table:1\], was multiplied by 0.1 counts/pixel. $c^\prime_k$ was calculated by summing the 25 counts values of the $k$th band, $B^\prime_k$ being of course simply equal to $25 \times B_{i,j,k}$. Likelihoods were then generated for each band by use of equation \[equ1\], and summed to give $L$. The accumulated histogram of the resulting data, shown in figure \[Chi2MonteCarlo\], shows that there is a significant difference between the Monte Carlo results and the prediction of equation \[chi\_null\_prob\]. Attempts were made to fit a variety of functions to the Monte Carlo data. One must of course fit to the raw histogram data, the adjacent channels of which are statistically independent, rather than to the integrated curve displayed in figure \[Chi2MonteCarlo\]. 
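The Monte Carlo check just described is easy to reproduce in miniature. The sketch below (plain Python) uses a much smaller ensemble than the paper's $5\times10^7$, and a flat background of 0.1 counts/pixel in every band rather than the per-band weights of table 1; it accumulates null-hypothesis values of $L$ and compares the empirical tail probability at a threshold $L_0$ with the prediction of equation \[chi\_null\_prob\].

```python
import math
import random

random.seed(5)

NBANDS = 5
B_PIX = 0.1          # counts/pixel, flat across bands (a simplification)

def poisson(mu):
    """Draw a Poisson variate by inversion of the CDF (fine for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    cdf = p
    while u > cdf:
        k += 1
        p *= mu / k
        cdf += p
    return k

def poisson_sf(c, mu):
    """P(X >= c) for X ~ Poisson(mu); equals 1 - Q(c, mu) for integer c."""
    if c <= 0:
        return 1.0
    term = math.exp(-mu)
    cdf = term
    for k in range(1, c):
        term *= mu / k
        cdf += term
    return max(1e-300, 1.0 - cdf)

def Q_int(a, x):
    """Q(a, x) for integer a >= 1: P(Poisson(x) <= a-1)."""
    term = math.exp(-x)
    total = term
    for k in range(1, a):
        term *= x / k
        total += term
    return total

# Ensemble of summed likelihoods under the null hypothesis.
n = 20000
Ls = []
for _ in range(n):
    L = 0.0
    for _ in range(NBANDS):
        c_prime = sum(poisson(B_PIX) for _ in range(25))  # 5x5 box sum
        L += -math.log(poisson_sf(c_prime, 25 * B_PIX))
    Ls.append(L)

# Threshold where eq. (chi_null_prob) predicts P_null ~ 1e-2:
L0 = 0.0
while Q_int(5, L0) > 0.01:
    L0 += 0.05
emp = sum(1 for L in Ls if L >= L0) / n
print(L0, Q_int(5, L0), emp)   # compare predicted and empirical tails
```

As the text reports, the two tail probabilities need not agree closely; the size of the discrepancy is exactly what the calibration Monte Carlos are meant to quantify.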
It is not easy to fit a smooth function to the Monte Carlo data. Partly this is because the raw data are rather ‘noisy’, particularly at low values of $L$, a phenomenon which arises because in this range the input count values to each of the five channels are usually small integers which, when combined, give rise to a sparse distribution in the allowed resulting $L$ values. The effect of this can be seen in the jumpy nature of the accumulated Monte Carlo data shown in figure \[Chi2MonteCarlo\]. It is also not straightforward to define a statistic to be minimized in order to generate the fit. A straightforward sum of squared residuals, or even a chi squared sum (ie, sum of squared residuals, each divided by the variance in that channel), tends to result in the low-$L$ part of the data dominating the fit. This is undesirable if one is interested in minimizing false detections due to statistical fluctuations in background, in which case intermediate values of $L$ are more important. In the end I chose arbitrarily to minimize a sum of terms $$Z = \sum_i \frac{(f_i - y_i)^2}{\sigma_i^4}$$ where $f$ is the function to be fitted, $y$ represents the Monte Carlo data and $\sigma^4$ is the square of the variance. The fit was however restricted to values of $L$ less than 15, to avoid statistical noise in the Monte Carlo values at high $L$. There is an infinity of functions one could choose to fit to the data, but in fact a good fit as shown was obtained simply by allowing the first term $\nu /2$ in the $Q$ function in equation \[chi\_null\_prob\] to vary. The best fit, shown in figure \[Chi2MonteCarlo\], occurred at $\nu = 5.81$. This however suggests no obvious systematic correction to equation \[chi\_null\_prob\] and, since I have at this time no better analysis of the problem to put forward, and since the unmodified equation \[chi\_null\_prob\] was in fact used in the 1XMM source detection, I have retained it for comparison with the matched-filter method.
It seems clear however that the rate of statistical false detections in 1XMM was probably lower than originally estimated, and thus that a lower detection cutoff could have been used in that survey without penalty. There appears to be no better way to correct estimates of the sensitivity of this method at the moment than by calibrating it via several Monte Carlos performed at different values of background (see eg section \[matched\_at\_5\]). A final point to note about the 1XMM detection procedure: it did not make use of the fact that, for any given exposure, images from more than one of the XMM-Newton x-ray cameras were usually available. All source detection was performed instead on a camera-by-camera basis. Clearly one would expect a technique which made use of the parallel information to yield an improvement in detection sensitivity.

Matched filters for XMM-Newton source detection
-----------------------------------------------

There are three obvious ways to improve on the 1XMM source-detection procedure by use of matched filters (weighted sums). Firstly, taking one energy band at a time, since we know to a reasonable approximation the point spread function (PSF) of the XMM-Newton telescopes, we could choose in each energy band a set of weights optimized for detecting that shape of signal; secondly, we might do the same thing across the energy bands by the expedient of assuming a common spectrum for the sources; thirdly, we might in similar fashion add together images from the three x-ray cameras of XMM-Newton. Only the first two alternatives are explored in the present study.

### Examples of matched filters for 1 energy band {#matched_1_band}

We take a square array of dimension $2N+1$.
Let us assume a parent function $C_{i,j}$ as follows: $$C_{i,j} = B_{i,j} + \alpha S_{i,j}, \ \forall \ i,j \ \mathrm{in} \ [-N,N]$$ with the normalization condition $$\sum_{i=-N}^N \sum_{j=-N}^N S_{i,j} = 1.$$ For simplicity, the background $B$ is assumed to be constant across the array. Let $S_{i,j}$ be the PSF on the optic axis of the PN telescope of XMM-Newton, at an energy of 1.25 keV (the mean energy of band 2 as defined in the 1XMM catalog; Watson et al [@1xmm_paper]). The PSF model used in the present exercise was originally calculated via a ray-tracing approach (Gondoin et al [@gondoin]), and is the same that was used to determine positions and fluxes of the sources in 1XMM. In the present exercise the PSF was centred on the middle of the $(i,j)=(0,0)$ pixel.[^4] We assume that we have an array of measured count values $c_{i,j}$, each of which is a random Poisson variable with $\langle c_{i,j} \rangle = C_{i,j}$. We make a weighted sum of the $c_{i,j}$ as follows: $$c^\prime = \sum_{i=-N}^N \sum_{j=-N}^N w_{i,j} c_{-i,-j},$$ and likewise for $B^\prime$. The two cases we want to compare are, firstly, the 1XMM-like case in which $w_{i,j} = 1$ and $N = 2$; and secondly the case in which the $w_{i,j}$ are optimized according to the procedure described in section \[optimization\]. But before the second procedure can be used, a value of $N$ must be chosen. It is not hard to see that, in principle, the larger the $N$ the better. To see this, suppose we compare two convolvers: one optimized on a ($2N+1$)-square array, the other on a ($2(N-1)+1$)-square array. The optimized values of the small convolver can be thought of as a subset of the possible values of the large convolver; one just sets the extra ring of pixels to zero.
So, if the smaller convolver were a better source-finder than the large, the optimization routine would have set the outer pixel values to zero automatically, giving rise to the same sensitivity of detection with the large as with the small. Thus a small optimized convolver can never be better than a large one; and the only limiting factor to $N$ becomes computational practicality. A value of $N = 4$ for the optimized convolvers has been chosen as the largest value which can be processed in the 5-band case (see section \[matched\_at\_5\]) in a reasonable time. The difference in size between the 1XMM and the optimized convolvers makes it more difficult to compare their efficiency. It seems a bit pointless to hobble the optimized convolver by restricting its size to the 1XMM $5\times5$ - after all, the whole aim of the exercise is to achieve the maximum practical improvement in sensitivity. It might be argued though that the discrepant sizes are unfair to the 1XMM ‘box’ convolver - if bigger convolvers are sometimes better, might not much of the gain in going from a $5\times5$ box-type to a $9\times9$ optimized convolver simply be due to the increase in size? It turns out however that box-type convolvers larger than $5\times5$ perform uniformly worse than the $5\times5$, as is shown in figure \[SensyVsBoxSize\]; in fact, if we go in the other direction, to a $3\times3$ box, we get a slightly improved performance at all but the lowest background fluxes. Hence one can conclude that the comparison between convolvers of different size is probably rather kinder to the 1XMM algorithm than otherwise. Some examples of optimized $9\times9$ kernels $w_{i,j}$ are compared to the PSF $S_{i,j}$ in figure \[Neq1OnAxisContourfig\]. One would expect that $w \to 1$ as $B \to 0$ (all the counts are equally valuable) and $w \to S$ as $B \to \infty$ (Gaussian limit). This is consistent with the form of the high- and low-background kernels shown in the figure.
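The limiting behaviour of the kernels can be illustrated with a toy calculation. Here a circular Gaussian of arbitrary width stands in for the real ray-traced PSF (the truly optimized kernels of the text are not reproduced here), and three candidate weight sets are compared via the expected significance of the weighted sum in the Gaussian approximation.

```python
import math

N = 4            # convolver half-size, as in the text
SIGMA = 2.0      # pixels; Gaussian stand-in width (assumption, not the XMM PSF)

# Normalized stand-in PSF S_{i,j} on the (2N+1)-square array.
S = {(i, j): math.exp(-(i * i + j * j) / (2.0 * SIGMA ** 2))
     for i in range(-N, N + 1) for j in range(-N, N + 1)}
norm = sum(S.values())
S = {k: v / norm for k, v in S.items()}

def snr(w, alpha, B):
    """Gaussian-limit significance of c' = sum w c for C = B + alpha*S:
    (expected excess above background) / sqrt(variance of the sum)."""
    signal = alpha * sum(w[k] * S[k] for k in S)
    variance = sum(w[k] ** 2 * (B + alpha * S[k]) for k in S)
    return signal / math.sqrt(variance)

box5 = {k: 1.0 if max(abs(k[0]), abs(k[1])) <= 2 else 0.0 for k in S}  # 1XMM-like
flat = {k: 1.0 for k in S}       # w -> 1: best as B -> 0
psfw = dict(S)                   # w -> S: best as B -> infinity

for B in (0.01, 0.1, 1.0, 10.0):
    print(B, snr(box5, 10.0, B), snr(psfw, 10.0, B), snr(flat, 10.0, B))
```

For this toy signal the PSF-shaped weights win at high background and the flat weights win at low background, in line with the limits quoted above.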
The quantity which should be compared is the counts amplitude $\beta_\mathrm{det}$ (as given for example in equation \[beta\_def\_ff\]) at which the signal is just detectable - that is, at which the resulting null-hypothesis probability is just equal to some previously decided cutoff $P_\mathrm{det}$. Since the 1XMM sources were detected at a probability cutoff of $\exp(-8.0)$ (equivalent to about a 4.3-sigma detection), this is the value that was chosen for the present exercise. $\beta_\mathrm{det}$ as a function of background $B$ is plotted in figure \[Neq1OnAxisBetafig\] for both the 1XMM and the matched-filter procedures. For the latter, since there is no exact formula for the null-probability distribution $P_\mathrm{null}$, the calculated sensitivity depends on the approximation used to represent $P_\mathrm{null}$. Shown on the figure are results (at finely-spaced values of background) of using respectively the Fay and Feuer (equation \[equ3\]) and the $\chi^2$ (equation \[chi\_for\_many\_W\]) approximations, as well as four points (diamonds) at which the true $P_\mathrm{null}$ has been estimated via Monte Carlos of $10^6$ iterations each. As described in the introduction, several of the x-ray source detection procedures to be found in the literature include a step in which the raw images are convolved with the telescope PSF. The PSF is an optimum convolver in the Gaussian limit but may be expected to depart from the ideal at low values of background flux. To investigate this, the detection sensitivity obtained by use of the PSF as convolver was calculated at four levels of background. The resulting sensitivities, corrected via use of Monte Carlos to estimate $P_\mathrm{null}$, are plotted as black squares in figure \[Neq1OnAxisBetafig\]. Several conclusions can be drawn from figure \[Neq1OnAxisBetafig\]. Firstly, for this form of signal, the Fay and Feuer approximation appears to be unserviceable for all but the highest values of background.
The $\chi^2$ formula performs much better, accurately representing the true data down to about 0.2 counts/pixel background, below which it begins to diverge. Clearly though the matched-filter approach yields a better sensitivity than the $5\times5$ box convolver at all values of background, apparently asymptoting to about 25% better at high background, the advantage decreasing to zero at low. The PSF appears to be a useful approximation to the optimum convolver for background levels greater than about 0.03 counts pixel$^{-1}$. The PSF of the XMM-Newton EPIC cameras becomes azimuthally distorted with distance from the centre of the field of view. It is therefore of interest to repeat the above exercise for an example of the off-axis PSF. In the present case, a PSF at 850 arcsec from the optical axis (94% of the radius of the field-of-view of the EPIC cameras), at an azimuth of $45 \degr$, was arbitrarily chosen. All other variables were retained unchanged. No Monte Carlos were performed in this case, and only the results of the $\chi^2$ formula were used. The results can be seen in figures \[Neq1OffAxisContourfig\] and \[Neq1OffAxisBetafig\]. Comparison between figures \[Neq1OnAxisBetafig\] and \[Neq1OffAxisBetafig\] shows that the degree of improvement to be gained through the use of matched filters is approximately constant across the field of view. Also, whatever the method used, the sensitivity decreases by about 15% towards the edge of the field of view. This is because the less compact the PSF, the more difficult it is to separate source from background counts - the effective background counts associated with the source are higher. For similar reasons, a decrease in sensitivity is seen in the detection of extended (ie non-pointlike) sources. Use of the on-axis weights for the off-axis signal degrades the sensitivity by about 10% over the whole range of background.
For values of background lower than about 0.1 counts/pixel the sensitivity using this un-matched filter becomes nominally worse than that achievable via the box-convolver method, although no doubt some ground could be recovered by correction of the formula via Monte Carlos. The variation in the shape of the PSF in XMM-Newton EPIC images puts some practical difficulties in the way of source detection via matched filters, because one has, in effect, to employ a different convolving kernel for each pixel of the image. This was the method adopted by Ueda et al ([@ueda]). An approximation to this which still allows one to use the useful mathematical properties of convolution is to divide the image into patches, convolve each patch separately, then add the results. Vikhlinin et al ([@vikhlinin]) used this technique. The XMM-Newton SAS (Gabriel et al [@carlos]) task *asmooth* (Stewart [@asmooth]) can also perform such a piece-wise convolution.

### Matched filters for 5 energy bands {#matched_at_5}

In this case it is no longer possible to invert the 1XMM and matched-filter methods in the same way, since the summed-likelihood approach used to find 1XMM sources in 5 bands (described in section \[1xmm\_strategy\]) cannot be expressed as a weighted sum of Poissonian integers. However, equations \[beta\_def\], \[pnull\_basic\] and \[chi\_null\_prob\] taken together amount to a relationship between the counts amplitude $\beta$ and the null-probability $P_\mathrm{null}$: hence one can numerically invert this relationship to obtain the sensitivity $\beta_\mathrm{det}$ which corresponds to the cutoff probability $P_\mathrm{det}$, which is left at $e^{-8}$ as before. In order to calculate a matched filter for summing images in several energy bands, one must know the relative strength of the signal in each band, which amounts to knowing the source spectrum.
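The numerical inversion can be sketched generically. Since equations \[beta\_def\], \[pnull\_basic\] and \[chi\_null\_prob\] are not reproduced in this section, the sketch below substitutes an arbitrary Gaussian-tail stand-in for $P_\mathrm{null}(\beta)$; only the monotonic decrease of $P_\mathrm{null}$ with $\beta$ is relied upon.

```python
import math

P_DET = math.exp(-8.0)   # the 1XMM cutoff probability

def invert_for_beta(p_null, p_det=P_DET, lo=0.0, hi=1.0e4, tol=1e-6):
    """Bisect for the counts amplitude beta_det at which
    p_null(beta) = p_det, assuming p_null decreases with beta."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_null(mid) > p_det:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in null-probability model for illustration only: a Gaussian tail
# with variance equal to the summed background B'. This is NOT the
# relationship defined by the equations cited in the text.
def make_p_null(b_summed):
    s = math.sqrt(b_summed)
    return lambda beta: 0.5 * math.erfc(beta / (math.sqrt(2.0) * s))

p_null = make_p_null(10.0)
beta_det = invert_for_beta(p_null)
print(beta_det, p_null(beta_det))
```

The same bisection applies unchanged to any monotonic $\beta$-$P_\mathrm{null}$ relationship, however it is evaluated.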
Where sources have a variety of spectral shapes, as is the case for cosmic sources of x-rays, the matched-filter technique can only be optimized for a single class of sources at a time. The performance of the matched filter against an unmatched spectrum is examined toward the end of the present section. For purposes of the present study, weights were optimized to detect x-ray sources with an absorbed power-law spectrum having a photon index of 1.7 and an HI column density of $3.0 \times 10^{20}$ cm$^{-2}$. The relative count rates in each band were obtained, via the program *xspec*, by folding this spectrum with the on-axis effective area function of the XMM-Newton PN camera. These weights are given in column 4 of table \[table:1\]. The energy band definitions (columns 1 and 2) are those of 1XMM. The spectrum of the background (column 3), which is just as important for present purposes as that of the sources, was obtained from the 1XMM catalog in the following way: for each energy band, a 2-dimensional histogram was made of the background counts for each source versus the exposure time; the approximate minimum background count rate for the band was then estimated from this plot. To enable a check of the efficiency of the matched filter at detecting a source of spectrum far from nominal, weights derived from the EPIC PN count rates of 1XMM J175921.7-335322, one of the hardest sources in the 1XMM catalog, were also obtained: these are given in column 5 of the table.
------- ------- ------------ ---------- ----------
  Low     High   Background     Source       Hard
 (keV)   (keV)    weights:     weights:    source:
------- ------- ------------ ---------- ----------
  0.2     0.5      0.0842      0.1208     0.0023
  0.5     2.0      0.1733      0.3763     0.0193
  2.0     4.5      0.0941      0.0943     0.2993
  4.5     7.5      0.0941      0.0350     0.6172
  7.5    12.0      0.1485      0.0094     0.0619
------- ------- ------------ ---------- ----------

: Source and background weights for the XMM-Newton PN camera. The ‘Low’ and ‘High’ columns give the 1XMM energy band boundaries in keV.[]{data-label="table:1"}

As noted in section \[matched\_1\_band\], a value of $N=4$ for the size of the optimized convolvers was chosen as giving the largest convolver which was still practical to compute. The optimization with 5 bands is observed to be a little slower than for the single-band case, which is understandable because there are now $5 \times (2N+1)^2=405$ weights which must be optimized in parallel. The two approaches are compared in figure \[Neq5OnAxisBetafig\]. Before commenting on the difference in sensitivity between the two methods we ought to make sure we have an accurate measure of those sensitivities. As shown via a Monte Carlo experiment in section \[1xmm\_strategy\], equation \[chi\_null\_prob\] appears to be a poor approximation at a nett background level of 0.1 counts/pixel. The Monte Carlo exercise was repeated for six more background levels logarithmically spaced between 0.01 and 10 counts/pixel: that is, for each value of background, a Monte Carlo ensemble was generated, a cumulative distribution of the ensemble values $L$ was calculated, and finally a function $Q(\nu /2, L)$ was fitted to this distribution curve. The resulting values of $\nu$ are tabulated in table \[table:2\]. The ‘canonical’ value of $\nu$ in this case is 10 ($=2 \times 5$ bands).
----------------- --------
 Nett background   Fitted
  (counts/pixel)   $\nu$
----------------- --------
       0.01         4.31
       0.032        4.21
       0.1          5.81
       0.32         7.26
       1.0          8.22
       3.2          8.90
      10.0          9.35
----------------- --------

: Values of $\nu$ obtained by fitting $Q(\nu /2,L)$ to Monte Carlo distributions of $L$ at different background levels.[]{data-label="table:2"}

Sensitivity values obtained by replacing the 5 in equation \[chi\_null\_prob\] by the appropriate value of $\nu /2$ are shown by crosses in figure \[Neq5OnAxisBetafig\]. Clearly, use of the unmodified formula exacts a significant sensitivity penalty (because of over-estimation of the rate of false positives) at values of background less than about 1.0 counts/pixel. As was seen in section \[matched\_1\_band\], the $\chi^2$ formula used to approximate the null-probability distribution of weighted-Poisson-sum values also tends to diverge from the true distribution at low values of background. A similar set of Monte Carlo corrections was therefore performed for the weighted-sum data. Fortuitously, for the 5-band signal presently chosen, the corrections appear to be insignificant. The reader is cautioned not to expect this to be the case for all signal shapes. As mentioned in section \[many\_n\_theory\], one may compensate for the distortion in the true null likelihood (thus in the rate of false positives due to background fluctuations) by changing the value of nominal null likelihood used to sort ‘sources’ from ‘non-sources’. Such compensating values of null likelihood for seven values of background, evenly-spaced on a logarithmic scale, are plotted in figure \[likeMinDeviation\]. Taking the corrections into account, at high background values the matched-filter method appears to offer about 1.6 times the sensitivity of the 1XMM procedure, the advantage decreasing gradually to about 1.2 at the lowest background value plotted.
A hasty comparison of figures \[Neq1OnAxisBetafig\] and \[Neq5OnAxisBetafig\] may lead one to conclude that there is, paradoxically, not much advantage to be gained by detecting sources over 5 bands rather than 1. However, the vertical scales of the two graphs are not equivalent, because they refer to different signal shapes. To make a proper comparison one would need to multiply all the background values of figure \[Neq5OnAxisBetafig\] by 0.1733 (the proportion of the total background found in energy band 2) and, for like reasons, all the amplitude values by 0.3763. Comparison with figure \[Neq1OnAxisBetafig\] then reveals an improvement in sensitivity for 5- versus 1-band detection by about a factor of 2 for the matched-filter method and 1.5 for the 1XMM method. As mentioned earlier, a filter which has been optimized to detect sources of a particular spectrum may not perform well in the detection of sources with very different spectra. To investigate this, the exercise of figure \[Neq5OnAxisBetafig\] was repeated. The filter was optimized as before for a source having weights listed in column 4 of table \[table:1\], but the sensitivity values were calculated under the assumption that the source weights came from column 5. The results are plotted in figure \[Neq5OnAxisBetaHardfig\]. It is apparent that the sensitivity of the matched filter to the hard-spectrum source is significantly degraded - indeed by a factor of 2 - whereas the detection efficiency of the 1XMM procedure is if anything slightly improved. Although this hard spectrum is the worst case likely to be encountered in practice, the results suggest the desirability of using a non-matched procedure in parallel. It is however also worth noting in passing that, regardless of whether the detection technique is matched to a particular source spectrum or not, some spectrum must be assumed in order to calculate any kind of multi-band sensitivity value. 
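The rescaling used in that comparison can be written out explicitly; the factors are simply the band-2 fractions of the background and of the nominal-spectrum source counts from table \[table:1\].

```python
BKG_FRAC_BAND2 = 0.1733   # column 3 of table 1: band-2 share of the background
SRC_FRAC_BAND2 = 0.3763   # column 4 of table 1: band-2 share of the source counts

def to_band2_units(background_total, amplitude_total):
    """Convert a (background, sensitivity) point of the 5-band figure
    to the band-2 units of the single-band figure."""
    return (background_total * BKG_FRAC_BAND2,
            amplitude_total * SRC_FRAC_BAND2)

# e.g. a 5-band point at 1.0 counts/pixel background, 40 counts sensitivity:
b2, a2 = to_band2_units(1.0, 40.0)
print(b2, a2)
```

Only after this conversion do the vertical and horizontal scales of the two figures refer to the same signal.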
### A check on the sensitivity results

So far we have been ‘working backwards’, inverting expressions to obtain estimates of the minimum signal detectable under a variety of conditions. As a check on this procedure, a ‘forwards’ Monte Carlo experiment was performed as follows. Random data were generated from parent distributions of the form of equation \[equ0\] for a sequence of values of the amplitude $\alpha$. The data were generated in 5 bands, using the PN background ratios tabulated in table \[table:1\], with a net background flux of 1.0 counts/pixel. The usual PSFs provided the signal appropriate to each band. An ensemble of $10^4$ mini images was accumulated at each of 200 equally-spaced values of $\alpha$. Each 5-band image stack was submitted to both the 1XMM and the matched-filter source detection procedures. The detection frequencies as functions of $\alpha$ are compared in figure \[forwardMonte\]. From figure \[Neq5OnAxisBetafig\], one would expect signals with $\alpha$ greater than about 16 to be detected by the matched-filter approach, as opposed to a cutoff of about 25 for the (uncorrected) 1XMM approach. Although statistical fluctuations mean that signals with $\alpha < \beta_\mathrm{det}$ for that background flux are occasionally detected, and signals with $\alpha > \beta_\mathrm{det}$ are occasionally not, the results of this Monte Carlo are consistent with the earlier analysis.

Conclusions
===========

It is not possible to detect with certainty a signal superimposed on a noisy background where there is some non-zero probability that a combination of random background values can mimic the signal: the best that can be done is to calculate a detection probability - or its complement, the null or non-detection probability.
The aim of any method to enhance signal detectability is therefore to decrease the null probability of a signal of any given amplitude, or, equivalently, to decrease the amplitude at which a signal generates a given value of null probability. This must be achieved by performing some transform upon the set of measured values of signal plus background plus noise. The main thrust of the present paper has been to examine that subset of transforms which may be expressed as discrete convolutions of the input data. There appears to be nothing new to say about the case in which the probability distribution of the noise value in any given data channel is Gaussian; because of this, attention has been further restricted to the Poissonian regime. Two independent approximations to the null-probability distribution for a convolution of Poissonian data have been described and compared; such approximate formulas allow one to optimize the convolution for detection of signals of any specified shape without performing a Monte Carlo on each occasion. Although the method described here yields optimum signal detection via convolution, the theory says nothing about the possibilities or otherwise of obtaining better detection sensitivity via other, non-linear transforms. Despite this, the optimized-convolution method has been shown to perform substantially better over a wide range of background levels than the more complicated technique which was employed to detect, in XMM-Newton x-ray images, the sources which comprise the 1XMM catalog. However this is perhaps not surprising in view of the fact that the convolution method makes use of more information about the signal (in the 1XMM case ‘information’ means the average spectrum and point-spread function (PSF) of the x-ray sources) than the 1XMM method. 
The greater use of information about the signal exposes one of the drawbacks of the optimized-convolver method: namely, that if the shape of the signal is not known, or can vary, then assumptions must be made about it. Any given convolver is optimized for only one shape of signal (and for one particular level of background). To detect with optimal sensitivity signals having a variety of shapes, under a variety of background conditions, one would need to repeat the convolution with many different convolvers, each tailored to a different signal/background combination. In practice this may not be worth the extra effort, since in many cases it may happen that acceptable results can be obtained by using a relatively small set of convolvers. Taking the XMM-Newton x-ray images as an example, it was shown in section \[matched\_1\_band\] that use of the on-axis PSF across the whole field results in only a few percent loss of sensitivity even towards the edges of the field of view where the PSF becomes significantly elongated in the azimuthal direction. In addition, the form of the optimal convolver appears to be relatively insensitive to background level. As regards the x-ray spectrum, it should be remembered that the weights tabulated in table \[table:1\] represent x-ray spectra after convolution by the strongly peaked response function of the instrument, and their trend is therefore dominated by that response function. Sources with quite different spectra may thus be expected to yield similar sets of spectral weights; for example, the bulk of x-ray sources in 1XMM exhibit their highest and lowest fluxes in bands 2 and 5 respectively. However it is probably desirable in practice to supplement the full matched-filter procedure with a non-spectrum-specific detection algorithm. The 1XMM sliding-box method, perhaps with the ‘boxes’ replaced by PSF-matched convolvers for increased sensitivity, is a possible choice for the latter. 
Finally, some discrimination between signals may be desirable, since not all signal shapes represent sources we would wish to detect. In the x-ray case, whereas the 1XMM detection method cannot discriminate between an x-ray source and a bright pixel on the CCD[^5], the matched-filter method does offer some degree of selection against bright pixels and other artifacts. Another caveat to be mentioned is the fact that both the formulas presented for null probability of a weighted sum of Poissonian data are only approximate; and what is perhaps worse, no analysis has yet been presented which would allow one to estimate the goodness of the approximations. In this case one can only fall back on probability curves derived from Monte Carlo data with which to calibrate the approximation formulae. A good example of the desirability of checks of this kind is the large gap demonstrated between the null-probability approximation used for 1XMM source detection and the results of Monte Carlos at a variety of levels of background. Although the approximations for the matched-filter method do not appear to be nearly so unsatisfactory, they are not immune from difficulties of this sort and should be subject to similar checks in practice. A natural extension of the convolver method as applied to x-ray source detection is to allow one to correctly add together overlapping images. One’s first impulse in this situation is simply to add the images without weighting, but (certainly in the XMM-Newton case) because the background rate can vary with time, separate images may have different source/background ratios and should therefore be weighted accordingly. The theory described in this paper allows one to estimate the optimum weights for such combinations. The author hopes to make the matched-filter method available in the *edetect* package of the v-6.5 release of the XMM SAS. 
The x-ray background {#confusion}
====================

As mentioned in the introduction, there is good evidence that the bulk of the cosmological x-ray background consists of numerous faint discrete sources. The present paper assumes however that the x-ray sky may be modelled by sparse, relatively bright sources (though not all necessarily detectable at any given exposure duration) upon a relatively smooth background. It is therefore of interest to see how consistent this model is with our current understanding of the real sky. The fundamental quantity to deal with is the probability distribution of detected flux, where the ensemble is taken to consist of measurements at random directions in the sky; non-source background is neglected and it is assumed that each measurement is free from other sorts of variation, eg the Poissonian detection noise we have been discussing so far. The connection between the distribution of source flux and the distribution of detected source counts depends in a mathematically complicated way on the ‘beam shape’, or PSF in our case. Useful treatments of the topic can be found in Condon ([@condon]) and Scheuer ([@scheuer]). For present purposes it is enough to consider the standard deviation of such a probability distribution. Let us consider the case in which the distribution of source flux $n(S)$ obeys a power law, viz: $$n(S) = kS^{-\gamma}.$$ Condon derives the following formula for standard deviation $\sigma$ in this case:[^6] $$\label{conf_sigma} \sigma = \left( \frac{k\Omega_\mathrm{e}C_\mathrm{max}^{3-\gamma}}{3-\gamma} \right)^{1/2},$$ where $C_\mathrm{max}$ is some upper limit on the detected source counts and $\Omega_\mathrm{e}$, the equivalent solid angle of the PSF, is defined as $$\label{equiv_omega} \Omega_\mathrm{e} = \int \left[ psf(\theta,\phi) \right]^{\gamma-1} \, d\Omega.$$ Note that $\sigma$ is unbounded for the entire probability distribution (ie, as $C_\mathrm{max} \to \infty$).
As for $n(S)$, the measurements described in Mushotzky et al ([@mushotzky]) are consistent with a ‘two-slope’ model in which, for energies from 0.5 to 2 keV (corresponding to energy band 2 of 1XMM), $\gamma$ has the value 1.7 below $S=7 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ and 2.5 above it. The $k$ values can be evaluated from $$\frac{k_\mathrm{faint}}{1-\gamma_\mathrm{faint}}S^{1-\gamma_\mathrm{faint}} = \frac{k_\mathrm{bright}}{1-\gamma_\mathrm{bright}}S^{1-\gamma_\mathrm{bright}} \sim 200 \, \mathrm{deg}^{-2}$$ where $S$ is the flux at the ‘knee’. Which $\gamma$ to choose? 200 sources per square degree, the integrated number density at the knee, yields about 30 sources per EPIC PN field. This is quite a typical number of serendipitous sources to find in such fields: hence one can say that the PN camera will, in a typical exposure, detect sources fainter than the flux at the knee. I therefore choose a value of 1.7 for $\gamma$. Substitution of this value into equation \[equiv\_omega\] yields $2.34 \times 10^{-5}$ deg$^2$ for the equivalent area of the on-axis, band 2 EPIC PN PSF. At this point it is convenient to convert fluxes $S$ to counts $C$. The highest value of background considered in the present paper is 10 counts pixel$^{-1}$. By use of the same histogram technique as described in section \[matched\_at\_5\], one may deduce that the typical background count rate for EPIC PN in 1XMM band 2 is about $2 \times 10^{-5}$ counts pixel$^{-1}$ s$^{-1}$. A background value of 10 counts pixel$^{-1}$ thus corresponds to an exposure time of $5 \times 10^5$ s, which is 5 times longer than the longest (PN) exposure time in the 1XMM catalog. Some multi-epoch observations made with XMM-Newton may approach this duration however; hence I take it as a reasonable upper limit to practical XMM-Newton observations. 
The flux to count-rate conversion factor for 1XMM band 2 (calculated in the same exercise as the source weights tabulated in table \[table:1\]) is $7.50 \times 10^{11}$ counts erg$^{-1}$ cm$^2$; the ‘knee’ in the Mushotzky soft-band log$N$-log$S$ diagram thus falls, for a $5 \times 10^5$ s exposure time, at $\sim$2600 counts in 1XMM band 2 and (making use of the source weights in table \[table:1\]) 7000 counts in the total band. Comparison of these numbers with figure \[Neq5OnAxisBetafig\] shows that even the 1XMM algorithm could detect sources 2 orders of magnitude fainter than the ‘knee’ at this exposure duration; clearly the estimate in the previous paragraph that XMM-Newton is capable of seeing far past the ‘knee’ in the log$N$-log$S$ curve is correct. One finds that $k_\mathrm{faint}$ in these units works out to be $6.88 \times 10^4$. We are now in a position to use equation \[conf\_sigma\] to calculate the ‘standard deviation’ $\sigma$ of the noise in the ensemble of measurements. Equation \[conf\_sigma\] evaluates to $$\sigma = 1.11 \times C_\mathrm{max}^{0.65}.$$ It only remains to select a value for $C_\mathrm{max}$. Suppose we choose $C_\mathrm{max}=47$ counts, which from figure \[Neq5OnAxisBetafig\] is the detection sensitivity obtainable (under the assumption that the background is flat) at this exposure length using the matched-filter algorithm described in the present paper. $\sigma$ evaluates to $\sim$13 counts, which is several times larger than the standard deviation $\sqrt{10}$ of the Poisson noise. However, the log$N$-log$S$ model indicates that the total source density at this counts value is 6640 deg$^{-2}$, which is still only 0.16 sources per PSF equivalent area $\Omega_\mathrm{e}$. The counts value at which one expects 1 source in total per $\Omega_\mathrm{e}$ is 3.3; using this for $C_\mathrm{max}$ yields 2.4 counts for $\sigma$ instead, just lower than the Poisson noise. 
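The numerical estimates of the last few paragraphs (and of the following one) can be re-derived with a short script; every input below is a value quoted in the text.

```python
import math

GAMMA = 1.7          # faint-side slope of the two-slope logN-logS model
OMEGA_E = 2.34e-5    # deg^2: equivalent solid angle of the band-2 PN PSF
N_KNEE = 200.0       # deg^-2: integrated source density at the 'knee'
C_KNEE = 7000.0      # total-band counts at the knee for a 5e5 s exposure

# k from the integrated density at the knee: N(>C) = k/(gamma-1) C^(1-gamma)
k = N_KNEE * (GAMMA - 1.0) * C_KNEE ** (GAMMA - 1.0)

def sigma_conf(c_max):
    """Equation (conf_sigma): ~1.11 * c_max^0.65 with these inputs."""
    return math.sqrt(k * OMEGA_E * c_max ** (3.0 - GAMMA) / (3.0 - GAMMA))

def n_brighter(c):
    """Integrated density (deg^-2) of sources brighter than c counts."""
    return k / (GAMMA - 1.0) * c ** (1.0 - GAMMA)

# counts at which there is one source in total per PSF equivalent area
c_conf = (k * OMEGA_E / (GAMMA - 1.0)) ** (1.0 / (GAMMA - 1.0))

def phi(c_det, c_min):
    """Mean counts deg^-2 from sources between the confusion and
    detection limits, for the single power law n = k C^-gamma."""
    return k / (2.0 - GAMMA) * (c_det ** (2.0 - GAMMA) - c_min ** (2.0 - GAMMA))

PIX_PER_DEG2 = (3600.0 / 4.0) ** 2          # 4-arcsec-square pixels

print(k)                                     # ~6.88e4
print(sigma_conf(47.0))                      # ~13 counts
print(n_brighter(47.0) * OMEGA_E)            # ~0.16 sources per PSF area
print(c_conf, sigma_conf(c_conf))            # ~3.3 counts, ~2.4 counts
print(phi(47.0, c_conf) / PIX_PER_DEG2)      # ~0.5 counts per pixel
```

The script reproduces each quoted number to within rounding of the inputs.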
Thus we may conclude that, for XMM-Newton exposures of total duration up to $5 \times 10^5$ s, the x-ray sky may still be modelled to acceptable accuracy by a flat background with superposed sources. It is clear though that the next generation of x-ray telescopes may not have life so easy.

Finally, a word about background estimation. So far in the present paper it has been assumed that the background contribution is known. The background can be difficult to estimate, though: to calculate the background one must first excise or avoid the sources, but it is difficult to find sources without first having a knowledge of the background. However, one can conceive of an iterative process in which the detectable sources are gradually detected and excised in parallel with improvements in the background estimate. In the present case that would still leave a population of sources which are too faint to be detected but which are brighter than the confusion level calculated in the preceding paragraph. The sum of these faint sources will bias the background estimate upward. The average counts deg$^{-2}$ contributed by these sources is $$\phi = \int_{C_\mathrm{conf}}^{C_\mathrm{det}} dC \ C \ n(C).$$ For a single power law $n=kC^{-\gamma}$ this gives $$\phi = \frac{k}{2-\gamma} \left( C_\mathrm{det}^{2-\gamma} - C_\mathrm{conf}^{2-\gamma} \right).$$ If we use $6.88 \times 10^4$ for $k$, 1.7 for $\gamma$, 47 for $C_\mathrm{det}$ and 3.3 for $C_\mathrm{conf}$, $\phi$ evaluates to $4 \times 10^5$ counts deg$^{-2}$, or 0.5 counts pixel$^{-1}$ for 4 arcsec square pixels. A similar calculation can of course be performed for any other exposure duration.

I would like to thank Silvia Mateos for calculating the source weights in table \[table:1\].

Alexander, D. M., Bauer, F. E., Brandt, W. N., et al. 2003, Ap. J., 126, 539
Bevington, P. R., & Robinson, D. K. 1992, Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. (Boston: WCB/McGraw-Hill)
Boese, F. G., & Doebereiner, S. 2001, A&A, 370, 649
Cash, W. 1979, Ap. J., 228, 939
Condon, J. J. 1974, Ap. J., 188, 279
Cruddace, R. G., Hasinger, G. R., & Schmitt, J. H. 1987, in Astronomy from Large Databases, ed. F. Murtagh & A. Heck, p. 177
Damiani, F., Maggio, A., Micela, G., & Sciortino, S. 1997, Ap. J., 483, 350
DePonte, J., & Primini, F. A. 1993, in Astronomical Data Analysis Software and Systems II, ASP Conference Series, 52, 425
Dobrzycki, A., Jessop, H., Calderwood, T. J., & Harris, D. E. 2000, Bull. Am. Astron. Soc., 32, 1228
Ebeling, H., & Wiedenmann, G. 1993, Phys. Rev., 47, 704
Fay, M. P., & Feuer, E. J. 1997, Statistics in Medicine, 16, 791
Gabriel, C., Denby, M., Fyfe, D. J., et al. 2004, in Astronomical Data Analysis Software and Systems XIII, ed. F. Ochsenbein, M. Allen, & D. Egret, ASP Conference Series, 314, 359
Gondoin, P., Aschenbach, B. R., Beijersbergen, M. W., et al. 1998, Proc. SPIE, 3444, 278
Hasinger, G., Altieri, B., Arnaud, M., et al. 2001, A&A, 365, L45
Mushotzky, R. F., Cowie, L. L., Barger, A. J., & Arnaud, K. A. 2000, Nature, 404, 459
Pierre, M., Valtchanov, I., Altieri, B., et al. 2004, J. Cosmol. Astropart. Phys., 9, 1
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2003, Numerical Recipes in Fortran 77: The Art of Scientific Computing, 2nd ed., Vol. 1 (Cambridge: Cambridge University Press)
Ridders, C. J. F. 1979, IEEE Transactions on Circuits and Systems, CAS-26, 979
Scheuer, P. A. G. 1974, Mon. Not. R. astr. Soc., 166, 329
Starck, J.-L., & Pierre, M. 1998, Astron. Astrophys. Suppl. Ser., 128, 397
Stetson, P. B. 1987, PASP, 99, 191
Stewart, I. M. 2004, http://xmm.vilspa.esa.es/sas/current/doc/asmooth.ps.gz
Strüder, L., Briel, U., Dennerl, K., et al. 2001, A&A, 365, L18
Turner, M. J. L., Abbey, A., Arnaud, M., et al. 2001, A&A, 365, L27
Ueda, Y., Takahashi, T., Inoue, H., et al. 1999, Ap. J., 518, 656
Valtchanov, I., Pierre, M., & Gastaud, R. 2001, A&A, 370, 689
Vikhlinin, A., Forman, W., Jones, C., & Murray, S. 1995, Ap. J., 451, 542
Watson, M. G., Pye, J. P., Denby, M., et al. 2003, Astron. Nachr., 324, 89
Wilks, S. S. 1963, Mathematical Statistics (Princeton: Princeton University Press), chapter 13

[^1]: For simplicity I’ll assume in this section that there is only one signal $S$ to be found, although, since convolution filtering is a linear process, in principle there is no difficulty in detecting many superposed signals provided they have sufficiently different values of $\vec{x}_0$. [^2]: Note that the true probability curve in the case that $N > 1$ still falls in stepwise fashion, since the possible values of $c^\prime$ in this case are still discrete; it is just that the steps are much more finely spaced, since the spacing between possible $c^\prime$ values (and thus also between steps) decreases on average at a rate proportional to $(c^\prime)^{1-N}$. [^3]: Two further rebinning and convolution steps, approximately equivalent to convolution by $10 \times 10$ and $20 \times 20$ arrays respectively, were also performed. However, the principal purpose of these extra steps was to detect extended sources; they can therefore safely be neglected for the purposes of the present discussion. [^4]: In practice, the centre of the PSF for any real source can of course be located anywhere within the ‘central’ pixel. A more rigorous procedure would take this into account. However, because it is not clear how best to do this, the simpler assumption has been made for the present study. [^5]: A further *characterisation* step in the 1XMM chain, employing the SAS task *emldetect*, does however (in principle) allow the two to be discriminated. [^6]: Actually Condon’s equation (13) appears to be incorrect: according to my working, the $D^{3-\gamma}_\mathrm{c}$ should be inside the square root. In addition, the condition $\gamma>2$, although necessary for others of his results, is not required here.
--- abstract: 'A plethora of research has been done in the past focusing on predicting students’ performance in order to support their development. Many institutions are focused on improving student performance and education quality, which can be achieved by utilizing data mining techniques to analyze and predict students’ performance and to determine possible factors that may affect their final marks. To address this issue, this work starts by thoroughly exploring and analyzing two different datasets at two separate stages of course delivery (20% and 50% respectively) using multiple graphical, statistical, and quantitative techniques. The feature analysis provides insights into the nature of the different features considered and helps in the choice of the machine learning algorithms and their parameters. Furthermore, this work proposes a systematic approach based on the Gini index and p-value to select a suitable ensemble learner from a combination of six potential machine learning algorithms. Experimental results show that the proposed ensemble models achieve high accuracy and low false positive rates at all stages for both datasets.' address: - 'Electrical & Computer Engineering Dept., University of Western Ontario, London, ON, Canada' - 'Computer Engineering Dept., University of Sharjah, Sharjah, UAE' author: - MohammadNoor Injadat - Abdallah Moubayed - Ali Bou Nassif - Abdallah Shami bibliography: - 'ref.bib' title: Systematic Ensemble Model Selection Approach for Educational Data Mining --- e-Learning, Student Performance Prediction, Educational Data Mining, Ensemble Learning Model Selection, Gini Index, p-value Introduction ============ The advancement of technology and the Internet has significantly affected learning and education. Within that context, e-learning was developed; it can be defined as “the use of computer network technology, primarily over an intranet or through the Internet, to deliver information and instruction to individuals” [@00][@7].
However, there are various challenges regarding e-learning, such as the assorted styles of learning and the challenges arising from cultural differences [@9]. Other challenges include pedagogical e-learning, technological and technical training, and the management of time [@10]. This is why the need for more personalized learning has emerged. Personalized learning can be considered one of the biggest challenges of this century [@11], where the personalization of e-learning includes the adaptation of courses to different individuals. One of the biggest learning differences is the level of knowledge an individual has, which is assessed through the learner profile. The learner profile is the most crucial element of the personalization process [@KBS3][@11]. To make learning more personalized, adaptive techniques can also be implemented [@12], [@13]. Data can be automatically collected from the e-learning environment [@13] and the learner’s profile can then be analyzed. As part of the learner profile analysis process, predicting student performance plays a crucial role, as it can help reduce and prevent student dropout. This is particularly important in an e-learning environment given that it was reported in 2006 that students were 10% to 20% more likely to drop out of online courses than traditional classes [@s17]. High dropout rates can affect the future of colleges and universities, because policymakers, higher education funding bodies, and educators consider dropout rates to be an objective outcome-based measure of the quality of educational institutions [@s18]. Australia, the European Union, the United States of America, and South Africa all use dropout rates as an indicator of the quality of colleges and universities [@s19]. As such, universities need to be able to provide accurate analysis of learner profiles and prediction of student performance, as well as customize their courses according to the participants’ needs [@13], [@14], [@15], [@KBS2].
In turn, this can improve the universities’ enrollment campaigns and student retention efforts, resulting in a higher quality of education [@29]. To analyze the collected data, the fields of data mining (DM) and machine learning (ML) have emerged and garnered attention in recent years. DM is best defined as the extraction of data from a dataset and the discovery of useful information from it [@1]. The collected data is then analyzed and used to enhance the decision-making process [@2]. DM uses different algorithms in an attempt to establish certain patterns from data [@3]. ML and DM techniques have proved to be effective solutions in a variety of fields including education, network security, and business [@sc; @sd; @se]. More specifically, a new sub-field named Educational Data Mining (EDM) has been proposed that focuses on analyzing educational data in order to understand and improve students’ performance [@KBS1] and enhance learning and teaching [@2]. Data used for EDM includes administrative data, students’ performance data, and activity data [@5]. To implement EDM methods, data needs to be collected from different databases and e-learning systems [@2]. Accordingly, this paper uses the comparative analysis gained from various classification algorithms to predict students’ performance at earlier stages of the course delivery. The developed models use ensemble classification techniques to categorize the students and predict their final performance group. To this end, six classification methods were used: K-nearest neighbor (k-NN), random forest (RF), support vector machine (SVM), logistic regression (LR), multi-layer perceptron (MLP), and naïve Bayes (NB). These techniques were used individually or as part of an ensemble learner model to predict the final performance group during the course at two stages: at 20% and 50% of the coursework.
Our research aims to predict the students’ grades during the course, as opposed to previous works which focused on performing this analysis at the end of the course. The aim is to identify the best individual or ensemble ML classifier that performs well with e-Learning data. The remainder of this work is organized as follows: Section \[sec:related\_work\] describes the related work and its limitations as well as summarizes the research contributions of this work; Section \[sec:methodology\] presents the methodology used for the experiments, highlights and analyzes the utilized datasets, presents the evaluation method, and determines the appropriate parameters for each of the ML algorithms in each dataset; Section \[sec:disc\] presents and discusses the experimental results; Section \[sec:limitation\] lists the limitations of the research; and finally, Section \[sec:future\] concludes the paper and provides some future research directions. Related Work {#sec:related_work} ============ DM methods have great potential when it comes to analyzing educational data. There is great interest in understanding the needs of students and their actual level of knowledge, and many researchers have addressed this problem during the last few years. In 2000, researchers tried to identify low-performing students by using association rules [@14], so that they could involve them in additional courses. Luan [@15; @16] investigated which students are most likely to fail the course by using clustering, neural network and decision tree methods. In 2003 [@17], Minaei-Bidgoli et al. used classification for modeling online student grades, while the authors of [@18] investigated how students’ performance can be influenced by demographic characteristics and performance. Pardos et al. [@19] used LR to predict test scores in math based on students’ individual characteristics, while Superby et al.
[@20] used decision tree techniques, the RF method, neural networks and linear discriminant analysis for predicting students who will most likely drop out. Vandamme et al. [@21] also used decision tree methods, neural networks and linear discriminant analysis for their prediction of students who will fail the course, by classifying them into three groups: low-, intermediate- and high-risk students. In 2008, Cortez and Silva [@22] compared DM algorithms from four different approaches, namely decision tree, RF, neural network and SVM, for the prediction of students’ failure. Kovacic [@23] developed a profile of students who would most likely fail or succeed by using classification techniques. He used socio-demographic and learning characteristics as variables for predicting students’ success. Ramaswami et al. [@24] developed a predictive model for identifying students who are slow at learning by using the Chi-square Automatic Interaction Detector (CHAID) decision tree algorithm. Pandey [@27] used NB classification to accurately distinguish the bright students from the slow ones; their model was able to predict students’ grades based on their previous grades. In 2012, the authors of [@28] conducted a comparative study to make a best guess of the students’ performance. The study used decision tree algorithms and was aimed at finding the decision tree algorithm that can most accurately predict students’ grades. The authors found that the CART decision tree algorithm was the most efficient, as it produced the most desired results, and concluded that it is desirable to try different classifiers first and then decide which one to use based on the precision and accuracy each gives. Kabakchieva in [@29] used four DM algorithms: OneR Rule Learner, decision tree, neural network and k-NN.
Results indicated that the highest accuracy was achieved using the neural network algorithm, and that the most influential factors in the classification process were the students’ score upon admission and the frequency of failures in the first-year examinations. Yadav et al. [@30] investigated how the marks from previous or first-year exams impact the final grade of engineering students. In their experiments, the authors used classification algorithms such as ID3, J48 (C4.5) and CART, and found that J48 (C4.5) gives the most accurate results. In 2013, a study of secondary education data [@31], performed using NB and decision tree algorithms, concluded that the decision tree classification algorithm was the best for predicting students’ performance and that students’ previous data can be used to predict their final grade. Hung et al. [@32] proposed the use of different classification algorithms such as SVM, RF, and neural networks to improve at-risk student identification. Experimental results obtained on two datasets collected from school and university environments showed that the proposed approach had a higher accuracy and sensitivity than other works in the literature. Similarly, Moubayed et al. [@ch5ref8a][@35] investigated the problem of identifying the student engagement level using the K-means algorithm. Moreover, the authors derived a set of rules that related student engagement with academic performance using the Apriori association rules algorithm. Analysis of the experimental results showed that the students’ engagement level and their academic performance have a positive correlation in an e-learning environment. Helal *et al.* proposed different classification algorithms to predict student performance while taking into consideration multiple features including socio-demographic features, university admission basis, and attendance type [@KBS1].
The authors’ experimental results showed that rule-based algorithms as well as tree-based algorithms provided the highest interpretability, which made them more useful in an educational environment [@KBS1]. Zupanc and Bosnic extended an existing automated essay evaluation system by considering semantic coherence and consistency features [@KBS4]. Through their experimentation, the authors showed that their proposed system provided better semantic feedback to the writer. Moreover, it achieved higher grading accuracy when compared to other state-of-the-art automated essay evaluation systems [@KBS4]. Xu *et al.* proposed a two-layered machine learning model to track and predict student performance in degree programs [@ch4_rev1a]. Their simulation results showed that the proposed approach achieved superior performance to benchmark approaches [@ch4_rev1a]. Sekeroglu *et al.* compared the performance of five machine learning classification models in predicting the performance of students in higher education [@ch4_rev1b]. Their experimental results showed that the prediction performance can be improved by applying data pre-processing mechanisms [@ch4_rev1b]. Khan *et al.* compared the performance of eleven machine learning models in terms of accuracy, F-measure, and true positive rate [@ch4_rev1c]. Their experimental results showed that the decision tree algorithm outperformed the other classifiers in terms of the aforementioned metrics [@ch4_rev1c]. Limitations of Related Work --------------------------- The differences in the reported results of the previous research are due to multiple factors. First, the participants involved in the different studies influence the studies’ conclusions and preferences, and different researchers have varying interpretations of the models. Moreover, researchers could be biased depending on the educational environment under consideration. Contradicting results could also be caused by the researchers’ prior knowledge of the models.
To carry out a piece of research, one goes through the literature from past studies, and in doing so one’s stance on the best model could become biased. The differences in results in the related work also arise because the studies do not use the same dataset, or do not use the same sample in the cases where the dataset is the same; the same models perform differently when evaluated using different datasets. Moreover, one major limitation that many of the previous works in the literature suffer from is that they use data collected from one course/term to predict the performance of students in future courses/terms. To the best of our knowledge, none of the previous works predict student performance during the course delivery.\ After going through the related work, our research aims to confirm the claims and clear any doubts concerning the best model that can identify students who may need help during a course, at two stages. By conducting a practical study, our work aims to evaluate the prior findings and their authenticity. Our study is not biased in any manner, and it looks into the nature of the datasets. Moreover, our research explores all six algorithms equally, along with any possible ensemble learner that might be developed from these algorithms. The study design predicts the students’ grades during the course, as opposed to other designs that prefer to conduct the prediction at the end of the course because that is a more accurate predictor. The research assumes that the efforts and seriousness of a student are directly proportional to the final course performance and grade. Therefore, assuming that all other factors are constant, the performance of a student can be accurately predicted during the course of the semester.
Contribution of Proposed Research {#sec:contribution} --------------------------------- Based on the discussion of the related work limitations, the contributions of this work can be summarized as follows: - Analyze the collected datasets and their corresponding features using multiple graphical, statistical, and quantitative techniques (e.g. probability density functions, decision boundaries, feature variance, feature weights, principal component analysis, etc.). - Conduct hyper-parameter tuning to optimize the parameters of the different ML algorithms under consideration using the grid search algorithm. - Propose a systematic approach for building an unbiased (through multiple splits) ensemble learner to choose the best model based on multiple performance metrics, namely the Gini index and the p-value. - Evaluate the performance of traditional classification techniques compared to the proposed ensemble learner, and identify students who may need help with high accuracy using the proposed ensemble learner. ![\[fig:1.1\] Learning Management System (LMS) Analytical Module](Fig1.png) Methodology and Research Framework {#sec:methodology} ================================== General Research Framework -------------------------- The purpose of this study is to predict students’ final grades in order to identify students who may need help at earlier stages of the course. Figure \[fig:1.1\] shows the analytical process for the data collected through the Learning Management System (LMS). The “Data Collection from LMS” module represents the process of collecting data from the LMS. This includes two different types of data, namely the grades of each student (stored in the “Grade Dataset” module) and the event log dataset (stored in the “Events Log Dataset” module). This work focuses on the student status prediction using ML (highlighted in grey as the “ML-based student status predictor” module).
More specifically, the *ML Based student status predictor* is structured as in Figure \[fig:1.2\]. Note that the “Engagement metrics”, “students clustering”, and “Association rule generator” modules try to gauge the engagement of students and identify students who may need help; this was completed in our previous work [@35]. ![\[fig:1.2\] ML-Based Student Status Predictor](Fig2.png) Two datasets were used in this experiment. Dataset 1 consists of records of the 52 first-year students who completed an undergraduate engineering course (out of 115 registered students) at the University of Genoa [@67]. Dataset 2, on the other hand, consists of records of 486 students who attended an undergraduate science course at the University of Western Ontario, Canada. Event logs from the LMS and students’ individual marks were used in the analysis. Moreover, these experiments predict the final grade based on the individual marks during the course at two stages: $20\%$ and $50\%$ of the coursework. To improve the accuracy of the prediction, the individual marks were converted to percentages, as this scaling of scores (grades) improved the experimental accuracy. The scaling of scores was also important when it came to comparing the performance of students. Furthermore, if a student was absent for a certain assessment and its mark was empty in the dataset, the mark was replaced with the value of zero. This also improves the experimental accuracy across the considered techniques. The final grade was classified into two categories (classes): - Good (G) – the student will finish the course with a good grade ($60\%-100\%$); - Weak (W) – the student will finish the course with a weak grade ($\leq 59\%$). This limit was chosen since the typical passing grade for an undergraduate course is set to 60 in most universities around the world. The second class represents the targeted learners, i.e. students who need additional assistance and attention in order to improve their performance.
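The preprocessing steps just described (empty marks set to zero, marks rescaled to percentages, and the Good/Weak labelling at the 60% boundary) are straightforward. The paper's analysis was done in R, so the Python sketch below, with our own helper names, is purely illustrative:

```python
# Illustrative preprocessing, mirroring the steps described in the text:
# absent/empty marks become 0, each mark is converted to a percentage of
# its task maximum, and the final grade is labelled Good (G) or Weak (W).
def preprocess(marks, max_marks):
    """marks: task -> raw mark (None if absent); max_marks: task -> maximum."""
    out = {}
    for task, m in marks.items():
        m = 0.0 if m is None else m                      # empty mark -> zero
        out[task] = round(100.0 * m / max_marks[task])   # mark out of 100
    return out

def final_class(final_pct):
    # Good: 60-100%; Weak: <= 59% (the targeted learners).
    return "G" if final_pct >= 60 else "W"

print(preprocess({"T1": 2, "T2": None}, {"T1": 4, "T2": 3}))  # {'T1': 50, 'T2': 0}
print(final_class(59), final_class(60))                       # W G
```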
Datasets Description {#sec:dataset} -------------------- In the 1950s, the American educational psychologist Benjamin Bloom developed his taxonomy of cognitive objectives. According to *Bloom’s Taxonomy*, thinking skills and objectives can be categorized and ordered following the thinking process [@U]. Bloom’s Taxonomy was revised years later, with the categories (taxonomic elements) ordered from Lower Order Thinking Skills (LOTS) to higher-order ones: - Remembering - recognizing, listing, describing, identifying, retrieving, naming, locating, finding - Understanding - interpreting, summarizing, inferring, paraphrasing, classifying, comparing, explaining, exemplifying - Applying - implementing, carrying out, using, executing - Analyzing - comparing, organizing, deconstructing, attributing, outlining, finding, structuring, integrating - Evaluating - checking, hypothesizing, critiquing, experimenting, judging, testing, detecting, monitoring - Creating - designing, constructing, planning, producing, inventing, devising, making In this section we describe two datasets at two different stages (20% and 50% of the coursework), consisting of the results of a collection of tasks performed by university students, and we conduct some Principal Component Analysis. Interestingly, the first four principal components for Dataset 1, at both the 20% and 50% stages, correspond to four of the categories above.\ In this paper, R was used for the numerical analysis, ML techniques, and data visualization [@39]. R is a language and environment for statistical computing and graphics. - *Dataset 1*: The experiment was conducted with a group of 115 first-year students of the undergraduate engineering major at the University of Genoa [@67]. The dataset contains data collected using a simulation environment named Deeds (Digital Electronics Education and Design Suite), which is used in e-learning courses.
The e-learning platform offers the courses’ contents through a special browser which asks the students to solve problems at different complexity levels.\ The records are summarized in Table \[tab:table\_dataset\_1\_list\], with the features’ distributions shown in Fig. \[fig:data1vars\_distrib\]. Only 52 students completed the course.\ Features ES 1.1 to ES 3.5 were used at the 20% stage, and features ES 1.1 to ES 5.1 at the 50% stage. Note that the sum of features ES 1.1 to ES 5.1 is 47% of the course total mark. However, since ES 5.2 had a weight of 10%, and to maintain consistency in performing the analysis at a similar stage of course delivery across the two datasets, these features were considered at the 50% stage since their sum is close to that percentage. These features (ES 1.1 to ES 5.1) can be categorized based on their cognitive objectives using Bloom’s taxonomy as follows: features ES 1.1 and ES 3.4 belong to the *Understand* category; features ES 2.1, ES 2.2, and ES 3.3 belong to the *Apply* category; features ES 1.2, ES 3.1, ES 3.2, and ES 3.5 belong to the *Analyze* category; and finally features ES 2.1, ES 3.4, and ES 3.5 belong to the *Evaluate* category. These features are used to predict the student performance on the remaining tasks/features regardless of their cognitive objective category.\ Any empty mark was replaced with 0. Also, all features were converted to a mark out of 100, which improves the accuracy of all classifiers, and any mark containing a decimal point was rounded to the nearest integer. [max width=]{} \[tab:table\_dataset\_1\_list\] ![\[fig:data1vars\_distrib\] Dataset 1 - Features distribution](Fig3.png) In particular, the Final Grade feature has the distribution seen in Fig. \[fig:fingrdistr20\]. - *Dataset 2*: The collected dataset is from a second-year undergraduate science course offered at the University of Western Ontario. The dataset consists of two parts.
The first part is an event log of the 486 enrolled students, with a total of 305933 records collected from the university’s LMS. The second part, which is used in this experiment, consists of the obtained grades of the 486 students in the different assignments, quizzes, and exams. Features Quiz 01 and Assignment 01 were used at the 20% stage. Note that the sum of these two features represents 18% of the course total mark. However, as mentioned earlier, these features were considered at the 20% stage since their sum is close to the desired percentage, and to maintain consistency in performing the analysis at similar stages of course delivery across the two datasets. For the 50% stage, features Quiz 01 to Assignment 02 were used, with their sum representing 50% of the course total mark. Any empty mark was replaced with 0. Also, all features were converted to a mark out of 100, which improves the accuracy of all classifiers, and any mark containing a decimal point was rounded to the nearest integer. The total course mark was counted out of 110, with the additional 10% being added to Assignment 03’s grade as a curve to help students with the course’s final grade. In Table \[tab:table\_dataset\_2\], we show the list of features corresponding to Dataset 2. Note that due to the recent adoption of the “*General Data Protection Regulation*”, which introduced many restrictions on the collection of data, and to maintain the privacy of users, the contents of the different tasks/features could not be accessed. As such, these tasks/features could not be categorized as per their cognitive objectives using Bloom’s taxonomy. [max width=]{} \[tab:table\_dataset\_2\] The distribution of the features is shown in Fig. \[fig:data2vars\_distrib\]. ![\[fig:data2vars\_distrib\] Dataset 2 - Features distribution](Fig5.png) The distribution of the Final Grade of the second dataset is shown in Fig. \[fig:fingrdistr\_big\]. ![\[fig:target\]Target variable: Dataset 1 vs. Dataset 2](Fig7.png) Fig.
\[fig:fingrdistr20\] shows that the first dataset is not normally distributed (due to the fact that only 52 of the 115 students initially registered for the course completed it, directly impacting the distribution of the final grades), while Fig. \[fig:fingrdistr\_big\] shows that the second dataset has a skewed normal distribution. This means that some classifiers are unlikely to perform well on the given datasets. For example, NB, a technique that performs very well with normally distributed numerical (not categorical) input, is not expected to perform well with the considered datasets. Note that Dataset 2 is imbalanced (4.3% Weak students) whereas Dataset 1 has 40.4% Weak students, as summarized in Fig. \[fig:target\]. To overcome the issue of imbalanced data, multiple procedures were adopted in this work. The first is using the Gini index and p-value as evaluation metrics, as this makes the reported results more robust and statistically significant. The second is using multiple splits to reduce the bias in the obtained results. Last but not least, the performance was evaluated using the specificity and sensitivity rather than the accuracy, since these metrics better illustrate the performance of the classifiers when dealing with imbalanced data. Dataset Visualization {#Dataset_Visualization} --------------------- In ML problems, it is very important to visualize the dataset in order to get a better understanding of the nature of the data. Principal Component Analysis (PCA) can be used to reduce the number of features to two principal components, which enables us to visualize the dataset [@ng]. The first and second principal components resulting from PCA were used to train SVM-RBF to plot the decision boundaries, in order to understand the behavior of SVM with the given dataset. Fig.
\[fig:SVM20\] shows that Dataset 1 at 50% is not linearly separable because there are outlier data points. Indeed, if we were to train a linear classifier, we would likely obtain misclassified points in the test sample. ![\[fig:SVM20\]Decision boundaries for dataset 1](Fig8.png) ![\[fig:SVM50\] Decision boundaries for dataset 2](Fig9.png) Fig. \[fig:SVM50\] illustrates the behavior of SVM in building the decision boundary with the Gaussian kernel (RBF) on Dataset 2 at the 50% stage. In both cases, the SVM-RBF model gives a better performance: it clearly outperforms the linear kernel (since the data is not linear) and is more likely to perform well in classifying new instances. Moreover, PCA shows the overall “shape” of the data [@ng], identifying which samples are similar to one another and which are very different. In other words, PCA can enable us to identify groups of similar samples and work out which variables make one group different from another. Performing PCA on Dataset 1 at the 20% stage, we obtain the percentage of variance for each component shown in Figure \[fig:PCA20bars\]. Each component explains a percentage of the total variation in the dataset; in particular, the first four components explain 81.5% of the variance. ![\[fig:PCA20bars\] Dataset 1 - Stage 20% - Percentage of variance per principal component](Fig10.png) For instance, the first principal component PC1 explains 42.1% of the total variance, which means that almost half of the information in the dataset can be encapsulated by just that one principal component. PC1 and PC2 together explain 60.7% of the variance, as shown in Figure \[fig:PCA20bars\]. More generally, we can plot the first four components two by two, obtaining a plot which shows in particular that there are many outliers; see Figure \[Dataset1\_20\_4by4\].
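The per-component variance fractions discussed here are PCA's explained-variance ratios. As an illustration (with synthetic data standing in for the Dataset 1 marks, since the paper's computation was done in R), they can be obtained as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the 52-student mark matrix; the real analysis
# was performed in R on Dataset 1. The mixing matrix makes the features
# correlated so that a few components dominate, as in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 9)) @ rng.normal(size=(9, 9))

pca = PCA().fit(X)
ratios = pca.explained_variance_ratio_   # fraction of variance per component

# Cumulative variance explained by the first four components
# (81.5% for Dataset 1 at the 20% stage, per the text).
print(np.round(ratios[:4], 3), round(float(ratios[:4].sum()), 3))
```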
We visualize the variable contributions to the principal components PC1-PC4, aiming to give an interpretation of each principal component (see Figures \[Dataset1\_20\_2comp12\], \[Dataset1\_20\_2comp34\]). We deduce that: - PC1 corresponds to the *Analyze Task* cluster - PC2 corresponds to the *Apply Task* cluster - PC3 corresponds to the *Understand Task* cluster - PC4 corresponds to the *Evaluate Task* cluster All these tasks are in Boolean Algebra. Analogously, we perform PCA on Dataset 1 at stage 50%, obtaining the percentage of variance for every component. In particular, the first four components explain 76% of the variance. More specifically, the first principal component PC1 in this case explains 40.9% of the total variance, which means that about 2/5 of the information in the dataset is described by just the first principal component. PC1 and PC2 together explain 57.8% of the variance. - PC1 corresponds to the *Evaluate Task* cluster - PC2 corresponds to the *Apply Task* cluster - PC3 corresponds to the *Analyze Task* cluster - PC4 corresponds to the *Understand Task* cluster Accordingly, it can be inferred that tasks that fall under the *Evaluate* and *Analyze* categories of Bloom’s taxonomy (PC1 and PC2 in this case) are better indicators and predictors of student performance. This is because these task categories reflect the highest level of comprehension of the course material from the educational point of view. Hence, the performance of students in these tasks can provide intuitive insights into their overall projected performance in the course. ML Algorithms’ Parameter Tuning {#sec:ML\_app} -------------------------------- In this section, we describe the classifiers we built for each of the datasets, then we explain the approach used to select the best ensemble learners for the considered datasets. Note that the models were trained on the *raw* datasets and not on the principal components.
R was used to implement six classifiers and the ensemble learners. The six classifiers that we trained are SVM-RBF, LR, NB, k-NN, RF, and MLP. All the classifiers were trained using all the available variables and maximizing the Gini Index of a 3-fold cross validation [@CC1]. Note that 3-fold cross validation was chosen in order to reduce the variance. This is based on the fact that using a smaller value of $k$ for cross validation often results in a smaller variance and a higher bias [@dd]. On the other hand, 5 different splits of the data were used to reduce the bias of the models under consideration.\ The parameters of each model are tuned using the *grid search* optimization method in such a manner that the Gini Index is maximized. Grid search is a common method used to tune the hyperparameters of ML classification algorithms. In essence, it discretizes the values of the parameter set [@ee]. Models are then trained and assessed for all possible combinations of these values for all the parameters of the ML model used. Although this may seem computationally heavy, the grid search method benefits from the ability to perform the optimization in parallel, which reduces the overall computation time [@ee]. Table \[tab:ML\_model\_parameter\_range\] summarizes the range of values for the parameters of the different ML algorithms considered in this work. \[tab:ML\_model\_parameter\_range\] Note the following: - For the NB algorithm, the *usekernel* parameter represents the choice of the density estimator used. More specifically, *usekernel = true* implies that the data distribution is non-Gaussian and *usekernel = false* implies that the data distribution is Gaussian. - The LR algorithm is not included in the table because it has no parameters to optimize. The default function (namely the sigmoid function) was used by the grid search method to maximize the Gini Index.
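The grid-search procedure with a Gini-maximizing objective can be sketched as below, using the common identity Gini = 2*AUC - 1. This is a Python sketch on synthetic data rather than the R implementation used in the paper; the k-NN parameter grid is an illustrative assumption:

```python
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def gini(y_true, y_score):
    # Gini Index used throughout the paper: Gini = 2*AUC - 1
    return 2 * roc_auc_score(y_true, y_score) - 1

# Hypothetical stand-in data (the paper ran the equivalent workflow in R)
X, y = make_classification(n_samples=120, n_features=8,
                           weights=[0.6, 0.4], random_state=0)

grid = {"n_neighbors": [3, 5, 7, 9], "weights": ["uniform", "distance"]}
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

best = (-2.0, None)
# Exhaustively evaluate every parameter combination with 3-fold CV
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    fold_scores = []
    for tr, te in cv.split(X, y):
        clf = KNeighborsClassifier(**params).fit(X[tr], y[tr])
        fold_scores.append(gini(y[te], clf.predict_proba(X[te])[:, 1]))
    mean_gini = float(np.mean(fold_scores))
    if mean_gini > best[0]:
        best = (mean_gini, params)

print("best mean Gini:", round(best[0], 3), "with", best[1])
```

Because each parameter combination is evaluated independently, the outer loop is trivially parallelizable, which is the parallelism advantage mentioned above.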
For each algorithm and each dataset we show the list of features ordered by their importance, i.e. their impact on the predictions. This is meant to give only a rough idea of what the most important features are for each algorithm and each dataset, as the ordering, for such small datasets, heavily depends on the chosen split into training and test samples. For this reason, the weights of the predictors will not be specified. The final step was to select, for each problem, the best ensemble learner among all the possible ensemble learners that could be produced with the six classifiers. ### Dataset 1 - Stage 20% - *RF:* The classifier was trained using k-fold cross-validation with $k=3$. ![\[Dataset1\_20\_RF\_vars\]Dataset 1 - Stage 20% - Random Forest, Accuracy vs. mtry value](Fig14.png) Figure \[Dataset1\_20\_RF\_vars\] shows how the performance changes when choosing a different *mtry* value, i.e. the number of variables available for splitting at each tree node. For example, the optimal value of the *mtry* parameter for the first split was determined to be 3.\ The variables’ importance in terms of predictivity is described in Table \[tab:table\_dataset\_1\_vars\_weight\], which shows that the most relevant feature is ES3.3, followed by ES3.5, whereas ES1.1 does not have much impact on the predictions. \[tab:table\_dataset\_1\_vars\_weight\] - *SVM-RBF*: The SVM algorithm was trained with a radial basis function kernel. Table \[tab:table\_dataset\_1\_vars\_weight\] shows the list of predictors ordered by their impact on the output. In particular, we can see from the table that the top three variables are ES2.2, ES3.3 and ES3.5, and that ES1.1 and ES3.2 do not have much impact on the predictions. - *MLP:* The variables’ importance for the MLP classifier is shown in Table \[tab:table\_dataset\_1\_vars\_weight\]. - *NB:* The variables’ importance for the NB classifier is the same as that obtained for MLP and SVM, as shown in Table \[tab:table\_dataset\_1\_vars\_weight\].
- *k-NN:* We trained the k-NN classifier with different values of $k$. For Dataset 1, stage 20%, the best performance was obtained for $k=9$, and the list of variables ordered by their importance is shown in Table \[tab:table\_dataset\_1\_vars\_weight\], which is the same as the one obtained for the MLP, NB, and SVM classifiers. - *LR:* For the LR classifier, variables ES3.1, ES3.2 and ES1.2 have no impact on the predictions. The most important variables for this algorithm are ES2.2 and ES3.3, as shown in Table \[tab:table\_dataset\_1\_vars\_weight\]. In general, the most important features for all the classifiers are ES2.2, ES3.3 and ES3.5, which contributed to the first and second principal components, as we saw in Figure \[Dataset1\_20\_2comp12\]. ### Dataset 1 - Stage 50% - *RF:* The variables’ importance in terms of predictivity is described in Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\], which shows that the most relevant feature is ES4.2, followed by ES4.1, whereas ES3.2 does not have much impact on the predictions. Also note that the bottom 3 variables are the same as the ones shown for the RF classifier on Dataset 1 at stage 20%. \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\] - *SVM-RBF:* The list of predictors for the SVM classifier, ordered by their importance, is shown in Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\]. - *MLP:* The MLP classifier shares with the SVM classifier the same table of variables’ importance, see Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\]. - *NB:* The variables’ importance for the NB classifier is the same as that obtained for the MLP, SVM, and k-NN classifiers, see Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\]. - *k-NN:* We tried different values of $k$ when training the k-NN classifier. The final choice of $k$ for dataset 1 at the 50% stage was $k=5$.
For dataset 1 at the 50% stage, the list of variables ordered by their importance, shown in Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\], is the same as the one obtained for the MLP and SVM classifiers. - *LR:* For the LR classifier, variables ES2.2, ES3.2 and ES1.2 have no impact on the predictions. The most important variable for this algorithm is ES4.1, as Table \[tab:table\_dataset\_1\_50\_RF\_vars\_weight\] shows. In general, the most important features for almost all the classifiers are ES4.1, ES4.2 and ES5.1, which contributed to the first principal component. The only classifier that does not have ES5.1 in its top three variables is the LR classifier, which has ES3.3 in third position instead; variable ES3.3 also belongs to the first principal component. This further validates the results previously obtained from the PCA analysis, which illustrated the significance and contribution of each of the features in predicting the students’ final grade. ### Dataset 2 - Stage 20% We have only two features for Dataset 2, Stage 20%, and for all the classifiers the list of features ordered by importance is the same, see Table \[tab:table\_dataset\_2\_20\_predictors\]. \[tab:table\_dataset\_2\_20\_predictors\] ### Dataset 2 - Stage 50% For RF, SVM, MLP, k-NN and NB the lists of features are ordered in the same way, while for LR the ordering by importance is slightly different, as shown in Table \[tab:table\_dataset\_2\_50\_predictors\]. \[tab:table\_dataset\_2\_50\_predictors\] Based on the aforementioned results, it can be seen that assignments are better indicators of student performance. This can be attributed to the fact that students have more time to complete assignments and are often allowed to discuss issues and problems among themselves. Thus, students not performing well in the assignments can be an indication that they are not fully comprehending the material, resulting in a potentially lower overall final course grade.
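Importance rankings of the kind tabulated above can be approximated with model-agnostic permutation importance. The sketch below is a Python/scikit-learn stand-in (the paper used R's built-in, per-classifier variable-importance measures, which may rank differently), and the ES feature labels are reused purely as placeholders on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; the ES labels below are placeholders, not the
# paper's real task grades.
X, y = make_classification(n_samples=120, n_features=6, n_informative=3,
                           random_state=0)
names = ["ES1.1", "ES2.2", "ES3.3", "ES3.5", "ES4.1", "ES4.2"]

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature column in turn and measure the drop in score;
# a larger drop means the feature matters more for the predictions.
result = permutation_importance(rf, X, y, n_repeats=20, random_state=0)

order = np.argsort(result.importances_mean)[::-1]  # most important first
for i in order:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```

Because the ranking is computed from prediction degradation on a given sample, it inherits the split-dependence noted above: on small datasets the ordering can change between train/test splits.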
Proposed Ensemble learning model selection: a systematic approach ----------------------------------------------------------------- For each dataset and at each stage, a systematic approach was used to select the best subset of classifiers to consider for the ensemble learner. More specifically, the procedure was to evaluate the performance of every possible combination of the classifiers that we trained. The performance of each model was measured in terms of the Gini Index. We applied each model to the test sample, producing a score for each student that corresponds to the probability of being a Weak student. Note that although confusion matrices present a clear picture of correct and incorrect classifications for each class of objects, they are affected by the choice of a threshold on the probability score. For this reason, we rely on the Gini Index instead of evaluating every model using confusion matrices, as it is more robust and does not depend on the choice of threshold. The statistical significance of our results is determined by computing the p-values. The general approach is to test the validity of a claim, called the *null hypothesis*, made about a population. The alternative hypothesis is the one accepted if the null hypothesis is concluded to be untrue. A small p-value ($\leq 0.05$) indicates strong evidence against the null hypothesis, in which case the null hypothesis is rejected. ![\[fig:1.3\] The procedure of generating ensemble learners and creating the score for each ensemble learner](Fig15.png) For our purposes, the null hypothesis states that the Gini Indices were obtained by chance. We generated 1 million random scores from a normal distribution and calculated the p-value. The selected ensemble learners have p-value $\leq 0.05$, indicating that there is strong evidence against the null hypothesis. Therefore, choosing an ensemble model using a combination of Gini Index and p-value allows us to have a more statistically significant and robust model.
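The Monte-Carlo p-value just described can be sketched as follows, drawing random normal score vectors and measuring how often their Gini Index reaches the observed one. The labels and observed value are illustrative assumptions, and 10,000 draws are used instead of the paper's one million to keep the sketch fast:

```python
import numpy as np

def gini_index(y, scores):
    """Gini = 2*AUC - 1, with AUC from the rank-sum (Mann-Whitney) formula."""
    ranks = scores.argsort().argsort() + 1.0   # 1-based ranks (no ties assumed)
    n_pos, n_neg = y.sum(), (~y).sum()
    auc = (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return 2 * auc - 1

rng = np.random.default_rng(0)
y = np.repeat([True, False], [16, 24])   # hypothetical Weak/Good test labels
observed = 0.75                          # e.g. an ensemble's Gini on a split

# Null hypothesis: a Gini Index this high arises by chance alone.
# The paper drew 1e6 random normal score vectors; 1e4 keeps the sketch fast.
null = np.array([gini_index(y, rng.normal(size=y.size))
                 for _ in range(10_000)])
p_value = float(np.mean(null >= observed))
print("p-value:", p_value)
```

The null distribution is centered on a Gini of zero, so a large observed Gini yields a tiny empirical p-value, which is the sense in which the selected ensembles with p-value $\leq 0.05$ are deemed statistically significant.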
For each model we create a matrix consisting of the target variable and the score produced by the model, then we order the matrix by the score in descending order. In this way, at the top we find the students who are most likely to be Weak students, while at the bottom of the matrix we find the students who are least likely to be Weak students. Finally, for each dataset, we generate all the possible combinations of the six models and calculate the corresponding Gini Index. The procedure followed to produce each ensemble learner is summarized in Figure \[fig:1.3\]. Since the training and test samples are small, many of the ensemble learners produced had the same Gini Index and the performances seemed to depend on the split into training and test samples chosen at the beginning. For instance, Figure \[LR\_5\_splits\_small20\] shows the performance of the different classifiers on Dataset 1 at stage 20% on different splits. It can be seen, for example, that the performance of the LR classifier is very good on the first two splits considered (with Gini Index 88.9% and 76% respectively), whereas on the fourth split it performs very poorly (the Gini Index is only 17.8%). ![\[LR\_5\_splits\_small20\][Dataset 1 - Stage 20% - Performance of different classifiers on 5 splits]{}](Fig16.png) Since the performance of the classifiers, and hence of all the ensembles, depends very much on the split, instead of considering only one split we considered 5 additional splits of the dataset, namely split1 = (Training1, Test1), split2 = (Training2, Test2), split3 = (Training3, Test3), split4 = (Training4, Test4), split5 = (Training5, Test5), and ran the 6 algorithms on each split, training a total of $6 \times 5$ models, each one run using a 3-fold method and keeping track of the performances of each model also on the folds. We average the performance across the splits to remove any potential bias.
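The enumeration of all classifier combinations and the averaging of Gini Indices across splits can be sketched as below. The per-model scores are synthetic, and averaging the members' probability scores is an assumption about how an ensemble combines its votes (the paper's exact procedure is summarized in Figure \[fig:1.3\]):

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
models = ["SVM", "LR", "NB", "kNN", "RF", "MLP"]
n_splits, n_test = 5, 20

# Hypothetical per-split test labels and per-model probability scores;
# in the paper these come from the six trained R models on each split.
labels = []
for _ in range(n_splits):
    y = np.repeat([True, False], [8, 12])
    rng.shuffle(y)
    labels.append(y)
scores = {m: [np.clip(labels[s] * 0.5 + 0.7 * rng.random(n_test), 0, 1)
              for s in range(n_splits)] for m in models}

def gini(y, s):
    return 2 * roc_auc_score(y, s) - 1

rows = []
# Every non-empty subset of the six classifiers; size-one subsets are the
# individual classifiers, larger subsets are the ensemble learners.
for r in range(1, len(models) + 1):
    for combo in itertools.combinations(models, r):
        ginis = [gini(labels[s],
                      np.mean([scores[m][s] for m in combo], axis=0))
                 for s in range(n_splits)]
        rows.append((combo, float(np.mean(ginis))))

rows.sort(key=lambda row: row[1], reverse=True)   # order by average Gini
print("best subset:", rows[0][0], "avg Gini:", round(rows[0][1], 3))
```

Sorting by the across-split average rather than any single split's Gini is what makes the selected subset robust to a lucky or unlucky train/test partition.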
Note that every split was created randomly, as for the initial training and test samples, and in such a way that the target variable of each training and test sample was representative of the entire dataset. Afterwards, we compare the performances of the models on each split, and produce one table of all possible ensembles. Although from the literature we expect the ensemble learners to perform better than the individual classifiers, we also included the individual classifiers in the comparison, considering 64 combinations of classifiers as opposed to 57 (the actual ensemble learners).\ Finally, we created a table consisting of 64 rows, each one representing a specific ensemble, where in each row we have: - the first 6 entries, one for each algorithm, with 1’s and 0’s corresponding to the presence or absence of the algorithm in the ensemble. In particular, the individual classifiers correspond to those rows for which the sum of the first six entries is one. - entries 7 to 12, corresponding to the Gini Indices G, G1, G2, G3, G4, G5 associated with the initial split, split1, split2, split3, split4, split5. - entry 13, called Avg, corresponding to the average of G, G1, G2, G3, G4 and G5. A subset is shown in Table \[tab:table\_dataset\_1\_best\]. The table was ordered with respect to Avg, in descending order, and the top ensemble was selected as the best classifier. Moreover, the p-values were calculated and shown in the 14th column of the table, called p, and confirm that the selected ensemble is statistically significant. Results and Discussion {#sec:disc} ====================== In this section we discuss the results and select an ensemble learner for each of the four experiments. Finally, we set a threshold and evaluate the performance based on the corresponding confusion matrices. Results: Dataset 1 - Stage 20% ------------------------------ As explained, 30 models were trained, 6 on each of the 5 splits.
The top 3 models in terms of Gini Index are RF, NB and k-NN. Once the matrix with all possible ensemble learners is created, we order it with respect to the average Gini Index. Table \[tab:table\_dataset\_1\_best\] consists of the best 3 classifiers (one in each row). \[tab:table\_dataset\_1\_best\] In the table, the 1’s correspond to the presence of the model in the ensemble whereas the 0’s indicate that the corresponding model should not be included. We select, for Dataset 1 at stage 20%, the ensemble corresponding to the first row, i.e. the ensemble formed by RF and NB. The Gini Index associated with this ensemble learner for the original split is 75%. Although this Gini Index is not the highest reached on the initial dataset, we believe that the chosen ensemble is more robust, as it comes from the test on 6 different splits. The Gini Index of the chosen ensemble corresponds to the area between the model curve (light blue) and the straight line (in dark blue – no model) in Figure \[perf\_small20\_graph\]. Furthermore, it was observed that the number of Weak students decreases by a factor of 100 when moving from the highest-scoring to the lowest-scoring students. Although the selected ensemble shows neither the lowest false positive rate nor the highest accuracy, it is more robust than each individual classifier, i.e. it depends less on the split, and is hence more reliable when applied to a new dataset. Results: Dataset 1 - Stage 50% ------------------------------ Following the same procedure, we trained 30 models and compared the performances of the inferences on each test sample. RF and k-NN have the best performance, whereas LR has the worst performance on average on the datasets. The results obtained for the 3 folds agree with the ones obtained on the test samples: the top 3 models in terms of Gini Index are RF, NB and k-NN. \[tab:table\_dataset\_1\_50best\] The best 3 ensembles are shown in Table \[tab:table\_dataset\_1\_50best\].
The top three rows of the matrix of all possible ensemble learners, ordered with respect to the Avg, have the same Gini Index and the same p-value. Although all three are good candidates, we decided not to choose the third ensemble, which includes the MLP classifier, as it performed very poorly on certain splits. Since k-NN performed very well on all splits, we decided to include it in the ensemble. Accordingly, despite the fact that it may be more computationally expensive, we choose the ensemble corresponding to the second row of Table \[tab:table\_dataset\_1\_50best\], i.e. the ensemble formed by the RF, k-NN and SVM classifiers, to improve the probability of correctly classifying instances based on the classifiers’ votes. The ensemble learner has Gini Index = 92.9%, represented by the area between the model curve and the straight line in Figure \[perf\_small50\_graph\]. In particular, we can see from Figure \[perf\_small50\_graph\] that if we order the students by their probability of being a Weak student, we get 60% of Weak students in the first 30% of students, and 100% of Weak students in the first 50%, as opposed to only 30% and 50% respectively if we were not to use the model. Similarly, it was observed that the number of Weak students decreases by a factor of 100 when moving from the highest-scoring to the lowest-scoring students. Results: Dataset 2 - Stage 20% ------------------------------ In the same way, we trained 30 models and compared the performances of the inferences on each test sample. For this dataset, RF, SVM and k-NN do not perform well. The best classifiers in this case are LR, MLP and NB, and the results obtained for the 3 folds agree with the ones obtained on the test samples. The best 3 ensemble learners for Dataset 2 at stage 20% are shown in Table \[tab:table\_dataset\_2\_20best\]. \[tab:table\_dataset\_2\_20best\] Hence, for this dataset we select the ensemble learner formed by NB and LR.
The Gini Index of the selected ensemble is 89% and is represented by the area between the model curve and the straight line, as per Figure \[perf\_big20\_graph\]. In particular, 100% of Weak students are identified by the ensemble learner in the first 21.4% of students, ordered by score in descending order. Results: Dataset 2 - Stage 50% ------------------------------ For this dataset, RF, SVM and k-NN do not perform well. The best classifiers in this case are LR and MLP, followed by NB. For Dataset 2 at stage 50% we select the ensemble learner formed by MLP and LR, as shown in Table \[tab:ensembles\_big20part22\]. \[tab:ensembles\_big20part22\] The Gini Index of the selected ensemble is 89.9% and is represented by the area between the model curve and the straight line, see Figure \[perf\_big50\_graph\]. In particular, 100% of Weak students are identified by the ensemble learner in the first 28.28% of students, ordered by their probability of being a Weak student.\ Table \[perf\_mat\_base\] summarizes the specificity and sensitivity results of all the base learners for the two datasets. It is worth noting that the performance was evaluated using specificity and sensitivity because the studied datasets are imbalanced, a regular occurrence in educational datasets. Ensemble Learners ----------------- The selected ensemble learners are formed by: 1. RF and NB for Dataset 1 at stage 20% 2. RF, k-NN and SVM for Dataset 1 at stage 50% 3. NB and LR for Dataset 2 at stage 20% 4. MLP and LR for Dataset 2 at stage 50% It is worth noting that the RF classifier was chosen as part of the ensemble model at both stages for dataset 1. This is mainly due to the feature-rich nature of this dataset despite the low number of instances available. On the other hand, it can be seen that the LR classifier was an integral part of the ensemble model for dataset 2. This can be attributed to two main factors.
The first is that LR requires a relatively large sample size, which is the case for dataset 2 but not dataset 1. The second is that the LR classifier is sensitive to strong correlation (multicollinearity) between features. In the case of dataset 2, due to the low number of features considered at the 20% and 50% stages, the correlation between the features is limited, which suits the LR classifier. Hence, the LR classifier performed well as an individual classifier at both stages and was accordingly included as part of the ensemble model. Table \[perf\_mat\_ensembles\] illustrates the performances of the ensemble learners in terms of accuracy, precision, recall/sensitivity, F-measure, and specificity. These quantities depend on the *threshold* $\tau$. Based on the results shown in Table \[perf\_mat\_ensembles\], it can be seen that the proposed ensemble models achieve high accuracy and high specificity. This means that the proposed ensemble model selection approach resulted in a model that can help instructors identify students who may need help during the course delivery. In turn, this would allow the instructors to play a more proactive role in helping their students. As mentioned earlier, the performance was evaluated using specificity and sensitivity because the studied datasets are imbalanced, a regular occurrence in educational datasets. Research Limitations {#sec:limitation} ==================== Despite the promising results obtained using the proposed approach, this work suffers from some limitations that may have affected the results.
- \[A\] For Dataset 1, the main issue was the size: only 52 students could be considered for our experiment, and the models were trained on only 70% of them and tested on the remaining 30%, corresponding to a number of students which is not statistically meaningful.\ Although it is possible to use over-sampling techniques such as the Synthetic Minority Over-sampling Technique (SMOTE) to increase the sample size, such techniques increase the complexity of the model and may lead to over-fitting [@ch4_rev1d]. Thus, such techniques were not used in this work in order to preserve the real-life nature of the dataset under consideration. - \[B\] For Dataset 2, at the 20% stage, the number of students was not an issue, but we could only use two features to build the classifiers. In both cases it was not possible to obtain additional data: indeed, for the second dataset it took almost a year to obtain the data because of privacy restrictions. - \[C\] Another main factor is that there are many outliers, i.e. points that have very different characteristics from all the other points of the dataset, as seen in Figures \[fig:SVM50\] and \[fig:SVM20\]. These points correspond to students who performed well on all tasks except one, which they did not complete, obtaining a zero grade (e.g. a zero grade in the midterm). The classifiers are more likely to give a wrong prediction for these students. However, these outlier points could not be removed, as this would threaten the integrity and validity of the proposed analysis. - \[D\] Another issue encountered was that Dataset 2 is imbalanced, i.e. the percentage of Weak students in the target variable is very low. - \[E\] As we have seen in Section \[Dataset\_Visualization\], the datasets are non-linear and consequently any linear classifier would not perform well on such datasets. It is worth mentioning that these challenges and limitations are common when dealing with educational datasets.
However, despite all the issues encountered, we highlight that *the classifier was able to correctly predict the weak students*, as shown in Section \[sec:disc\]. Conclusion and Future Work {#sec:future} ========================== Educational data mining has garnered significant interest in recent years in an attempt to personalize and improve the learning process for students. Therefore, many researchers have focused on trying to predict the performance of learners. However, such a task is not simple to achieve, especially during course delivery. To that end, this work thoroughly explored and analyzed two different datasets at two separate stages of course delivery (20% and 50% respectively) using multiple graphical, statistical, and quantitative techniques. This analysis showed the non-linear nature of the collected data in addition to the correlation between the features. These insights were then used to help choose the classification algorithms and tune their parameters. Furthermore, a systematic ensemble learning model selection approach was proposed based on the combination of the Gini Index and a statistical significance indicator (the p-value) to predict students who may need help in an e-learning environment. Experimental results showed that the proposed ensemble models achieve high accuracy and a low false positive rate at all stages for both datasets. Based on the aforementioned research limitations, below are some suggestions for future work. The best way to address \[A\] and \[B\] would be to have more data available, by collecting training and testing datasets every time the course is offered. Unfortunately, not all data that would need to be added to our dataset can be collected, for privacy reasons. Even though we cannot collect data such as students’ personal information, we can collect data regarding their attendance, and we believe this feature would further help the classifier be more accurate.
Issue \[C\] suggests that we could build a predictive model that aims to classify the outliers: this would be particularly useful, as it would allow us to detect those students who seem to be performing well at first, but who are likely to end up *becoming weak students* because of just one task. We could use this predictive model to try to prevent this from happening. To build such a model, we would need information about the students’ attendance. A possible solution for issue \[D\] is usually given by the *Synthetic Minority Over-sampling TEchnique* (SMOTE): this algorithm consists of a combination of under-sampling the majority class (Good students) and over-sampling the minority class (Weak students) by adding synthetic points to the dataset [@R]. There are other methods to solve problems related to imbalanced datasets: for example, one could use k-NN and define outliers to be those points that are furthest from all the other points, or use SVM to find a hyperplane that isolates the outliers from the rest of the points. There are many methods and algorithms available in the literature, see [@T], and it would be interesting to run experiments using several techniques and compare their performance on this specific dataset.
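The interpolation step at the heart of SMOTE can be sketched in a few lines. This is a minimal illustration on hypothetical data; the full algorithm referenced above also combines this over-sampling with under-sampling of the majority class:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating a random seed point toward one of its k nearest
    minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]             # k nearest neighbours per point
    seeds = rng.integers(0, n, size=n_new)        # random seed points
    picks = nn[seeds, rng.integers(0, k, size=n_new)]
    gaps = rng.random((n_new, 1))                 # random position on the segment
    return X_min[seeds] + gaps * (X_min[picks] - X_min[seeds])

rng = np.random.default_rng(0)
weak = rng.normal(loc=2.0, size=(6, 4))   # hypothetical Weak-student feature rows
synthetic = smote_oversample(weak, n_new=10, k=3, rng=rng)
print(synthetic.shape)   # (10, 4)
```

Because every synthetic point lies on a segment between two real minority points, the new samples stay inside the region occupied by the minority class rather than being drawn from an arbitrary distribution.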
--- abstract: 'Copper carbodiimide, CuNCN, is a geometrically frustrated nitrogen-based analog of cupric oxide, whose magnetism remains ambiguous. Here, we employ a combination of local-probe techniques, including $^{63,\, 65}$Cu nuclear quadrupole resonance, $^{13}$C nuclear magnetic resonance and muon spin rotation to show that the magnetic ground state of the Cu$^{2+}$ ($S=1/2$) spins is frozen and disordered. Moreover, these complementary experiments unequivocally establish the onset of an intrinsically inhomogeneous magnetic state at $T_h=80$ K. Below $T_h$, the low-temperature frozen component coexists with the remnant high-temperature dynamical component down to $T_l = 20$ K, where the latter finally ceases to exist. Based on a scaling of internal magnetic fields of both components we conclude that the two components coexist on a microscopic level.' author: - 'A. Zorko' - 'P. Jeglič' - 'M. Pregelj' - 'D. Arčon' - 'H. Luetkens' - 'A. L. Tchougréeff' - 'R. Dronskowski' title: Magnetic inhomogeneity in the copper pseudochalcogenide CuNCN --- Introduction ============ Geometrical frustration, where lattice geometry prevents conventional magnetic ordering, is one of the major promoters of exotic phenomena in condensed matter. In geometrically frustrated antiferromagnets, perplexing magnetic states, such as spin glasses and spin liquids, with unconventional excitations are regularly encountered.[@lacroix2011introduction; @balents2010spin] Furthermore, even in uniform spin systems, competition among various nearly degenerate phases can lead to inhomogeneous magnetic states on a microscopic scale.[@schmalian2000stripe; @mu2002stripe; @kamiya2012formation] Various intriguing experimental cases that fall in this category have indeed been reported. 
The first type of examples are systems in which a fraction of all spins remains dynamical while the rest either order[@stewart2004phase; @zheng2006coexisting; @ling2017striped] or form valence bonds.[@nakajima2012microscopic] Further examples are realizations of multiple magnetic orders that locally compete,[@zorko2014frustration; @zorko2015magnetic; @nilsen2015complex] and a nanoscale modulation of magnetic order.[@pregelj2015spin; @pregelj2016exchange] The nitrogen-containing analog of cupric oxide, CuNCN,[@liu2005novel] is a potential contender for such a class of antiferromagnets. Its unit cell (Fig. \[fig0\]) has orthorhombic symmetry and corresponds to the $Cmcm$ space group. It possesses a gap at the Fermi surface[@liu2008characterization] and magnetic exchange coupling between $S=1/2$ spins localized on the Cu$^{2+}$ sites is predicted to be of the order of several hundreds of kelvins.[@tsirlin2010uniform; @tchougreeff2013low] Nevertheless, various experiments initially suggested that this compound lacks classical long-range magnetic ordering.[@liu2008characterization; @xiang2009theoretical] Strong exchange coupling explains a very small magnetic susceptibility, which exhibits an anomaly at $T_h \simeq 80$ K that remains controversial. A broad feature was observed around this temperature also in magnetic heat capacity, where another feature was witnessed around $T_l \sim 20$ K.[@tchougreeff2017atomic] The $T_h$ anomaly was theoretically ascribed either to long-range magnetic ordering[@tsirlin2010uniform; @tsirlin2012hidden] or to a spin-liquid instability.[@tchougreeff2013low; @tchougreeff2014mean] Spin-liquid instabilities, where systems cross from one spin-liquid state to another, have been experimentally detected in frustrated antiferromagnets on a few occasions. 
[@itou2010instability; @gomilsek2016instabilities; @gomilsek2017field; @klanjsek2017high] In CuNCN, the spin-liquid instability scenario assumes a transition from a gapless high-temperature spin liquid into a pseudo-gapped low-temperature spin liquid at $T_h$.[@tchougreeff2013low; @tchougreeff2014mean] This scenario can indeed offer an explanation for the observed susceptibility anomaly,[@zorko2011unconventional] as well as for accompanying structural changes[@tchougreeeff2012structural; @jacobs2013high] and for the unusual temperature dependence of the magnetic heat capacity revealing a relatively small amount of entropy released up to 150 K.[@tchougreeff2017atomic] On the other hand, the alternative scenario of long-range magnetic order is based on the detection of frozen local magnetic fields in muon spin relaxation ($\mu$SR) experiments[@zorko2011unconventional; @tsirlin2012hidden] and on strong quantum spin fluctuations that are predicted to reduce the ordered part of the magnetic moments beyond the sensitivity of most experimental techniques.[@tsirlin2012hidden] However, the $\mu$SR experiments have further revealed that the width of the static-local-field distribution is very large, being of the same size as the average field magnitude even at temperatures as low as 63 mK.[@zorko2011unconventional] This is clearly not a characteristic of an antiferromagnetic long-range order. Quite surprisingly, in CuNCN the frozen state, as detected by $\mu$SR, seems to be fully established over the entire sample only below $T_l$, while in the broad temperature range $T_h \gtrsim T \gtrsim T_l$ the $\mu$SR signal attributed to the remnant dynamical component progressively disappears with decreasing temperature (Fig. 
\[fig1\]b).[@zorko2011unconventional] A similar conclusion of two coexisting magnetic components was drawn from $^{14}$N nuclear magnetic resonance (NMR) experiments, which also disclosed a broad distribution of local environments below $T_h$.[@zorko2011unconventional] Yet, the two magnetic components have so far not been unambiguously detected and evaluated simultaneously in a single experiment. Furthermore, the intrinsic nature of such intricate magnetism has not been established either. In principle, static magnetism and the coexistence of two magnetic components could be triggered by external perturbations, e.g., by muons perturbing the local environment in the $\mu$SR experiments and by strong applied magnetic fields in the NMR experiments,[@zorko2011unconventional] if the dynamical disordered state were unstable. All these open questions about the magnetic inhomogeneity and the true nature of the magnetic ground state in CuNCN thus call for complementary experimental approaches that will clarify previous results and confront the existing theoretical proposals. Therefore, we have performed a nuclear quadrupole resonance (NQR) experiment, as well as additional NMR and $\mu$SR experiments. The NQR investigation, which is performed in zero applied field, eliminates possible effects of both the applied magnetic field and external probes on the magnetism of CuNCN. The $^{13}$C NMR has been chosen because the hyperfine coupling (Fig. \[fig0\]) of the $^{13}$C nuclei to the electronic magnetism is much smaller than that of the $^{14}$N nuclei, thus allowing for simultaneous detection of both static and dynamical magnetic components below $T_h$.
The $^{13}$C NMR thus complements the previous $^{14}$N NMR measurements that could detect only the dynamical magnetic component below $T_h$, while the static component was inaccessible.[@zorko2011unconventional] Finally, simultaneous detection of the two components is also achieved by a complementary $\mu$SR experiment in a strong transverse magnetic field (TF) that, in contrast to previous $\mu$SR experiments,[@zorko2011unconventional; @tsirlin2012hidden] by far exceeds the internal static fields. The new results reveal an intrinsic coexistence of two fundamentally different magnetic components, i.e., a dynamical and a static one, which in CuNCN compete on a microscopic scale in the broad temperature range between $T_h=80$ K and $T_l=20$ K. Experimental Details ==================== All experiments were performed on the same high-quality batch of CuNCN powders, as used in our previous local-probe magnetic investigations.[@zorko2011unconventional] The NQR spectra of the $^{63}$Cu nuclei with a natural abundance of 69% and the $^{65}$Cu nuclei with a natural abundance of 31% were recorded on a custom-built spectrometer using a solid-echo pulse sequence $\beta-\tau-\beta-\tau-{\rm echo}$ with the $\pi/2$-pulse length $\beta=4.8$ $\mu$s and the interpulse delay $\tau=20$ $\mu$s, by sweeping the frequency in $\Delta\nu = 50$ kHz steps. The nuclear spin-spin relaxation time $T_2$ was measured with the same pulse sequence on the $^{63}$Cu NQR peak by varying the delay $\tau$. The gyromagnetic ratios of the two isotopes are $^{63}\gamma = 2\pi\cdot 11.28$ MHz/T and $^{65}\gamma = 2\pi\cdot 12.09$ MHz/T. The $^{13}$C NMR spectra were recorded in a magnetic field of 9.4 T with parameters $\beta=9$ $\mu$s, $\tau=30$ $\mu$s and $\Delta\nu = 30$ kHz. The Larmor frequency in this case, given by the $^{13}$C nuclear gyromagnetic ratio $^{13}\gamma = 2\pi\cdot 10.71$ MHz/T, was $\nu_0=100.571$ MHz.
An inversion-recovery method was used for $^{13}$C spin-lattice relaxation ($T_1$) measurements, where the spectra were integrated within an 80 kHz window centered at positions $\nu_A=100.600$ MHz or $\nu_B=100.450$ MHz. The $\mu$SR experiment was performed on the General Purpose Surface-Muon Instrument (GPS) at the Paul Scherrer Institute, Switzerland. It was conducted in a transverse-muon-polarization mode in the magnetic field of 0.52 T applied along the muon beam direction and perpendicular to the up-down positron detectors. The field yielded muon-polarization precession at the Larmor frequency of $70.46$ MHz, given by the muon gyromagnetic ratio $\gamma_\mu =2\pi\cdot 135.5$ MHz/T. The angle between the initial polarization and the beam direction was about $45^{\circ}$, which resulted in the full positron-detector (muon) asymmetry of $A_0=0.206$. The muons, being spin-1/2 entities, can serve as efficient local probes of magnetism,[@yaouanc2011muon] as they are initially almost $100\%$ polarized in a $\mu$SR experiment. In a local field, their polarization precesses, which is effectively detected via the direction of the positrons emitted in muon decays. The resulting muon asymmetry, measured by oppositely facing positron detectors with respect to the sample, is linearly proportional to the polarization. Results ======= $^{63,\,65}$Cu Nuclear Quadrupole Resonance ------------------------------------------- All previous magnetic studies of CuNCN relied on applying a finite magnetic field.[@liu2008characterization; @zorko2011unconventional; @tsirlin2012hidden] In the case of a competition among different nearly-degenerate candidate ground states, as may occur in geometrically frustrated systems,[@lacroix2011introduction] the applied field can give preference to a particular state.
This was suggested previously as a possible explanation of the unusual magnetism of CuNCN below $T_h$.[@zorko2011unconventional] The present NQR experiment, performed in zero applied field, eliminates any such field-induced effect, if present. In CuNCN, there exists a single Cu crystallographic site. Since both Cu isotopes, $^{63}$Cu and $^{65}$Cu, possess nuclear spin $I=3/2$, each isotope gives a single NQR line (corresponding to the transitions $I_z=\pm 3/2 \leftrightarrow \pm 1/2$) when no static magnetic field is present in a material.[@abragam1961principles] Consequently, the $^{63,\, 65}$Cu NQR spectrum of CuNCN consists of two spectral lines (Fig. \[fig1\]a). The $^{63}$Cu line is about twice as intense due to the larger natural abundance of this isotope. The position of each line is given by the nuclear quadrupolar moment $Q$ of each isotope and the common electric-field-gradient (EFG) tensor of the crystal at the Cu site. As $^{63}Q/^{65}Q=1.08$, the $^{63}$Cu spectral line is positioned at a higher frequency than the $^{65}$Cu line. Accordingly, we find the $^{63}$Cu line at 31.79 MHz and the $^{65}$Cu line at 29.41 MHz at room temperature. The full-width-at-half-maximum (FWHM) of the $^{63}$Cu NQR line, $^{63}\delta=250(10)$ kHz, is larger than the $^{65}$Cu FWHM, $^{65}\delta=230(10)$ kHz, by the same factor $^{63}Q/^{65}Q$, demonstrating that the NQR line width in CuNCN is due to a distribution of EFGs. The measured widths of several hundred kHz must have an entirely static origin, since homogeneous broadening ($1/T_2$) due to spin-spin relaxation is of the order of only 10 kHz (inset in Fig. \[fig1\]d). The $^{63,\,65}$Cu NQR line position (Fig. \[fig1\]c) and line width (Fig. \[fig1\]d) exhibit rather pronounced temperature dependences. The line position, which is sensitive to changes in the crystal lattice through changes in the EFG tensor, exhibits an unusual maximum at $T_h=80$ K (Fig. \[fig1\]c).
This is in line with the nonmonotonic behavior of the lattice parameters that has recently been observed in synchrotron[@tchougreeeff2012structural] and neutron-diffraction experiments.[@jacobs2013high] Moreover, the line width also exhibits an anomaly at $T_h$, below which it starts increasing rapidly with decreasing temperature (Fig. \[fig1\]d), reminiscent of the $^{14}$N NMR case.[@zorko2011unconventional] Although the $^{63}$Cu NQR line width is larger than the $^{65}$Cu line width at high temperatures, the situation is reversed below $T_h$. At 50 K we find the ratio $^{63}\delta/^{65}\delta=0.90(3)$, which corresponds well to the ratio of the gyromagnetic ratios, $^{63}\gamma/^{65}\gamma=0.93$. Therefore, in addition to the EFG NQR broadening mechanism, there is a second broadening mechanism of magnetic origin that prevails below $T_h$. This demonstrates the growth of static magnetic fields at the copper sites below $T_h$ on the NQR time scale. The magnitude of the static magnetic fields below $T_h$ can be estimated from the magnetic broadening. Since the NQR line is a convolution of the two broadening mechanisms, for Gaussian line shapes the square of the full width is the sum of the squared individual contributions. For the $^{65}$Cu line at 50 K, the full line width of $730(30)$ kHz and the quadrupolar contribution of the order of $250(10)$ kHz yield the magnetic line width $^{65}\delta_{\rm mag}=690(30)$ kHz, which corresponds to a magnetic field of $\pi\,^{65}\delta_{\rm mag}/^{65}\gamma = 30(1)$ mT. This should be compared to a typical on-site field of 25 T felt by the nucleus due to the hyperfine coupling with the fully polarized, on-site Cu$^{2+}$ ($S=1/2$) electronic spin.[@abragam19702electron] The observed, relatively small broadening of the NQR spectra is thus clear evidence that the magnetic state of CuNCN, as detected by $^{63,\,65}$Cu NQR, remains predominantly dynamical even below $T_h$.
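As an illustrative cross-check (a minimal numerical sketch, not part of the original analysis), the quadrature deconvolution of the line width and the FWHM-to-field conversion $B=\pi\,\delta/\gamma$ used above can be reproduced directly; the same conversion applies to the $^{13}$C NMR and TF $\mu$SR widths quoted later in the text:

```python
import math

def quad_subtract(total_fwhm, part_fwhm):
    # FWHMs of two convolved Gaussian broadening mechanisms add in quadrature,
    # so the magnetic part is obtained by quadrature subtraction
    return math.sqrt(total_fwhm**2 - part_fwhm**2)

def fwhm_to_field_mT(fwhm_MHz, gamma_over_2pi_MHz_per_T):
    # B = pi * delta / gamma, with gamma = 2*pi * (gamma/2pi); result in mT
    return 1e3 * fwhm_MHz / (2.0 * gamma_over_2pi_MHz_per_T)

# 65Cu NQR at 50 K: 730(30) kHz total width, ~250(10) kHz quadrupolar part
delta_mag = quad_subtract(0.730, 0.250)      # ~0.69 MHz, quoted as 690(30) kHz
B_Cu = fwhm_to_field_mT(delta_mag, 12.09)    # ~29 mT, quoted as 30(1) mT
B_C = fwhm_to_field_mT(0.370, 10.71)         # 13C NMR at 5 K: ~17 mT
B_mu = fwhm_to_field_mT(5.3, 135.5)          # TF muSR FFT at 5 K: ~20 mT
```

All four values reproduce the numbers quoted in the text within the stated uncertainties.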
Next, we highlight another intriguing property of the low-temperature NQR spectra: a loss of signal, i.e., a spectral wipe-out, that occurs progressively below $T_h$ (Fig. \[fig1\]a). The NQR spectral intensity, when corrected for the $T_2$ relaxation effect (inset in Fig. \[fig1\]d) and the Boltzmann population factor $1/T$, shows a sudden drop below $T_h$, yet it vanishes completely only around $T_l$ (Fig. \[fig1\]b). This atypical behavior resembles the slow disappearance of the dynamical signal in the $\mu$SR experiment[@zorko2011unconventional] (Fig. \[fig1\]b), signaling that this is an intrinsic effect and not one induced by external perturbations, such as the implanted muons or the applied magnetic field. We note that the disappearance of the NQR signal speaks against two possible scenarios related to spin liquids. Firstly, in a spin liquid, an enhancement of nuclear relaxation can lead to a spectral wipe-out.[@zorko2008easy] Yet, in CuNCN, there is no drastic enhancement of the $T_2$ relaxation, which is always faster than the $T_1$ relaxation. Secondly, if a transition from a gapless to a gapped spin liquid were taking place at $T_h$,[@tchougreeff2013low; @tchougreeff2014mean] the opening of a spin gap would reduce spin excitations and a full-intensity NQR line would remain. Hence, we are led to the conclusion that the observed wipe-out is of a different origin. A plausible explanation is that a second magnetic component, unobservable in the present NQR experiment, emerges below $T_h$. In order to detect and characterize it, we next turn to complementary $^{13}$C NMR measurements.
$^{13}$C Nuclear Magnetic Resonance ----------------------------------- Similarly to the previous TF $\mu$SR and $^{14}$N NMR measurements,[@zorko2011unconventional] the $^{63,\,65}$Cu NQR can only follow the component that disappears with decreasing temperature below $T_h$, while the assumed second component is undetectable. NMR measurements on the $^{13}$C nuclei can overcome this drawback, as these nuclei are positioned at the center of the NCN group (Fig. \[fig0\]), are the most distant from the electronic magnetism of the Cu$^{2+}$ ions, and possess the weakest coupling among all the nuclei in CuNCN. Furthermore, contrary to the $^{63,\,65}$Cu and $^{14}$N nuclei, the $^{13}$C nuclei are spin-1/2 entities. Therefore, they do not couple to the EFG tensor and their NMR spectra are solely affected by the magnetism of CuNCN. At 300 K, the $^{13}$C NMR spectrum has an asymmetric powder line shape (Fig. \[fig2\]b), which is typical of a uniaxial local-field distribution that can be due either to an anisotropic hyperfine coupling or to a chemical shift. Simulation of the spectrum yields an isotropic local-field component of 240(5) ppm, shifting the line from the $^{13}$C Larmor frequency $\nu_0$, and an anisotropic component of 200(5) ppm, giving the line shape and line width. An additional isotropic Gaussian broadening with FWHM $^{13}\delta = 5.6$ kHz is needed, which we convolute with the powder spectrum to satisfactorily describe the NMR line shape. With decreasing temperature the isotropic Gaussian broadening becomes dominant and completely overshadows the anisotropic line shape below $\sim$100 K, where the Gaussian FWHM reaches $^{13}\delta=25$ kHz.
This pronounced magnetic broadening of the $^{13}$C NMR spectrum is reminiscent of the broadening found in the $^{14}$N NMR spectra, while the increase of the magnetic susceptibility in the same temperature range is much more moderate.[@zorko2011unconventional] The isotropic magnetic broadening implies inhomogeneous local magnetic fields, most likely due to short-range magnetic correlations. The shape of the $^{13}$C NMR spectrum changes drastically below $T_h$ (Fig. \[fig2\]a), where a much broader component ($^{13}$C$_2$) accompanies the narrower high-temperature component ($^{13}$C$_1$). The intensity of the $^{13}$C$_2$ component grows at the expense of the $^{13}$C$_1$ component, which steadily disappears as the temperature is decreased (Fig. \[fig2\]a). The $^{13}$C NMR spectra can be fit with two overlapping Gaussian components (Fig. \[fig2\]c) down to $T_l=20$ K, where the $^{13}$C$_1$ component completely disappears. The broad component is approximately four times broader than the narrow one (see section \[muSR\] for details on the quantitative analysis). This gradual transformation of the $^{13}$C NMR spectrum corroborates the above-presented wipe-out of the $^{63}$Cu NQR signal and the previous $\mu$SR results[@zorko2011unconventional] (Fig. \[fig1\]b). Importantly, it also provides the first direct insight into the low-temperature phase, which other measurements could not detect. The static local field at the $^{13}$C nuclear site, estimated from the FWHM $^{13}\delta_2=370$ kHz of the broad $^{13}$C$_2$ component at 5 K, is $\pi\,^{13}\delta_2/^{13}\gamma=17$ mT. The two $^{13}$C NMR components are further inspected on the basis of their spin-lattice relaxation rates $1/T_1$ (Fig. \[fig2\]d). Above $T_h=80$ K, i.e., in the magnetically homogeneous phase, we find a single-exponential relaxation at the frequency $\nu_A$, corresponding to the maximum of the narrow $^{13}$C$_1$ NMR component. The relaxation rate increases linearly with temperature.
Below $T_h$, a stretched-exponential relaxation (stretching exponent $\alpha<1$; see the inset in Fig. \[fig2\]d) is observed, while $1/T_1$ changes its trend and starts increasing with decreasing temperature. Since the amplitude of the broad $^{13}$C$_2$ component is very small at the $\nu_A$ position down to $\sim$50 K (Fig. \[fig2\]a), we attribute this inverse trend to the narrow $^{13}$C$_1$ NMR component. Indeed, a $T_1$ analysis in which the spectrum-integration window around the $\nu_A$ position is reduced in steps from the original 80 kHz down to 4 kHz does not yield any significant change of the $T_1$ values in this temperature range. Below $\sim$50 K, the amplitudes of the two components at the $\nu_A$ position become comparable; therefore, the extracted relaxation rate is an average of both components. We note that varying the integration window at these temperatures changes the $1/T_1$ value by less than 15%, with the maximal variation occurring at 30 K. This implies that the relaxation rates of the two NMR components are similar in magnitude. The observed changes in the stretching exponent $\alpha$ (inset in Fig. \[fig2\]d) demonstrate a broadening of the distribution of relaxation rates at $\nu_A$ below $T_h$, something regularly encountered in inhomogeneous magnetic phases. The same effect was previously observed by $^{14}$N NMR.[@zorko2011unconventional] To avoid the overlap of the two components, we additionally performed $^{13}$C NMR $1/T_1$ measurements at the $\nu_B$ position, where the contribution of the narrow $^{13}$C$_1$ component is negligible at all temperatures (Figs. \[fig2\]a, \[fig2\]c). Therefore, we attribute this relaxation solely to the broad $^{13}$C$_2$ component. The measured relaxation rate $1/T_1$ at $\nu_B$ increases with temperature in a quasi-linear fashion, and a stretched-exponential relaxation ($\alpha=0.7$) is found, similar to the $\nu_A$ position.
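The standard stretched-exponential recovery law underlying such $T_1$ fits can be sketched numerically (a minimal illustration; only $\alpha=0.7$ is taken from the text, all other parameter values are hypothetical):

```python
import math

def inversion_recovery(t, T1, alpha, M0=1.0):
    """Stretched-exponential inversion-recovery curve,
    M(t) = M0 * (1 - 2 * exp(-(t/T1)**alpha)).
    A stretching exponent alpha < 1 mimics a distribution of relaxation
    rates 1/T1, as expected in an inhomogeneous magnetic phase."""
    return M0 * (1.0 - 2.0 * math.exp(-(t / T1) ** alpha))

# recovery delays in units of T1
delays = [0.01, 0.1, 1.0, 10.0]
single = [inversion_recovery(t, T1=1.0, alpha=1.0) for t in delays]
stretched = [inversion_recovery(t, T1=1.0, alpha=0.7) for t in delays]
# Compared to a single exponential, the stretched curve recovers faster at
# short delays and slower at long delays -- the hallmark of a rate distribution.
```

Fitting this form to the measured recoveries yields both $T_1$ and $\alpha$ at each temperature; the narrowing of $\alpha$ below 1 then tracks the broadening of the $1/T_1$ distribution discussed above.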
We note that the relaxation at $\nu_B$ could not be measured reliably at temperatures above 40 K due to the weak signal. The $^{13}$C $1/T_1$ rate normalized by the factor $(^{13}\gamma)^2$, which is proportional to the square of the local fluctuating magnetic fields, is about three orders of magnitude below the corresponding value for $^{14}$N at room temperature.[@zorko2011unconventional] This can be rationalized by a significantly reduced spin density at the $^{13}$C site compared to the $^{14}$N site. Firstly, such a reduction of the spin density, despite the fact that the exchange interaction bridged by the NCN$^{2-}$ group should be extremely large,[@tsirlin2010uniform; @tchougreeff2013low] is expected based on local-density approximation (LDA) calculations. These predict that the highest occupied molecular orbital (HOMO) of NCN$^{2-}$, overlapping with the Cu$^{2+}$ orbital, is composed mainly of the two nitrogen orbitals due to $\pi$ bonding, while the contribution of the carbon orbitals is minor.[@tsirlin2010uniform] Secondly, filtering effects are possibly important already at room temperature, as the predicted dominant antiferromagnetic exchange interaction along the $c$ crystallographic axis exceeds room temperature.[@tsirlin2010uniform; @tchougreeff2013low] The position of the $^{13}$C nucleus on the NCN$^{2-}$ bond with respect to the surrounding copper moments is more symmetric than the position of the $^{14}$N nucleus (Fig. \[fig0\]). As a result, the $^{13}$C nucleus filters out antiferromagnetic correlations along the $c$ axis mediated by the isotropic part of the transferred hyperfine coupling, while the $^{14}$N nucleus does not. The significantly reduced hyperfine coupling on the $^{13}$C site compared to the other nuclear sites in CuNCN also explains why both magnetic components that appear below $T_h$ can be observed by $^{13}$C NMR.
In contrast, only the dynamical high-temperature component can be detected by $^{63}$Cu NQR and $^{14}$N NMR,[@zorko2011unconventional] while the low-temperature component, possessing larger frozen fields, is inaccessible to these two types of nuclei. Muon Spin Rotation {#muSR} ------------------ To further characterize the two magnetic components appearing below $T_h$, we turn to $\mu$SR. Previous zero-field (ZF) $\mu$SR experiments on CuNCN demonstrated that the internal field $B_{int}$ in the low-temperature magnetic phase is frozen and broadly distributed.[@zorko2011unconventional; @tsirlin2012hidden] The distribution width (18 mT at 5 K) was found to be similar to the average field magnitude. Due to the inhomogeneous magnetism below $T_h$, it was essentially impossible to fit the ZF data unequivocally between $T_h$ and $T_l$. The TF measurements, on the other hand, have so far been performed only in a weak applied field $B_{TF}=2\,{\rm mT}\ll B_{int}$. In this case, only the signal of the dynamical high-temperature component could be traced below $T_h$ and was gradually lost with decreasing temperature (Fig. \[fig1\]b), while the $\mu$SR signal corresponding to the magnetically frozen component was undetectable.[@zorko2011unconventional] Further insight into the inhomogeneous magnetism, complementary to the previous $\mu$SR investigations and the present $^{63}$Cu NQR and $^{13}$C NMR, can be gained from TF measurements in a strong external field ($B_{int}\ll B_{TF}$), where $B_{int}$ represents only a small perturbation to $B_{TF}$. The muon asymmetry $A$ measured in the applied field $B_{TF}=520$ mT shows a pronounced evolution with temperature (Fig. \[fig3\]a). At 290 K, a single slowly relaxing oscillating component is observed. At temperatures below $T_h=80$ K, on the other hand, two oscillating components are clearly seen. The first component retains the slow relaxation, while the second component relaxes much faster (Fig. \[fig3\]c).
This can be further visualized by a Fourier-transform analysis of the oscillations (Fig. \[fig3\]b). The analysis reveals a single-component ($\mu_1$) narrow spectrum above $T_h$ and an additional broad component ($\mu_2$) appearing below this temperature. The intensity of the $\mu_1$ component decreases below $T_h$ with decreasing temperature and completely disappears below $T_l=20$ K. These results are thus in complete agreement with the $^{13}$C NMR spectra, which also reveal the simultaneous presence of both the narrow and the broad component between $T_h$ and $T_l$. At 5 K, the $\mu$SR FFT spectrum is Gaussian (Fig. \[fig3\]b). Its FWHM of $\delta_\mu=5.3$ MHz corresponds to a static-field-distribution width of $\pi \delta_\mu/\gamma_\mu =20$ mT, which is in perfect agreement with the previous ZF $\mu$SR results.[@zorko2011unconventional] We note that internal fields in the range 10–100 mT are typically detected by muons in frozen magnetic insulators with spin-1/2 moments.[@yaouanc2011muon] Therefore, the frozen moment in the broad $\mu_2$ component must represent a significant part of a full Bohr magneton. In contrast, the width of the narrow $\mu_1$ component is about 25 times smaller (see below), thus once more revealing its predominantly dynamical nature. A more quantitative insight is obtained from fitting the muon data in the time domain (Fig. \[fig3\]a). The muon asymmetry can be modeled with a single damped cosine component above $T_h$ and below $T_l$, while two such components with different relative amplitudes $A_i$, frequencies $\nu_i$, and relaxation rates $\lambda_i$ are needed in the intermediate temperature regime, where $$\label{eq1} A(t) =A_1\cos \left(2\pi\nu_1 t \right){\rm e}^{-(\lambda_1 t)^2} +A_2\cos \left(2\pi\nu_2 t \right){\rm e}^{-(\lambda_2 t)^2}.$$ The two amplitudes are found to sum to the full asymmetry of $A_0=0.206$ at all temperatures. Their temperature dependence, shown in Fig.
\[fig4\]a, quantifies, in terms of volume fractions, the gradual transition from the dynamical $\mu_1$ to the frozen $\mu_2$ component between $T_h$ and $T_l$. Identical results (Fig. \[fig4\]a) are obtained from fitting the complementary $^{13}$C NMR spectra with two Gaussian contributions (Fig. \[fig2\]c) between $T_l$ and 60 K, above which the fits become unreliable due to the weak intensity of the broad component. The disappearance of the high-temperature component (1) and its broadening with decreasing temperature in both experiments (Fig. \[fig4\]b) also correspond nicely to the behavior of the $^{63}$Cu NQR signal. The temperature evolution of the $\mu$SR relaxation rates $\lambda_i$ and the $^{13}$C line widths $^{13}\delta_i$ of the two components is displayed in Fig. \[fig4\]b. We note that both $\lambda_i$’s exceed the longitudinal muon relaxation rate[@tsirlin2012hidden] and both $^{13}\delta_i$’s exceed the $^{13}$C spin-lattice relaxation rates by at least an order of magnitude. In general, the $\lambda_i$’s and $^{13}\delta_i$’s contain a contribution due to a static-field distribution and a contribution due to dynamical fields fluctuating at the Larmor frequency, while the longitudinal muon relaxation and the nuclear spin-lattice relaxation are solely due to dynamical fields.[@abragam1961principles] Therefore, the static-local-field contribution to $\lambda_i$ and $^{13}\delta_i$ must dominate. The static-field distributions increase sizably already in the dynamical high-temperature magnetic component (1) as the temperature decreases from room temperature towards $T_h=80$ K (Fig. \[fig4\]b), implying the presence of short-range correlations. Below $T_h$, the increasing trend of $\lambda_1$ and $^{13}\delta_1$ with decreasing temperature becomes even more pronounced. Interestingly, this increase is correlated with the increasing relaxation/width of the other, frozen component (2) that emerges below $T_h$.
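The two-component fit function of Eq. (1) can be sketched numerically as follows (a minimal illustration; the parameter values below are hypothetical mid-range choices, with both precession frequencies set near the 70.46 MHz Larmor frequency and $\lambda_2/\lambda_1 \sim 25$ as quoted above):

```python
import math

A0 = 0.206  # full positron-detector asymmetry

def asymmetry(t, A1, A2, nu1, nu2, lam1, lam2):
    """Two-component TF muSR asymmetry, Eq. (1): each component is a cosine
    precessing at nu_i, damped by a Gaussian envelope exp(-(lam_i * t)**2)."""
    return (A1 * math.cos(2 * math.pi * nu1 * t) * math.exp(-(lam1 * t) ** 2)
            + A2 * math.cos(2 * math.pi * nu2 * t) * math.exp(-(lam2 * t) ** 2))

# illustrative parameters: t in microseconds, nu in MHz, lam in 1/us
params = dict(A1=0.12, A2=A0 - 0.12, nu1=70.46, nu2=70.46, lam1=0.8, lam2=20.0)
a_start = asymmetry(0.0, **params)  # at t = 0 the full asymmetry A0 is recovered
a_late = asymmetry(0.2, **params)   # by ~0.2 us the fast-damped frozen
                                    # component (2) has died out entirely
```

Fitting this form to the measured asymmetry at each temperature yields the amplitudes $A_1$ and $A_2$ (and hence the volume fractions of Fig. \[fig4\]a) together with the relaxation rates $\lambda_1$ and $\lambda_2$.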
Discussion ========== Let us first compare our results to previous magnetic studies of CuNCN. Tsirlin [*et al.*]{} dubbed the low-temperature phase of CuNCN a “hidden magnetic order”.[@tsirlin2012hidden] We have indeed found complementary evidence of frozen magnetism at low temperatures in this study. However, in addition to the fact that magnetic Bragg peaks are absent,[@xiang2009theoretical] there are several experimental findings that speak strongly against long-range magnetic order. First, we emphasize that the width of the distribution of the internal static fields is extremely large, i.e., of the order of the average field magnitude. This property, initially revealed by ZF $\mu$SR experiments,[@zorko2011unconventional; @tsirlin2012hidden] is here confirmed by the broad Gaussian TF $\mu$SR and $^{13}$C NMR spectra at low temperatures. In contrast, box-shaped spectra would be observed for a powder sample in the case of a well-defined internal field. Second, we stress that the $^{13}$C spin-lattice relaxation rate at low temperatures shows a linear temperature dependence, which also does not agree with ordinary 3D antiferromagnetically ordered states. Namely, for the usually dominant Raman magnon-scattering process one generally finds $1/T_1\propto T^3$ for temperatures much above the gap in the spin-wave spectrum.[@beeman1968nuclear] If the temperature is lower than the gap or if higher-order magnon-scattering terms are dominant, the temperature dependence of $1/T_1$ will be even steeper.[@beeman1968nuclear] These observations show that the low-temperature magnetic state of CuNCN is a frozen, highly magnetically disordered state. In terms of static magnetic fields, this state is closer to a spin-glass-like state than to long-range antiferromagnetic order.
However, the canonical spin-glass picture does not apply, since the onset of spin freezing at $T_h=80$ K seems to be independent of the magnetic field, as both the zero-field NQR and the NMR experiments performed in 9.4 T show essentially the same results. This field independence and the lack of any zero-field-cooled/field-cooled magnetic irreversibility[@zorko2011unconventional] differentiate CuNCN from other, more conventional disordered frozen spin systems.[@binder1986spin] In another proposal, a competing scenario of a spin-liquid instability at $T_h=80$ K was put forward.[@zorko2011unconventional] This scenario was based on (i) the prediction of an extremely large exchange coupling,[@liu2008characterization; @tsirlin2010uniform] potentially allowing for a correlated magnetic state already at temperatures as high as room temperature, (ii) the sudden decrease at $T_h$ of the spin-only magnetic susceptibility deduced from ESR,[@zorko2011unconventional] (iii) the failure of even polarized neutron diffraction experiments to detect any magnetic Bragg peaks down to the lowest temperatures,[@xiang2009theoretical] and (iv) the possible fragility of spin liquids to external perturbations, such as implanted muons in the $\mu$SR experiment and the high magnetic fields applied in the NMR experiments. The present experiments eliminate the uncertainties of point (iv), unambiguously reveal frozen magnetic fields, and thus leave much less room for any interpretation based on a transition between a gapless high-temperature spin liquid and a pseudo-gapped low-temperature spin liquid. All complementary experimental techniques employed in our study, even the zero-field NQR, show the same general features.
The data are in fact entirely consistent with an inhomogeneous state comprising two magnetic components below $T_h$: one essentially dynamical and the other hosting large static magnetic fields. Such a two-component picture also explains the decrease of the ESR susceptibility[@zorko2011unconventional] below $T_h$. An interesting finding of the two experiments that are able to detect both magnetic components between $T_h$ and $T_l$ ($^{13}$C NMR and TF $\mu$SR) is that internal fields emerge also in the dynamical phase and that the broadening of the dynamical $^{13}$C$_1$ component $^{13}\delta_1$ and the relaxation of the dynamical $\mu_1$ component $\lambda_1$ scale with the broadening $^{13}\delta_2$ and relaxation $\lambda_2$ of the corresponding frozen components (Fig. \[fig4\]b). This indicates that the dynamical component and the frozen component are intertwined rather than being phase segregated. Furthermore, the scaling suggests that the small static fields observed in the dynamical component actually originate from the magnetically frozen component, implying a microscopic mixture of both components. We find that the scaling ratios $\lambda_2/\lambda_1\sim 25$ and $^{13}\delta_2/^{13}\delta_1\sim 4$ differ substantially (Fig. \[fig4\]b). This is most likely because the value of $^{13}\delta_2$ is relatively reduced by the symmetric position of the $^{13}$C nucleus on the NCN$^{2-}$ bond with respect to the surrounding frozen copper moments (Fig. \[fig0\]).
For the isotropic part of the hyperfine coupling, this symmetry causes a filtering-out of antiferromagnetic correlations in both the $a$ and $c$ crystallographic directions, along which the exchange couplings are predicted to be dominant.[@tsirlin2010uniform; @tchougreeff2013low] This does not apply to the usually much less symmetric position of the muon stopping site.[@yaouanc2011muon] On the other hand, the ratio $^{63,65}\delta_2/^{63,65}\delta_1$ of the Cu NQR line widths should be enhanced compared to the $\lambda_2/\lambda_1\sim 25$ ratio. The reason is that the $^{63,65}\delta_1$ line width of the detectable dynamical component originates from a transferred hyperfine coupling with neighboring static Cu$^{2+}$ electronic moments (Fig. \[fig0\]), while the contribution of the dynamical on-site moments is reduced due to exchange narrowing.[@abragam1961principles] In contrast, for the undetectable frozen component the $^{63,65}\delta_2$ line width is determined by the hyperfine coupling to the on-site static moments. The on-site hyperfine coupling is usually orders of magnitude larger than the transferred coupling. Therefore, the frozen $^{63,65}$Cu NQR component should be much broader than the dynamical component, which explains why we were unable to detect it. Magnetic inhomogeneities similar to the one observed in CuNCN between $T_h=80$ K and $T_l=20$ K have been related in the literature[@stewart2004phase; @zheng2006coexisting; @ling2017striped; @nakajima2012microscopic; @zorko2014frustration; @zorko2015magnetic; @nilsen2015complex; @pregelj2015spin; @pregelj2016exchange] to geometrical frustration of the underlying spin lattice leading to a degenerate ground-state manifold. The coexistence of different magnetic components on a microscopic scale is then a natural way to release the frustration.
Although the exact spin model of CuNCN is at present still a subject of debate,[@tsirlin2010uniform; @tchougreeff2013low] the observation of magnetic inhomogeneity in the broad temperature range between $T_h$ and $T_l$ speaks in favor of a highly frustrated spin model, rather than any model where geometrical frustration is negligible. The gradual onset of the frozen magnetic phase below $T_h=80$ K is reminiscent of disorder-broadened first-order phase transitions,[@manekar2001first; @kumar2006relating] regularly encountered in the formation of metastable magnetic states with glassy characteristics, such as those found in colossal magnetoresistive manganites.[@tokura1996competing; @dagotto2005complexity] However, the latter phenomenon is generally field dependent, shows thermal hysteresis, and relies on the presence of quenched disorder or accommodation strain.[@dagotto2005complexity; @ahn2004strain; @wu2006magnetic] As already emphasized above, there is no apparent field dependence of the spin freezing in CuNCN below $T_h$, at least for fields up to 9.4 T. We also observed no thermal hysteresis in any of our experiments. Furthermore, the fraction of magnetic impurities in our sample was estimated to be as low as 0.04%.[@zorko2011unconventional] Therefore, the scenario of a disorder-broadened first-order phase transition that would result in an “undercooled” dynamical state in CuNCN below $T_h$ seems unlikely.
On the other hand, we note that a recent high-resolution synchrotron study of CuNCN revealed anisotropic broadening of Bragg peaks, especially those with reflection indices related to the crystallographic $b$ axis.[@tsirlin2012hidden] This suggests the presence of microstructural irregularities, which, surprisingly, are anticorrelated with the chemical inhomogeneity, as the anisotropic broadening is most pronounced in samples with the lowest amount of impurities and almost perfect stoichiometry.[@tsirlin2012hidden] In this respect, CuNCN resembles the case of $\alpha$-NaMnO$_2$. There, a similar kind of anisotropic broadening of Bragg reflections was shown to originate from near-degenerate crystal structures where geometrical frustration of the spin lattice led to a magnetostructurally inhomogeneous phase-separated ground state.[@zorko2014frustration; @zorko2015magnetic] Although only one stable crystallographic phase has been experimentally found in CuNCN so far, it is reassuring to note that two polymorphs exist for the sister compound HgNCN,[@liu2002synthesis] where their energy difference is rather small.[@liu2003experimental] Alternatively, extremely low-frequency flexural modes of CuNCN along the $b$ axis[@tchougreeff2017atomic] could also explain the anisotropic broadening of synchrotron Bragg peaks.[@tsirlin2012hidden] These or similar modes can also be involved in the formation of the inhomogeneous magnetic state, as observed in our experiments on a time scale longer than $\sim$0.1 $\mu$s, if the magnetoelastic coupling is substantial. Finally, we highlight a signature of a structural change found in CuNCN by the Cu NQR experiment. At $T^{*}=200$ K, a clear peak is observed in the spin-spin relaxation rate $1/T_2$ (inset in Fig. \[fig1\]d), which is accompanied by a notable line-width increase below the same temperature (Fig. \[fig1\]d).
As no magnetic anomaly has been observed so far at 200 K in any experiment, we attribute this behavior to a subtle structural effect, like freezing-out of a particular lattice excitation. In CuNCN, flexural phonon modes and libration modes are limited to rather low frequencies/energies[@tchougreeff2017atomic] and may thus fall into the time-window of the $1/T_2$ measurements. Conclusions =========== The combination of complementary local-probe techniques of $^{63,65}$Cu NQR, $^{13}$C NMR and TF $\mu$SR employed in this study has unveiled a remarkably complex magnetic state of CuNCN. We have firmly established that the magnetic state below $T_h=80$ K is intrinsically inhomogeneous, as the same kind of behavior is observed by all three experimental techniques, including NQR, which presents no perturbation to the physical system whatsoever. On decreasing temperature below $T_h$ towards $T_l=20$ K, the high-temperature dynamical component continuously transforms into the essentially disordered low-temperature frozen component, as evidenced by both the two-component $^{13}$C NMR spectra (Fig. \[fig2\]c) and the two-component $\mu$SR signal (Fig. \[fig3\]b). Importantly, we find that the line width/damping of the dynamical component detected by $^{13}$C NMR/$\mu$SR scales with the magnetically frozen component. This experimental finding demonstrates a mutual magnetic coupling between the two components and eliminates the possibility of phase segregation in favor of the two components coexisting on a microscopic scale. Further in-depth investigations are needed to ultimately unveil the corresponding microscopic mechanism of the intriguing magnetic behavior of CuNCN. 
--- abstract: 'We describe a semiconductor multilayer structure based on acoustic phonon cavities and achievable with MBE technology, designed to display acoustic phonon Bloch oscillations. We show that forward- and backscattering Raman spectra give a direct measure of the created phononic Wannier-Stark ladder. We also discuss the use of femtosecond laser pulses for the generation and direct probing of the induced phonon Bloch oscillations. We propose a gedanken experiment based on an integrated phonon source-structure-detector device, and we present calculations of pump-probe time-dependent optical reflectivity that evidence temporal beatings in agreement with the Wannier-Stark ladder energy splitting.' address: - '$^1$Centro Atómico Bariloche & Instituto Balseiro, C.N.E.A., 8400 S. C. de Bariloche, R. N., Argentina' - '$^2$Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, 91460 Marcoussis, France' author: - 'N. D. Lanzillotti Kimura$^1$, A. Fainstein$^1$, and B. Jusserand$^2$' title: 'Phonon Bloch oscillations in acoustic-cavity structures' --- Electronic Bloch oscillations, that is, oscillations of an electron induced by a [*constant*]{} electric field in the presence of a periodic potential, [@Bloch] are a beautiful and clear example of quantum effects in solids. When an electric field is applied on a charged particle in a crystal, its wavevector increases with time. Thereafter, Bragg interference leads to a velocity reduction, and finally to a sign change at the band edge. Notwithstanding its simplicity, for many years the issue was controversial and only quite recently has the existence of electronic Bloch oscillations been definitively established. In normal metals the large Brillouin zone and electron relaxation lead to an overdamped behavior characterized by Ohm’s law. Instead, electronic Bloch oscillations are observable in superlattices (SLs) due to the Brillouin zone reduction.
[@SL's] Very recently, photon Bloch oscillation devices based on optical microcavities have been proposed, [@Malpuech] and related reflectivity and time-resolved optical transmission features have been observed in structures grown on porous silicon. [@PRL's] The concept behind these devices is also simple and elegant. The optical microcavities provide the discrete photonic states, which can couple through the photon mirrors, thus leading to photonic minibands. The spatially dependent energy gradient, equivalent to the electric field for electrons, can be achieved, e.g., by varying the refractive index. [@Bloch] As compared to electrons, photon dephasing mechanisms are less effective and thus Bloch oscillations are, in principle, easier to observe. On the other hand, photonic minibands require a large number of optical microcavities (around 30-50), amounting to total thicknesses larger than 40-50 microns. [@Malpuech] This is too large for current molecular beam epitaxy (MBE) semiconductor technology. For this reason the reported structures were grown using electrochemical methods of porous silicon nanostructuring that do not have this limitation. [@PRL's] The drawback is that the quality of the samples is not as good, layer interfaces and refractive indices are not as well controlled, and residual optical absorption becomes an important issue in the performance of the structures. Sound in solid media is also described by a wave equation, the relevant parameters being the material density and sound velocity, with the boundary conditions establishing the connection between displacement and strain between two different materials. Extending the above concepts applied to electrons and photons to phononic structures, in this letter we propose semiconductor multilayer structures capable of displaying [*acoustic phonon*]{} Bloch oscillations. The building blocks of the structures are acoustic phonon cavities, as recently introduced by Trigo and coworkers. [@Trigo]
The phonon wavelength is only a few nanometers (phonon frequencies being in the THz range), and thus the full sample size and required interface layer quality are achievable with current MBE technology. In addition, acoustic phonon mean free paths are large [@Philosophical] compared to the structure size (typically below a micron) and thus dephasing is not a critical issue. We present calculations of phonon reflectivity, displacement distribution, and time evolution upon excitation with a localized phonon source to illustrate the device behavior and to demonstrate the existence of a phonon Wannier-Stark ladder (WSL) and Bloch oscillations (BO). In addition, using a photoelastic model we calculate the Raman spectra, which we show give a direct measure of the created phononic WSL. Phonon BO can be independently probed using coherent phonon generation techniques. For this purpose we propose a specifically designed device and we present calculations of the time dependent reflectivity induced by femtosecond pulses in this structure. A periodic stack of two materials with contrasting acoustic impedances reflects sound. [@Narayanamurti] The first $k=0$ folded phonon minigap in a SL is maximized with the layer thickness ($d$) ratio given by $d_1/v_1=d_2/(3v_2)$, the stop-band and reflectivity of such a phonon mirror being determined by the acoustic impedance mismatch $Z=\rho_1v_1/\rho_2v_2$ and the number of SL periods. [@Lacharmoise; @Jusserand] A phonon cavity can be constructed by enclosing between two SLs a spacer of thickness $d_c=m\lambda_{c}/2$, where $\lambda_{c}$ is the acoustic phonon wavelength at the center of the phonon minigap. [@Trigo] The cavity confined modes correspond to discrete energy states within the phonon stop-band, their width (i.e., the cavity Q-factor) being determined by the phonon mirror reflectivity.
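As a back-of-the-envelope check of these design rules, the layer thicknesses follow directly from the target frequency and the sound velocities. The sketch below is only illustrative: the LA sound velocities for GaAs and AlAs are nominal literature values (assumptions, not quoted in the text), and the assignment of the $3\lambda/4$ layer to GaAs is inferred from the thicknesses quoted for the example structure discussed below.

```python
# Sketch: thicknesses of a 3λ/4 (GaAs) / λ/4 (AlAs) mirror period and a
# λ GaAs spacer tuned to a target phonon frequency given in cm^-1.
# V_GAAS and V_ALAS are nominal LA sound velocities (assumed values).
C_CM = 2.9979e10                     # speed of light in cm/s (cm^-1 -> Hz)
V_GAAS, V_ALAS = 4730.0, 5650.0      # LA sound velocities in m/s (assumed)

def design(nu_cm):
    """Return (3λ/4 GaAs, λ/4 AlAs, λ GaAs cavity) thicknesses in Å."""
    f = nu_cm * C_CM                 # phonon frequency in Hz
    lam_gaas = V_GAAS / f * 1e10     # acoustic wavelengths in Å
    lam_alas = V_ALAS / f * 1e10
    return 3.0 * lam_gaas / 4.0, lam_alas / 4.0, lam_gaas

d_gaas, d_alas, d_cav = design(20.0)
print(f"mirror {d_gaas:.1f}/{d_alas:.1f} Å, cavity {d_cav:.0f} Å")
```

With these assumed velocities the output is close to the $59.3\,$Å$/23.5\,$Å mirror and $79\,$Å spacer used in the example structure.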
[@Lacharmoise] When a large series of phonon microcavities are coupled one after the other, the discrete confined energy states form phonon bands that resemble the minibands in electronic SLs. In order to display Bloch oscillations, the energy of the $i$-th cavity must differ from that of the $(i-1)$-th by a constant value. Such a linear dependence of the phonon cavity-mode energy with position, analogous to an electric field for electrons, can be obtained by tuning the cavity widths. [@Malpuech] We thus consider a multilayer structure where each unit cell consists of an acoustic phonon mirror made of $(n+1/2)$ $\frac{\lambda}{4}/\frac{3\lambda}{4}$ periods of two materials with contrasting acoustic impedances (GaAs/AlAs in the examples discussed here), followed by a $\lambda$ cavity (GaAs). This unit cell is repeated $N_{c}$ times, with layer thicknesses increasing from the surface to the substrate so as to have a linear decrease of the cavity-mode energy in steps of $\Delta$. The stationary solutions of the acoustic waves in the proposed structure can be derived using a matrix method implementation of the elastic continuum model. [@Trigo] The calculations give $(i)$ the phonon field distribution in the different layers, $(ii)$ the phonon reflectivity and/or transmission, and $(iii)$ the variation along the structure of the energy bands associated with an infinite SL with the local unit cell. The latter are given by the condition $-1 \leq (a_{11} + a_{22})/2 \leq 1$, where $a_{11}$ and $a_{22}$ are the diagonal elements of the transfer matrix across each period of the structure. [@Malpuech] In Fig. \[fig1\] we present results for a $N_{c}=25$ period acoustic cavity structure. The first unit cell is made of a 2.5-period GaAs/AlAs ($59.3\AA/23.5\AA$) phonon mirror ($n=2$) and a GaAs spacer tuned to 20 cm$^{-1}$ ($79\AA$). The energy steps are given by $\Delta=0.15$ cm$^{-1}$, and the structure is limited on both sides by GaAs.
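The band condition $-1 \leq (a_{11}+a_{22})/2 \leq 1$ can be illustrated with a minimal transfer matrix acting on the (displacement, stress) vector across one mirror period. The sketch below uses the standard elastic-continuum layer matrix, not the exact matrices of the paper, and nominal densities and LA sound velocities for GaAs and AlAs (assumptions, not given in the text).

```python
import math

# Bloch-band criterion |(a11 + a22)/2| <= 1 for one 3λ/4 GaAs / λ/4 AlAs
# mirror period; layer matrices act on the (displacement, stress) vector.
# Densities and velocities are nominal literature values (assumed).
RHO_GAAS, V_GAAS = 5317.0, 4730.0    # kg/m^3, m/s (assumed)
RHO_ALAS, V_ALAS = 3760.0, 5650.0

def layer(rho, v, d, omega):
    k, Z = omega / v, rho * v        # wavevector and acoustic impedance
    c, s = math.cos(k * d), math.sin(k * d)
    return [[c, s / (Z * omega)], [-Z * omega * s, c]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def half_trace(omega, omega0):
    """(a11 + a22)/2 across one period designed for center frequency omega0."""
    d1 = 3.0 * (2.0 * math.pi * V_GAAS / omega0) / 4.0   # 3λ/4 GaAs
    d2 = (2.0 * math.pi * V_ALAS / omega0) / 4.0         # λ/4 AlAs
    M = matmul(layer(RHO_ALAS, V_ALAS, d2, omega),
               layer(RHO_GAAS, V_GAAS, d1, omega))
    return (M[0][0] + M[1][1]) / 2.0

w0 = 2.0 * math.pi * 20.0 * 2.9979e10    # 20 cm^-1 in rad/s
print(half_trace(w0, w0), half_trace(0.9 * w0, w0))
```

At the minigap center the half trace only barely exceeds 1 (about 1.01), consistent with the small GaAs/AlAs impedance mismatch mentioned in the text, while away from the gap it drops below 1 (allowed band).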
Panels $(a)$ to $(c)$ display, respectively, the phonon band structure (black regions represent “forbidden” energies), the phonon reflectivity, and the phonon displacement distribution as a function of position and energy. In the latter panel, calculated for phonons entering from the left, darker regions indicate larger acoustic phonon intensities. Several features should be highlighted in these figures. $(i)$ An acoustic phonon allowed band, originating from the coupled discrete cavity modes, is observed between two forbidden minigap bands (panel (a)). The energy of the bands decreases linearly with cavity number according to design. Three different spectral regions can be identified. Between 17.4 and 18.6 cm$^{-1}$ (shaded in Fig. \[fig1\]), phonons are confined in a spatial region limited by the top and bottom of the lower and upper minigap bands, respectively. On the other hand, below (above) 17.4 (18.6) cm$^{-1}$ a phonon entering from cavity $\#1$ will bounce back at the bottom of the lower (upper) minigap band and leave the sample. $(ii)$ The phonon reflectivity (panel (b)) displays a broad stop-band (between $\sim 15$ and $\sim 21.5$ cm$^{-1}$), basically determined by the superposition of the individual cavity minigaps. A series of dips modulate the reflectivity within this wide stop-band. Between the lower stop-band limit and $\sim 17.4$ cm$^{-1}$, and between $\sim 18.6$ cm$^{-1}$ and the upper stop-band limit, a series of features with varying energy spacing can be identified. These originate from phonon interferences determined by a propagation limited by the sample surface and by the minigap bands. These features are relatively weak and broad because of the small acoustic impedance mismatch between the top GaAs layer and the cavity structure. On the other hand, sharp reflection dips are observed in the region between $\sim 17.4$ and $\sim 18.6$ cm$^{-1}$.
These dips, which are equidistant and separated by $\Delta=0.15$ cm$^{-1}$, correspond to the coupling of external phonons, by tunneling through the minigap band, to the Wannier-Stark states confined within the structure. $(iii)$ This phononic WSL can also be identified by the well defined discrete phonon modes displayed in panel (c). The WSL is the spectral-domain counterpart of the BO. It is precisely in this spectral region where oscillations should appear in the time domain. The time and spatial variation of the acoustic phonon displacement field $U_g(z,t)$, created by a pulse described by a spectral function $g(\omega)$ and incident at $t=0$ at the GaAs-sample interface ($z=0$), can be evaluated using the scattering method described by Malpuech and Kavokin. [@Malpuech] Within this description, $U_g(z,t)=\frac{1}{2\pi}\int_{-\infty}^\infty u(z)g(\omega)exp(-i\omega t)d\omega$, where $u(z)$ are the stationary solutions of the elastic wave equation with frequency $\omega$ shown in Fig. \[fig1\](c). The time evolution of such wavepackets for the two different energies indicated in Fig. \[fig1\](a) is shown in Fig. \[fig2\]. For energies above or below the phonon WSL region (20 cm$^{-1}$ in the example shown), the incident pulse propagates within the sample up to a position where it is back-reflected by a minigap band and afterwards leaves the sample. On the contrary, when $\omega$ corresponds to the WSL energy region (18.3 cm$^{-1}$ in the displayed figure), a fraction of the pulse energy is back-reflected at the surface while another part enters the structure by tunneling through the minigap band, afterwards developing clear periodic oscillations within the structure. In order for these phonon BO to be observed, the FWHM of $g(\omega)$ should be larger than $\Delta$. For these calculations we have used a Gaussian distribution with $2\sigma=1.0$ cm$^{-1}$.
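The bandwidth condition just stated, together with the associated Bloch period $\tau_B=h/\Delta$, amounts to a two-line unit conversion; the sketch below checks both for the $\Delta=0.15$ cm$^{-1}$ step and the $2\sigma=1.0$ cm$^{-1}$ Gaussian used here.

```python
import math

# Unit check: the Bloch period tau_B = h/ΔE with ΔE = h c Δν̃ reduces to
# tau_B = 1/(c Δν̃) when the step Δ is given in cm^-1; the Gaussian pulse
# with 2σ = 1.0 cm^-1 indeed has FWHM > Δ, as required to excite the WSL.
C_CM = 2.9979e10                    # speed of light in cm/s
delta = 0.15                        # cavity-to-cavity energy step in cm^-1
tau_B = 1.0 / (C_CM * delta)        # Planck's constant cancels out
fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * 0.5   # sigma = 0.5 cm^-1
print(f"tau_B = {tau_B * 1e12:.0f} ps, FWHM = {fwhm:.2f} cm^-1")
```

The resulting Bloch period of a few hundred picoseconds sets the time scale on which the oscillations of Fig. \[fig2\] develop.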
On the other hand, the period of the oscillations (and consequently the length travelled by the pulse) is directly determined by $\Delta$ ($\tau_B=h/\Delta$). In what follows we show how Raman scattering and coherent phonon generation experiments can provide a direct probe of the phonon WSL and BO, respectively, in these devices. Raman scattering has been extensively shown to be a powerful tool to study phonons in semiconductor multilayers [@Jusserand] and, in particular, confined modes in acoustic cavities. [@Trigo; @Lacharmoise] In order to evaluate the Raman spectra, we use a photoelastic model. [@Trigo; @Jusserand] We analyze two experimental geometries, namely backscattering (BS, $k_S \sim -k_L$, $q \sim 2k_L$) and forward scattering (FS, $k_S \sim k_L$, $q \sim 0$). Here $k$ refers either to the scattered or laser wavevector, and $q$ is the transferred wavenumber. In Fig. 3 we present BS and FS spectra calculated for the sample described above, assuming laser excitation at 550 nm. For comparison purposes the corresponding phonon reflectivity is also shown. The Raman spectra display a complex series of peaks in the stop-band spectral region. We have verified that such rich spectra are a kind of sample fingerprint that can be used as a characterization tool. Interestingly, the BS and FS spectra display clear peaks and dips, respectively, at exactly the WSL energies. High resolution Raman set-ups working in the visible can discern spectral features with resolution better than 0.02 cm$^{-1}$, [@Pinan] thus providing a spectral-domain tool able to probe the underlying phonon WSL. Coherent phonon generation refers to the impulsive generation of phonons using high power ultrashort laser pulses. [@coherent] In the case of THz vibrations, femtosecond pulses are required. To the best of our knowledge the generation mechanism is still an open issue, and no complete theory is available to describe the processes involved.
We briefly describe next a model for pump-probe coherent phonon generation and detection based on a photoelastic coupling between light and acoustic phonons. [@Mariano] This mechanism is the only one active when pump and probe energies are below the gap. Above the gap other mechanisms can contribute, but we expect the main conclusions to remain essentially valid. Any arbitrary time and position dependent phonon displacement in the structure $w(z,t)$ can be expressed in terms of the phonon eigenstates as $w(z,t)=\int r_\omega u(z) sin(\omega t)d\omega$. Assuming that phonons are generated coherently through a photoelastic mechanism by a femtosecond pulse (modelled as $E_0(z,t) \propto \delta(t)exp(ik_Lz)$), the coefficients of the above expansion can be obtained as [@Mariano] $r_\omega=\frac{1}{\omega}\int_0^L p(z) \left|E_0(z)\right|^2 \frac{\partial u(z)}{\partial z} dz$. Here $L$ is the length of the sample, $p(z)$ is the photoelastic constant, which is assumed constant in each layer, and ${\partial u(z)}/{\partial z}$ is the strain associated with an eigenstate of energy $\omega$. We note that the FS ($q=0$) Raman cross section discussed above is proportional to $\left|r_\omega\right|^2$.[@Trigo] Once this excitation is generated, it evolves according to the time dependence $sin(\omega t)$ and can be detected by a delayed, lower power, probe pulse that senses the time dependent change of reflectivity. [@Thomsen] The change in reflectivity can be calculated as $\Delta r(t) \propto \int_0^\infty \Delta\epsilon(z,t)exp(2ikz)dz$, where the probe pulse has been assumed to be proportional to $e^{ikz}$. [@Thomsen] Introducing again the photoelastic coefficient to relate $\Delta\epsilon$ to the strain ${\partial w(z,t)}/{\partial z}$, the time dependent change in reflectivity can be written as $\Delta r(t) \propto \int_0^\infty p(z)\frac{\partial w(z,t)}{\partial z} exp(2ikz)dz$. If $p(z)$ is constant, this equation implies that only phonons with $q=2k$ can be probed.
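The $q=2k$ selection rule for a constant $p(z)$ can be illustrated numerically: the overlap integral between a strain wave and the probe phase factor is of order $L/2$ when the strain wavevector matches $2k$ and of order unity otherwise. The sketch below uses arbitrary units (an assumption for the demo), not the actual device parameters.

```python
import numpy as np

# Overlap |∫_0^L cos(qz) exp(2ikz) dz| for a constant photoelastic
# coefficient: it grows like L/2 at q = 2k and stays O(1) otherwise,
# i.e., only q = 2k strain components contribute to Δr(t).
L, k = 200.0, 1.0                    # arbitrary units (assumed)
z = np.linspace(0.0, L, 200001)
dz = z[1] - z[0]

def overlap(q):
    return abs(np.sum(np.cos(q * z) * np.exp(2j * k * z)) * dz)

matched = overlap(2.0 * k)           # strain wavevector equal to 2k
mismatched = overlap(2.5 * k)        # detuned strain wavevector
print(matched, mismatched)
```

The matched overlap exceeds the detuned one by roughly the factor $L$, which is why the detector SL effectively acts as a wavevector (and hence energy) filter.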
On the other hand, the detector’s $p(z)$ can be designed to access other excitations by backfolding the phonon dispersion. Comparing this equation with the Raman cross section, [@Trigo] it is easy to see that the observable phonon spectrum is related to the BS Raman scattering component of the detector structure. In order to generate the quasi-monoenergetic phonons required to induce BO, and to monitor the time evolution of the latter, we have conceived a monolithic source-sample-detector device (see the scheme in Fig. \[fig4\]). The first GaAs/AlAs SL (SL/s in Fig. \[fig4\]) acts as the phonon source and is designed to generate, upon excitation with a fs pulse above the gap, an elastic pulse with energy centered at 18.3 cm$^{-1}$ and width equal to $\sim 0.8$ cm$^{-1}$. [@coherent] It is made of 30 periods of $43.2\AA/51.5\AA$ GaAs/AlAs. The layer widths determine the energy of the generated pulse, while the number of periods determines, through finite-size effects, its width. The coefficient $r_\omega$ of the phonon pulse (or equivalently the FS Raman spectrum) generated in this SL is shown in the inset of Fig. \[fig4\]. Once generated, the coherent phonons propagate into the structure and act as the phonon source $g(\omega)$ for exciting Bloch oscillations. Between the SL source and the structure, a 200 nm GaAs buffer layer is introduced to screen the residual pump power, not absorbed in the SL, from impinging on the cavity structure and generating unwanted frequencies. The $g(\omega)$ phonon pulse, on the other hand, propagates through the GaAs layer basically unaltered. The cavity structure is identical to the one described above. Once within this structure, Bloch oscillations develop and part of the energy is lost to the substrate on each cycle. Their effect at the right-hand side of the sample is probed by the second SL (SL/d in Fig.
\[fig4\]), which acts as an energy-selective detector of the Bloch oscillations through their effect on the time variation of the reflectivity. [@Mariano] To keep this time variation simple, a second GaAs layer is introduced to stop the probe beam from being modulated also by the cavity structure. The BS Raman spectrum of the SL defines the detector’s bandwidth, which we have designed to include the pulse $g(\omega)$. It consists of a 20-period $38.1\AA/68.1\AA$ GaAs/AlAs SL. Its BS spectrum, calculated for a 750 nm probe pulse, is shown in the inset of Fig. \[fig4\]. The device is terminated by a thick ($> 1\mu$m) Ga$_{0.5}$Al$_{0.5}$As layer which acts both as a stop layer for chemically etching the GaAs substrate, and as a window for accessing the detector SL from the back. In Fig. \[fig4\] we present calculations of the reflectivity change as a function of time. Fast oscillations dominate the reflectivity, corresponding to the frequency of the coherently excited phonons (18.3 cm$^{-1}$). In addition, this fast component is amplitude modulated by an envelope whose frequency is determined precisely by the Bloch oscillation period. The Fourier transform of the time dependent reflectivity clearly shows the WSL frequencies within an envelope determined by the input source $g(\omega)$ and the detector bandwidth. In conclusion, we have extended concepts previously discussed in the context of electronic and photonic properties of solids to acoustic phonon physics. We have described semiconductor structures based on recently reported acoustic cavities and achievable with current growth technologies, displaying phononic Stark ladders and capable of sustaining Bloch oscillations. We have also shown how Raman scattering and coherent phonon generation provide spectral- and time-domain probes, respectively, of these acoustic phenomena.
Structures such as the one described here can be exploited to enhance the coupling between sound and other excitations (electrons and photons), and as the feedback high-Q resonator of a phonon laser. Moreover, engineered phonon potentials are not limited to linear dependencies that mimic electric fields but can take arbitrary shapes. This opens the way to novel phonon devices based on the discrete confined states of acoustic cavities. We acknowledge M. Trigo and B. Perrin for enlightening discussions and G. Malpuech for useful information concerning the porous silicon optical Bloch structures. AF also acknowledges support from the ONR (US). F. Bloch, Z. Phys. [**52**]{}, 555 (1928); C. Zener, Proc. R. Soc. London A [**145**]{}, 523 (1934). See, for example, J. Feldman [*et al.*]{}, Phys. Rev. B [**46**]{}, 7252 (1992); C. Waschke [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 3319 (1993). G. Malpuech and A. Kavokin, Semicond. Sci. Technol. [**16**]{}, R1 (2001), and references therein. R. Sapienza [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 263902 (2003); V. Agarwal [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 097401 (2004). M. Trigo [*et al.*]{}, Phys. Rev. Lett. [**89**]{}, 227402 (2002); see also J. M. Worlock and M. L. Roukes, Nature [**421**]{}, 802 (2003). W. Chen [*et al.*]{}, Philosophical Magazine B [**70**]{}, 687 (). V. Narayanamurti [*et al.*]{}, Phys. Rev. Lett. [**43**]{}, 2012 (1979). P. Lacharmoise [*et al.*]{}, Appl. Phys. Lett. [**84**]{}, 3274 (2004). B. Jusserand and M. Cardona, in [*Light Scattering in Solids V*]{}, edited by M. Cardona and G. Güntherodt (Springer, Heidelberg, 1989), p. 49. A two meter SOPRA double grating spectrograph can have a resolution around 0.02 cm$^{-1}$. A tandem Fabry-Perot Raman monochromator set-up can improve this performance down to 0.005 cm$^{-1}$. See, e.g., J. P. Pinan [*et al.*]{}, J. Chem. Phys. [**109**]{}, 5469 (1998). T. Dekorsy, G. C. Cho, and H. Kurz, in [*Light Scattering in Solids VIII*]{}, edited by M.
Cardona and G. Güntherodt (Springer, Heidelberg, 2000), p. 169. M. Trigo, T. Eckhause, and R. Merlin, private communication. C. Thomsen [*et al.*]{}, Phys. Rev. B [**34**]{}, 4129 (1986).
--- abstract: | In this paper we study existence of solutions for the Cauchy problem of the Debye-Hückel system with low regularity initial data. By using the Chemin-Lerner time-space estimate for the heat equation, we prove that there exists a unique local solution if the initial data belongs to the Besov space $\dot{B}^{s}_{p,q}(\mathbb{R}^{n})$ for $-\frac{3}{2}<s\leq-2+\frac{n}{2}$, $p=\frac{n}{s+2}$ and $1\leq q\leq \infty$, and furthermore, if the initial data is sufficiently small then the solution is global. This result improves the regularity index of the initial data space in previous results on this model. [**Keywords:**]{} Debye-Hückel system; low regularity; global existence. [**Mathematics Subject Classification 2010:** ]{} 35K45, 35Q99 author: - | Jihong Zhao$^{1}$,  Qiao Liu$^{2}$,  and Shangbin Cui$^{2}$[^1]\ \[0.2cm\] [$^{1}$ Department of Mathematics, Northwest A&F University, Yangling,]{}\ [Shaanxi 712100, People’s Republic of China]{}\ \[0.2cm\] [$^{2}$ Department of Mathematics, Sun Yat-sen University, Guangzhou, ]{}\ [Guangdong 510275, People’s Republic of China]{} title: '[Existence of Solutions for the Debye-Hückel System with Low Regularity Initial Data]{}' --- **Introduction** ================ In this paper we study the following Debye-Hückel system arising from the theory of electrolytes ([@DH23]): $$\label{eq1.1} \begin{cases} \partial_{t}v=\nabla\cdot(\nabla v-v\nabla \phi)\quad &\mbox{in}\ \ \mathbb{R}^n\times(0,\infty),\\ \partial_{t}w=\nabla\cdot(\nabla w+w\nabla\phi)\quad &\mbox{in}\ \ \mathbb{R}^n\times(0,\infty),\\ \Delta \phi=v-w\quad &\mbox{in}\ \ \mathbb{R}^n\times(0,\infty),\\ v(x,0)=v_0(x),\quad w(x,0)=w_0(x) \ \ &\mbox{in}\ \ \mathbb{R}^n, \end{cases}$$ where $v$ and $w$ denote the densities of the electron and the hole, respectively, in an electrolytes, and $\phi$ denotes the electric potential. 
Mathematical analysis of the Debye-Hückel system was first focused on initial boundary value problems in the 1980s, and some results related to the global existence, uniqueness and regularity of classical solutions and the asymptotic stability of stationary solutions were obtained by using the Green function, the Poincaré inequality and the standard maximum principle for equations of parabolic type; see [@G85], [@GG86], [@M74] and [@S83] for more details. In 1994, Biler, Hebisch and Nadzieja [@BHN94] considered the no-flux boundary problem of \eqref{eq1.1}, and studied global and local existence of weak solutions and convergence rate estimates of time-dependent solutions to stationary solutions; for further studies related to this topic we refer the reader to [@BMV04], [@BD00] and the references therein. In 1999, Karch [@K99] proved existence and uniqueness of solutions of the problem for initial data in the Besov space $\dot{B}^{s}_{p, \infty}(\mathbb{R}^n)$ with $-1<s<0$ and $p=\frac{n}{s+2}$. Note that similar results for initial data in the Lebesgue and Sobolev spaces were established only recently; see the work of Kurokiba and Ogawa [@KO08]. In [@OS08], Ogawa and Shimizu considered existence of global solutions of the problem for small initial data in a two-dimensional critical Hardy space. The purpose of this paper is to prove existence of solutions for \eqref{eq1.1} with initial data in the Besov space $\dot{B}^{s}_{p,q}(\mathbb{R}^{n})$ with indices $-\frac{3}{2}<s\leq-2+\frac{n}{2}$, $p=\frac{n}{s+2}$ and $1\leq q\leq \infty$. This result improves the corresponding result of Karch obtained in [@K99]. It shows that the Debye-Hückel system has a better property than the Navier-Stokes equations in regard to existence of solutions, since for the latter there is no existence result for initial data in a space with regularity index $s$ smaller than $-1$.
In fact, the nonlinear term of the system seems to be closer to that of the quadratic nonlinear heat equation ($\sim u^2$) than to that of the Navier-Stokes equations ($\sim u\cdot \nabla u$), and our main result (see Theorem \[th1.1\] below) also holds for the quadratic nonlinear heat equation and is even new for this equation. The main tools used to obtain our main result are the Chemin-Lerner space $\mathfrak{L}^{r}(0,T; \dot{B}^{-2+n/p+2/r}_{p,q})$ and some related estimates (see Definition 2.1 and Propositions 2.2 and 2.3 in Section 2). Note that from the third equation in (1.1) we have $$\label{eq1.2} \phi=(-\Delta)^{-1}(w-v):=E\ast(w-v),$$ where $E(x)=-\frac{1}{2\pi}\log|x|$ for $n=2$ and $E(x)=\frac{1}{4}\pi^{-n/2}\Gamma(\frac{n}{2}-1)|x|^{-(n-2)}$ for $n\ge 3$, so that we can eliminate $\phi$ from \eqref{eq1.1} and obtain $$\label{eq1.3} \begin{cases} \partial_{t}v-\Delta v=-\nabla\cdot(v\nabla(-\Delta)^{-1}(w-v))\quad &\mbox{in}\ \ \mathbb{R}^n\times(0,\infty),\\ \partial_{t}w-\Delta w=\nabla\cdot(w\nabla(-\Delta)^{-1}(w-v))\quad &\mbox{in}\ \ \mathbb{R}^n\times(0,\infty),\\ v(x,0)=v_0(x),\quad w(x,0)=w_0(x) \ \ &\mbox{in}\ \ \mathbb{R}^n. \end{cases}$$ Hence, we only need to consider this equivalent problem. We now give the precise statement of our main result; for simplicity, we use $(v, w)\in \mathcal{X}$ to denote $(v, w)\in \mathcal{X}\times\mathcal{X}$ for a Banach space $\mathcal{X}$. \[th1.1\] Let $n\geq2$, $1\leq q\leq \infty$ and $2\leq p<2n$. Suppose that $(v_{0}, w_{0})\in \dot{B}^{-2+n/p}_{p,q}(\mathbb{R}^{n})$. Then there exists $T>0$ such that the problem \eqref{eq1.3} has a unique solution $$\label{eq1.4} (v, w)\in \underset{1<r\leq\infty}{\cap}\mathfrak{L}^{r}(0, T; \dot{B}^{-2+n/p+2/r}_{p,q}(\mathbb{R}^{n})).$$ Moreover, if $(v_{0},w_{0})$ belongs to the closure of $\mathcal{S}(\mathbb{R}^n)$ in $\dot{B}^{-2+n/p}_{p,q}(\mathbb{R}^{n})$, then $(v, w)\in C([0,T), \dot{B}^{-2+n/p}_{p,q}(\mathbb{R}^{n}))$.
In addition, there exists $\varepsilon>0$ such that if $\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}\leq\varepsilon$, then the above assertion holds for $T=\infty$, i.e., the solution $(v, w)$ is global. Furthermore, if $(v,w)$ and $(\tilde{v}, \tilde{w})$ are two solutions of with initial data $(v_0, w_0)$ and $(\tilde{v}_0, \tilde{w}_0)$, respectively, then there exists a universal constant $C>0$ such that for any $1<r\leq\infty$, we have $$\label{eq1.5} \|(v-\tilde{v}, w-\tilde{w})\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{-2+n/p+2/r}_{p,q})}\le C\|(v_0-\tilde{v}_0, w_0-\tilde{w}_0)\|_{\dot{B}^{-2+n/p}_{p,q}}.$$ It is easy to verify that is invariant under the scaling $v_{\lambda}(x,t)=\lambda^2 v(\lambda x, \lambda^2t)$, $w_{\lambda}(x,t)=\lambda^2 w(\lambda x, \lambda^2t)$ and $\phi_{\lambda}(x,t)=\phi(\lambda x, \lambda^2t)$. Hence, by a standard scaling argument, we have the following existence result for [*self-similar solutions*]{} of : \[co1.2\] Let $n\ge2$ and $\frac{n}{2}\leq p<2n$. Suppose that $(v_{0}, w_{0})\in \dot{B}^{-2+n/p}_{p,\infty}(\mathbb{R}^{n})$ and $\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,\infty}}\leq\varepsilon$, where $\varepsilon$ is as above. Suppose furthermore that $v_{0}, w_{0}$ are homogeneous of degree $-2$, i.e., they satisfy the relations $v_{0}(x)=\lambda^{2}v_{0}(\lambda x)$ and $w_{0}(x) =\lambda^{2}w_{0}(\lambda x)$ for all $x\in\mathbb{R}^{n}$ and $\lambda>0$. Then the unique global solution ensured by Theorem \[th1.1\] is a self-similar solution, i.e., it satisfies the following condition: $$v(x,t)=\lambda^2 v(\lambda x, \lambda^2t),\ \ w(x,t)=\lambda^2 w(\lambda x, \lambda^2t),\ \ \phi(x,t)=\phi(\lambda x, \lambda^2t).$$ In the next section we give the proof of Theorem 1.1. The proof of Theorem 1.1 ======================== We first recall some basic notions and preliminary results used in the proof of Theorem 1.1. Let $\mathcal{S}(\mathbb{R}^{n})$ be the Schwartz space and $\mathcal{S}'(\mathbb{R}^{n})$ be its dual.
Given $f\in\mathcal{S}(\mathbb{R}^{n})$, its Fourier transform $\mathcal{F}(f)=\widehat{f}$ is defined by $$\mathcal{F}(f)(\xi)=\widehat{f}(\xi)=\frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^{n}}f(x)e^{-ix\cdot\xi}dx.$$ Let $\mathcal{D}_{1}=\{\xi\in\mathbb{R}^{n},\ |\xi|\leq\frac{4}{3}\}$ and $\mathcal{D}_{2}=\{\xi\in\mathbb{R}^{n},\ \frac{3}{4}\leq |\xi|\leq\frac{8}{3}\}$. Choose two non-negative functions $\phi, \psi\in\mathcal{S}(\mathbb{R}^{n})$ supported, respectively, in $\mathcal{D}_{2}$ and $\mathcal{D}_{1}$ such that $$\begin{aligned} \psi(\xi)+\sum_{j\geq0}\phi(2^{-j}\xi)=1, \ \ \xi\in\mathbb{R}^{n},\\ \sum_{j\in\mathbb{Z}}\phi(2^{-j}\xi)=1, \ \ \xi\in\mathbb{R}^{n}\backslash\{0\}.\end{aligned}$$ We denote $\phi_{j}(\xi)=\phi(2^{-j}\xi)$, $h=\mathcal{F}^{-1}\phi$ and $\tilde{h}=\mathcal{F}^{-1}\psi$, where $\mathcal{F}^{-1}$ is the inverse Fourier transform. Then the dyadic blocks $\Delta_{j}$ and $S_{j}$ can be defined as follows: $$\begin{aligned} \Delta_{j}f=\phi(2^{-j}D)f=2^{jn}\int_{\mathbb{R}^{n}}h(2^{j}y)f(x-y)dy,\\ S_{j}f=\psi(2^{-j}D)f=2^{jn}\int_{\mathbb{R}^{n}}\tilde{h}(2^{j}y)f(x-y)dy.\end{aligned}$$ Here $D=(D_1, D_2, \cdots, D_n)$ and $D_j=i^{-1}\partial_{x_j}$ ($i^{2}=-1$). The set $\{\Delta_{j}, S_{j}\}_{j\in\mathbb{Z}}$ is called the Littlewood-Paley decomposition. Formally, $\Delta_{j}=S_{j+1}-S_{j}$ is a frequency projection to the annulus $\{|\xi|\sim 2^{j}\}$, and $S_{j}=\sum_{k\leq j-1}\Delta_{k}$ is a frequency projection to the ball $\{|\xi|\leq 2^{j}\}$. For more details, we refer the reader to [@C98] and [@L02]. Let $\mathcal{Z}(\mathbb{R}^{n})=\big\{f\in \mathcal{S}(\mathbb{R}^{n}): \ \ \partial^{\alpha}\widehat{f}(0)=0, \ \forall\alpha\in(\mathbb{N}\cup\{0\})^{n}\big\}$, and denote by $\mathcal{Z}'(\mathbb{R}^{n})$ the dual of it.
Recall that for $s\in\mathbb{R}$ and $(p,q)\in[1, \infty]\times[1, \infty]$, the homogeneous Besov space $\dot{B}^{s}_{p,q}(\mathbb{R}^{n})$ is defined by $$\dot{B}^{s}_{p,q}(\mathbb{R}^{n})=\big\{f\in \mathcal{Z}'(\mathbb{R}^{n}):\ \ \|f\|_{\dot{B}^{s}_{p,q}}<\infty\big\},$$ where $$\|f\|_{\dot{B}^{s}_{p,q}}= \begin{cases} \Big(\sum_{j\in\mathbb{Z}}2^{jsq}\|\Delta_{j}f\|_{L^{p}}^{q}\Big)^{1/q}\ \ \text{for}\ \ 1\leq q<\infty,\\ \sup_{j\in\mathbb{Z}}2^{js}\|\Delta_{j}f\|_{L^{p}}\ \ \ \ \ \ \ \ \ \text{for}\ \ q=\infty. \end{cases}$$ It is well-known that if either $s<\frac{n}{p}$ or $s=\frac{n}{p}$ and $q=1$, then $(\dot{B}^{s}_{p,q}(\mathbb{R}^{n}),\|\cdot\|_{\dot{B}^{s}_{p,q}})$ is a Banach space. Moreover, if we denote $D^s f=\mathcal{F}^{-1}(|\xi|^{s}\mathcal{F}(f))$, then for any function $f$ defined on $\mathbb{R}^{n}\backslash\{0\}$ which is smooth and homogeneous of degree $k$, the corresponding pseudo-differential operator $f(D)$ is a bounded linear map from $\dot{B}^{s}_{p,q}(\mathbb{R}^{n})$ to $\dot{B}^{s-k}_{p,q}(\mathbb{R}^{n})$. Besides, there exists a constant $C$ depending only on the dimension $n$ such that for any $s>0$, $j\in\mathbb{Z}$ and $1\leq p\leq q\leq\infty$, there holds the following Bernstein inequality: $$\label{eq2.1} {{\rm supp}}\widehat{f}\subset\{|\xi|\leq2^{j}\}\ \ \Longrightarrow\ \ \|D^{s}f\|_{L^{q}}\leq C2^{js+jn(1/p-1/q)}\|f\|_{L^{p}}.$$ We now recall the definition of the Chemin-Lerner space $\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q}(\mathbb{R}^{n}))$: \[de2.1\] [*([@C98])*]{} Let $s\in \mathbb{R}$, $1\leq p, q, r\leq\infty$, and $0<T\leq\infty$ be fixed. The Chemin-Lerner space is defined by $$\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q}(\mathbb{R}^{n})):=\{f\in \mathcal{D}'((0,T), \mathcal{Z}'(\mathbb{R}^{n})):\ \ \|f\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q}(\mathbb{R}^{n}))}<\infty\},$$ where $ \|f\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q})}=\big(\sum_{j\in\mathbb{Z}}2^{jsq}\|\Delta_{j}f\|_{L^{r}(0,T; L^{p})}^{q}\big)^{1/q}. 
$ We define the usual space $L^{r}(0,T; \dot{B}^{s}_{p,q}(\mathbb{R}^{n}))$ associated with the norm $$\|f\|_{L^{r}(0,T; \dot{B}^{s}_{p,q})}=\Big(\int_{0}^{T}\Big(\sum_{j\in\mathbb{Z}}2^{jsq}\|\Delta_{j}f\|_{ L^{p}}^{q}\Big)^{r/q}dt\Big)^{1/r}.$$ By the Minkowski inequality, it is readily verified that $$\begin{cases} \|f\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q})}\leq\|f\|_{L^{r}(0,T; \dot{B}^{s}_{p,q})} \ \ \ \text{if}\ \ \ r\leq q,\\ \|f\|_{L^{r}(0,T; \dot{B}^{s}_{p,q})}\leq \|f\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s}_{p,q})} \ \ \ \text{if} \ \ \ q\leq r. \end{cases}$$ In our discussion we shall use two basic results related to this space. The first one is concerned with the product of two functions in this space and reads as follows: \[le2.2\] [*([@D05; @RS96])*]{} Let $1\leq p$, $q$, $r$, $r_{1}$, $r_{2}\leq \infty$, $s_{1}$, $s_{2}<\frac{n}{p}$, $s_{1}+s_{2}>0$ and $\frac{1}{r}=\frac{1}{r_{1}}+\frac{1}{r_{2}}$. Then there exists a positive constant $C$ depending only on $s_{1}, s_{2}, p, q, r, r_{1}, r_{2}$ and $n$ such that $$\label{eq2.2} \|fg\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s_{1}+s_{2}-n/p}_{p, q})}\leq C\|f\|_{\mathfrak{L}^{r_{1}}(0,T; \dot{B}^{s_{1}}_{p, q})}\|g\|_{\mathfrak{L}^{r_{2}}(0,T; \dot{B}^{s_{2}}_{p, q})}.$$ The proof of this result is a simple application of the following fundamental result concerning the product of two functions in the homogeneous Besov spaces: Let $1\leq p,q\leq\infty$, $s_1,s_2<\frac{n}{p}$ and $s_1+s_2>0$. Then there exists a constant $C$ depending only on $p, q, s_1, s_2$ and $n$ such that $$\|fg\|_{\dot{B}^{s_1+s_2-n/p}_{p,q}}\leq C\|f\|_{\dot{B}^{s_1}_{p,q}}\|g\|_{\dot{B}^{s_2}_{p,q}}.$$ For details of the proof, we refer the reader to [@D05] and [@RS96]. The second one, whose proof can be found in e.g.
[@D05], is concerned with the Cauchy problem of the heat equation: $$\label{eq2.3} \begin{cases} \frac{\partial u}{\partial t}-\Delta u= f(x,t), \ \ x\in\mathbb{R}^{n}, \ t>0,\\ u(x,0)=u_{0}(x), \ \ x\in\mathbb{R}^{n}. \end{cases}$$ \[pro2.3\] [*([@D05])*]{} Let $s\in \mathbb{R}$, $1\leq p,q,r_1\leq\infty$ and $0<T\leq\infty$. Assume that $u_{0}\in \dot{B}^{s}_{p,q}(\mathbb{R}^{n})$ and $f\in\mathfrak{L}^{r_1}(0,T; \dot{B}^{s+2/r_1-2}_{p,q}(\mathbb{R}^{n}))$. Then has a unique solution $u\in\underset{r_1\leq r\leq\infty}{\cap}\mathfrak{L}^{r}(0,T; \dot{B}^{s+2/r}_{p,q}(\mathbb{R}^{n}))$. In addition, there exists a constant $C>0$ depending only on $n$ such that for any $r_1\leq r\leq\infty$, we have $$\label{eq2.4} \|u\|_{\mathfrak{L}^{r}(0,T; \dot{B}^{s+2/r}_{p,q})}\leq C\big(\|u_{0}\|_{\dot{B}^{s}_{p,q}}+\|f\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{s+2/r_1-2}_{p,q})}\big).$$ Next we recall an existence and uniqueness result for an abstract operator equation in a generic Banach space. For the proof we refer the reader to Lemarié-Rieusset [@L02]. \[pro2.4\] [*([@L02])*]{} Let $\mathcal{X}$ be a Banach space and $\mathbf{B}:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{X}$ a bounded bilinear operator, $\|\cdot\|_{\mathcal{X}}$ being the $\mathcal{X}$-norm. Assume that for any $u_{1},u_{2}\in \mathcal{X}$, we have $\|\mathbf{B}(u_{1},u_{2})\|_{\mathcal{X}}\leq C_{0}\|u_{1}\|_{\mathcal{X}}\|u_{2}\|_{\mathcal{X}}$. Then for any $y\in \mathcal{X}$ such that $\|y\|_{\mathcal{X}}\leq \varepsilon<\frac{1}{4C_{0}}$, the equation $u=y+\mathbf{B}(u,u)$ has a solution $u$ in $\mathcal{X}$.
Moreover, this solution is the only one such that $\|u\|_{\mathcal{X}}\leq 2\varepsilon$, and depends continuously on $y$ in the following sense: if $\|\widetilde{y}\|_{\mathcal{X}}\leq \varepsilon$, $\widetilde{u}=\widetilde{y}+\mathbf{B}(\widetilde{u},\widetilde{u})$ and $\|\widetilde{u}\|_{\mathcal{X}}\leq 2\varepsilon$, then $\|u-\widetilde{u}\|_{\mathcal{X}}\leq \frac{1}{1-4\varepsilon C_{0}}\|y-\widetilde{y}\|_{\mathcal{X}}$. We are now ready to give the proof of Theorem 1.1. Let $p$ and $q$ be as in Theorem 1.1, i.e., $2\leq p<2n$ and $1\leq q\leq\infty$, and let $r$ be as in , i.e., $1<r\leq\infty$. We choose a number $2<r_1\leq2r$ such that $\frac{2}{r_1}+\frac{n}{p}>\frac{3}{2}$. For $T>0$ to be specified later, we set $\mathcal{X}_{T}=\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p+2/r_1}_{p,q}(\mathbb{R}^{n}))$. Given $(v,w)\in \mathcal{X}_{T}$, we define $\mathcal{G}(v,w)=(\bar{v},\bar{w})$ to be the solution of the following initial value problem: $$\begin{aligned} \label{eq2.5} &\partial_{t}\bar{v}-\Delta \bar{v}=-\nabla\cdot(v\nabla(-\Delta)^{-1}(w-v)), \ \ \ \bar{v}(x,0)=v_{0}(x),\\ \label{eq2.6} &\partial_{t}\bar{w}-\Delta \bar{w}=\nabla\cdot(w\nabla(-\Delta)^{-1}(w-v)), \ \ \ \bar{w}(x,0)=w_{0}(x).\end{aligned}$$ Obviously, $(v,w)$ is a solution of if and only if it is a fixed point of $\mathcal{G}$. \[le2.5\] Let $(v, w)\in \mathcal{X}_{T}$. Then $(\bar{v}, \bar{w})\in \mathcal{X}_{T}$. Moreover, there exists a constant $C_0>0$ such that $$\begin{aligned} \label{eq2.7} &\|\bar{v}\|_{\mathcal{X}_{T}}\leq \|e^{t\Delta}v_{0}\|_{\mathcal{X}_{T}}+C_0\|(v,w)\|_{\mathcal{X}_{T}}^{2},\\ \label{eq2.8} &\|\bar{w}\|_{\mathcal{X}_{T}}\leq \|e^{t\Delta}w_{0}\|_{\mathcal{X}_{T}}+C_0\|(v,w)\|_{\mathcal{X}_{T}}^{2}.\end{aligned}$$ Here $e^{t\Delta}$ is the heat operator with kernel $G(x,t)=(4\pi t)^{-n/2}\exp(-\frac{|x|^2}{4t})$. 
By the Duhamel principle, is equivalent to the following integral equation: $$\bar{v}(t)=e^{t\Delta}v_{0}-\int_{0}^{t}e^{(t-\tau)\Delta}\nabla\cdot(v\nabla(-\Delta)^{-1}(w-v))(\tau)d\tau.$$ Since $2\leq p<2n$, $2<r_1<\infty$ and $\frac{n}{p}+\frac{2}{r_1}>\frac{3}{2}$, by choosing $s_{1}=-2+\frac{n}{p}+\frac{2}{r_1}$ and $s_{2}=-1+\frac{n}{p}+\frac{2}{r_1}$ in Lemma \[le2.2\] we get $$\begin{aligned} \label{eq2.9} \|\nabla\cdot(&v\nabla(-\Delta)^{-1}(w-v))\|_{\mathfrak{L}^{r_1/2}(0,T; \dot{B}^{-4+n/p+4/r_1}_{p,q})}\nonumber\\&\leq C\|v\nabla(-\Delta)^{-1}(w-v)\|_{\mathfrak{L}^{r_1/2}(0,T; \dot{B}^{-3+n/p+4/r_1}_{p,q})}\nonumber\\ &\leq C\|v\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p+2/r_1}_{p,q})}\|\nabla(-\Delta)^{-1}(w-v)\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-1+n/p+2/r_1}_{p,q})}\nonumber\\ &\leq C\|(v,w)\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p+2/r_1}_{p,q})}^{2}.\end{aligned}$$ Hence, by using Proposition \[pro2.3\] we conclude that there exists a positive constant $C_{0}$ such that $$\begin{aligned} \label{eq2.10} \|\bar{v}\|_{\mathcal{X}_{T}}&\leq \|e^{t\Delta}v_{0}\|_{\mathcal{X}_{T}}+C\|\nabla\cdot(v\nabla(-\Delta)^{-1}(w-v))\|_{\mathfrak{L}^{r_1/2}(0,T; \dot{B}^{-4+n/p+4/r_1}_{p,q})}\nonumber\\ &\leq \|e^{t\Delta}v_{0}\|_{\mathcal{X}_{T}}+C_{0}\|(v,w)\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p+2/r_1}_{p,q})}^{2}.\end{aligned}$$ This proves . The proof of is similar. [*Proof of Theorem 1.1*]{}:  The above lemma ensures that $\mathcal{G}$ is well-defined and maps $\mathcal{X}_{T}$ into itself. Moreover, from and we see that for any $(v,w)\in \mathcal{X}_{T}$ and $(\bar{v},\bar{w})= \mathcal{G}(v,w)$, $$\begin{aligned} \label{eq2.11} \|(\bar{v},\bar{w})\|_{\mathcal{X}_{T}}\leq \|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{T}} + C_{0}\|(v,w)\|_{\mathcal{X}_{T}}^2.\end{aligned}$$ **Existence**.  We first prove global existence for small initial data. For this purpose we choose $T=\infty$.
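The fixed-point mechanism of Proposition \[pro2.4\] that drives this step can be illustrated in the simplest possible setting. The following sketch (not part of the paper) takes $\mathcal{X}=\mathbb{R}$ and $\mathbf{B}(u_{1},u_{2})=C_{0}u_{1}u_{2}$, so that $u=y+\mathbf{B}(u,u)$ becomes a quadratic equation whose small root $u=\frac{1-\sqrt{1-4C_{0}y}}{2C_{0}}$ is reached by Picard iteration whenever $4C_{0}|y|<1$:

```python
# Scalar toy model of Proposition 2.4: solve u = y + C0*u^2 by the Picard
# iteration u_{k+1} = y + C0*u_k^2, starting from u_0 = y.
# (The choice X = R and B(u1, u2) = C0*u1*u2 is an illustrative assumption,
# not the setting of the paper, where X is a Chemin-Lerner space.)
import math

def picard(y, C0, n_iter=100):
    u = y
    for _ in range(n_iter):
        u = y + C0 * u * u
    return u

C0, eps = 1.0, 0.2           # 4*C0*eps = 0.8 < 1, as the proposition requires
u = picard(eps, C0)
u_exact = (1.0 - math.sqrt(1.0 - 4.0 * C0 * eps)) / (2.0 * C0)
print(u - u_exact)           # the iterate converges to the small root
print(abs(u) <= 2 * eps)     # and stays in the ball of radius 2*eps
```

For $4C_{0}|y|>1$ the iteration diverges, which mirrors the smallness condition imposed on $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{\infty}}$ in the existence proof.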
By Proposition \[pro2.4\] and it is easy to see that if $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{\infty}}\leq\varepsilon$ and $\varepsilon>0$ is so small that $4C_0\varepsilon\leq 1$, then $\mathcal{G}$ has a fixed point in the closed ball $\|(v,w)\|_{\mathcal{X}_{\infty}}\leq 2\varepsilon$ in $\mathcal{X}_{\infty}$. By Proposition \[pro2.3\] we see that the condition $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{\infty}}\leq\varepsilon$ is satisfied if $\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}$ is small enough. Indeed, by Proposition \[pro2.3\], there exists a positive constant $C_{1}$ depending only on $n$ such that $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{\infty}}\leq C_{1}\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}$. Hence, if we assume that $\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}\leq C_1^{-1}\varepsilon$, then we have $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{\infty}}\leq \varepsilon$. This proves global existence for small initial data. Next we prove local existence for large initial data. For this purpose we split $v_{0}$ into a sum as follows: $\widehat{v}_{0}(\xi)=\widehat{v}_{0}1_{\{|\xi|>2^{N}\}}+\widehat{v}_{0}1_{\{|\xi|\leq2^{N}\}}: =\widehat{v_{01}}+\widehat{v_{02}}$, where $1_{\mathcal{D}}$ represents the characteristic function on the domain $\mathcal{D}$. Similarly, we split $w_{0}$ as $\widehat{w_{0}}=\widehat{w_{01}}+\widehat{w_{02}}$. Since $2\leq p<2n$, it is easy to see that if $N\in\mathbb{Z}^{+}$ is sufficiently large then $C_{1}\|(v_{01},w_{01})\|_{\dot{B}^{-2+n/p}_{p,q}}\leq \frac{1}{2}\varepsilon$. 
Choosing such an $N$ and fixing it, we get $$\label{eq2.12} \|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{T}}\leq \frac{1}{2}\varepsilon+\|(e^{t\Delta}v_{02}, e^{t\Delta}w_{02})\|_{\mathcal{X}_{T}}.$$ Applying the Bernstein inequality, $$\begin{aligned} \|&(e^{t\Delta}v_{02}, e^{t\Delta}w_{02})\|_{\mathcal{X}_{T}}=\|(e^{t\Delta}v_{02}, e^{t\Delta}w_{02})\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p+2/r_1}_{p,q})}\\ &\lesssim 2^{(2N)/r_1}\|(e^{t\Delta}v_{02}, e^{t\Delta}w_{02})\|_{\mathfrak{L}^{r_1}(0,T; \dot{B}^{-2+n/p}_{p,q})}\leq C_{2}2^{(2N)/r_1}T^{1/r_1}\|(v_{0}, w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}.\end{aligned}$$ Hence, if we choose $T$ small enough such that $C_{2}2^{(2N)/r_1}T^{1/r_1}\|(v_{0},w_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}\leq\frac{1}{2}\varepsilon$, then $\|(e^{t\Delta}v_{02}$, $e^{t\Delta}w_{02})\|_{\mathcal{X}_{T}}\leq \frac{\varepsilon}{2}$. This result together with yields that $\|(e^{t\Delta}v_{0}, e^{t\Delta}w_{0})\|_{\mathcal{X}_{T}}\leq\varepsilon$. By applying Proposition \[pro2.4\] again, we obtain a fixed point of $\mathcal{G}$ in the closed ball $\|(v,w)\|_{\mathcal{X}_{T}}\leq 2\varepsilon$ in $\mathcal{X}_{T}$, which concludes the proof of local existence. **Regularity**.  Note that if $(v,w)\in \mathcal{X}_{T}$ is a solution of , then we can proceed in the same way as in the proof of Lemma \[le2.5\] to obtain that $$\nabla\cdot(v\nabla(-\Delta)^{-1}(w-v)), \nabla\cdot(w\nabla(-\Delta)^{-1}(w-v))\in \mathfrak{L}^{r_1/2}(0,T; \dot{B}^{-4+n/p+4/r_1}_{p,q}(\mathbb{R}^{n})).$$ By Proposition \[pro2.3\], for any $\frac{r_1}{2}\leq r\leq\infty$ we have $(v, w)\in \mathfrak{L}^{r}(0,T; \dot{B}^{-2+n/p+2/r}_{p,q}(\mathbb{R}^{n}))$. Moreover, if $(v_{0},w_{0})$ belongs to the closure of $\mathcal{S}(\mathbb{R}^n)$ in the space $\dot{B}^{-2+n/p}_{p,q}(\mathbb{R}^{n})$, then $(v, w)\in C([0,T), \dot{B}^{-2+n/p}_{p,q}(\mathbb{R}^{n}))$. **Uniqueness**.  
Let $(v,w)$ and $(\tilde{v},\tilde{w})$ be two solutions of in $\mathcal{X}_{T}$ associated with the initial data $(v_{0},w_{0})$ and $(\tilde{v}_{0}, \tilde{w}_{0})$, respectively. Set $V=v-\tilde{v}$, $W=w-\tilde{w}$. Then $(V,W)$ satisfies the following equations: $$\begin{cases} &\partial_{t}V-\Delta V=-\nabla\cdot(V\nabla(-\Delta)^{-1}(w-v))-\nabla\cdot(\tilde{v}\nabla(-\Delta)^{-1}(W-V)),\\ &\partial_{t}W-\Delta W=\nabla\cdot(W\nabla(-\Delta)^{-1}(w-v))+\nabla\cdot(\tilde{w}\nabla(-\Delta)^{-1}(W-V)),\\ &V(x,0)=V_{0}(x)=v_{0}(x)-\tilde{v}_{0}(x),\ \ W(x,0)=W_{0}(x)=w_{0}(x)-\tilde{w}_{0}(x). \end{cases}$$ Proceeding in the same way as in the proof of Lemma \[le2.5\], we can prove that $$\begin{aligned} \|-\nabla\cdot&(V\nabla(-\Delta)^{-1}(w-v))-\nabla\cdot(\tilde{v}\nabla(-\Delta)^{-1}(W-V))\|_{\mathcal{X}_{T}}\\ &\leq C \big(\|v\|_{\mathcal{X}_{T}}+\|\tilde{v}\|_{\mathcal{X}_{T}}+\|w\|_{\mathcal{X}_{T}}\big) \|(V,W)\|_{\mathcal{X}_{T}}\\ \|\nabla\cdot&(W\nabla(-\Delta)^{-1}(w-v))+\nabla\cdot(\tilde{w}\nabla(-\Delta)^{-1}(W-V))\|_{\mathcal{X}_{T}}\\ &\leq C \big(\|v\|_{\mathcal{X}_{T}}+\|w\|_{\mathcal{X}_{T}}+\|\tilde{w}\|_{\mathcal{X}_{T}}\big) \|(V,W)\|_{\mathcal{X}_{T}}.\end{aligned}$$ Hence, by Proposition \[pro2.3\], $$\begin{aligned} \|(V,W )\|_{\mathcal{X}_{T}}&\leq C_{1}\|(V_{0}, W_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}\\&+ C_{0} \big(\|v\|_{\mathcal{X}_{T}}+\|\tilde{v}\|_{\mathcal{X}_{T}}+\|w\|_{\mathcal{X}_{T}}+\|\tilde{w}\|_{\mathcal{X}_{T}}\big) \|(V,W)\|_{\mathcal{X}_{T}}.\end{aligned}$$ Let $M(T):=C_{0}\big(\|v\|_{\mathcal{X}_{T}}+\|\tilde{v}\|_{\mathcal{X}_{T}} +\|w\|_{\mathcal{X}_{T}}+\|\tilde{w}\|_{\mathcal{X}_{T}}\big)$. By absolute continuity of the Lebesgue integral, we have that $M(T)$ converges to zero as $T\to 0^+$. 
Hence, if we choose $T_{1}$ sufficiently small such that $M(T_{1})\leq\frac{1}{2}$, then $$\|(V,W )\|_{\mathcal{X}_{T}}\leq 2C_{1}\|(V_{0}, W_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}.$$ Repeating this argument step by step on the intervals $[0,T_{1})$, $[T_{1}, 2T_{1})$, $\ldots$, we finally get, after finitely many steps, a constant $C=C_T$ such that $\|(V,W)\|_{\mathcal{X}_{T}}\leq C\|(V_{0}, W_{0})\|_{\dot{B}^{-2+n/p}_{p,q}}$. This proves , which implies the uniqueness of solutions. The proof of Theorem \[th1.1\] is complete. **Acknowledgment.** This work is supported by the National Natural Science Foundation of China under Grant No. 10771223. [00]{} N. Ben Abdallah, F. Méhats, N. Vauchelet, A note on the long time behavior for the drift-diffusion-Poisson system, C. R. Math. Acad. Sci. Paris, 339 (10) (2004) 683–688. P. Biler, J. Dolbeault, Long time behavior of solutions to Nernst-Planck and Debye-Hückel drift-diffusion systems, Ann. Henri Poincaré, 1 (2000) 461–472. P. Biler, W. Hebisch, T. Nadzieja, The Debye system: existence and large time behavior of solutions, Nonlinear Anal., 23 (1994) 1189–1209. J.-Y. Chemin, *Perfect Incompressible Fluids*, Oxford Lecture Series in Mathematics and its Applications, vol. 14, The Clarendon Press, Oxford University Press, New York, 1998. R. Danchin, *Fourier Analysis Methods for PDE’s*, 2005, http://perso-math.univ-mlv.fr/users/danchin.raphael/courschine.pdf. P. Debye, E. Hückel, Zur Theorie der Elektrolyte, II: Das Grenzgesetz für die elektrische Leitfähigkeit, Phys. Z., 24 (1923) 305–325. H. Gajewski, On existence, uniqueness and asymptotic behavior of solutions of the basic equations for carrier transport in semiconductors, Z. Angew. Math. Mech., 65 (1985) 101–108. H. Gajewski, K. Gröger, On the basic equations for carrier transport in semiconductors, J. Math. Anal. Appl., 113 (1986) 12–35. G. Karch, Scaling in nonlinear parabolic equations, J. Math. Anal. Appl., 234 (1999) 534–558. M. Kurokiba, T.
Ogawa, Well-posedness for the drift-diffusion system in $L^{p}$ arising from the semiconductor device simulation, J. Math. Anal. Appl., 342 (2008) 1052–1067. P.-G. Lemarié-Rieusset, *Recent Developments in the Navier-Stokes Problem*, Research Notes in Mathematics, Chapman & Hall/CRC, 2002. M. S. Mock, An initial value problem from semiconductor device theory, SIAM J. Math. Anal., 5 (1974) 597–612. T. Ogawa, S. Shimizu, The drift-diffusion system in two-dimensional critical Hardy space, J. Funct. Anal., 255 (2008) 1107–1138. T. Runst, W. Sickel, *Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations*, de Gruyter Series in Nonlinear Analysis and Applications, vol. 3. Walter de Gruyter & Co.: Berlin, 1996. S. Selberherr, *Analysis and simulation of semiconductor devices*, Springer Verlag, 1983. [^1]: E-mail: zhaojihong2007@yahoo.com.cn; liuqao2005@163.com; cuisb3@yahoo.com.cn.
--- abstract: 'The linear space of rooted forests admits two graded bialgebra structures. The first is defined by A. Connes and D. Kreimer using admissible cuts, and the second is defined by D. Calaque, K. Ebrahimi-Fard and the second author using contraction of trees. In this article we define the doubling of these two spaces. We construct two bialgebra structures on these spaces which are in interaction, as well as two associative products. We also show that these two bialgebras verify a commutative diagram similar to the diagram verified by D. Calaque, K. Ebrahimi-Fard and the second author in the case of the rooted trees Hopf algebra, and by the second author in the case of cycle-free oriented graphs.' address: - '[Laboratoire de mathématiques physique fonctions spéciales et applications, université de sousse, rue Lamine Abassi 4011 H. Sousse, Tunisie]{}' - 'Université Blaise Pascal, C.N.R.S.-UMR 6620, 63177 Aubière, France' author: - Mohamed Belhaj Mohamed - Dominique Manchon date: May 2016 title: ' [Doubling bialgebras of rooted trees]{}' --- **MSC Classification**: 05C05, 16T05, 16T10, 16T15, 16T30. **Keywords**: Bialgebras, Hopf algebras, Comodules, Rooted trees. Introduction ============ Rooted trees appear in the work of Cayley [@ca] in the context of differential equations. They are used in an essential way in the work of Butcher [@bu], Grossman and Larson [@gl], Munthe-Kaas and Wright [@mw] in the field of numerical analysis. They also appear in the context of renormalization in perturbative quantum field theory in the works of A. Connes and D. Kreimer [@ad98; @A.D2000; @DK98], D. Calaque, K. Ebrahimi-Fard, D. Manchon [@ckm; @ms] and L. Foissy [@lf].\ On the vector space $\Cal H$ spanned by the rooted forests and graded by the number of vertices, A. Connes and D.
Kreimer introduce a Hopf algebra structure where the coproduct is defined by: $$\begin{aligned} \Delta_{CK}(t) &=& \sum_{c \in {\mop{\tiny{Adm}}(t)}} P^c(t)\otimes R^c(t),\end{aligned}$$ where $\mop{Adm}(t)$ is the set of admissible cuts.\ In the same context, D. Calaque, K. Ebrahimi-Fard and the second author introduce on the commutative algebra $\tilde{\Cal H}$ generated by rooted forests a structure of bialgebra graded by the number of edges. The coproduct is defined by: $$\Delta_{\tilde {\Cal H}} (t) = \sum_{s\subseteq t} s \otimes t/s,$$ where $s$ is a covering subforest of the rooted tree $t$ and $t / s$ is the tree obtained by contracting each connected component of $s$ onto a vertex. They also establish a relation between the bialgebra $\tilde{\Cal H}$ obtained this way and the Connes-Kreimer Hopf algebra of rooted trees $\Cal H$ by means of a natural $\tilde{\Cal H}$-comodule structure on $\Cal H$ given by: $$\Phi(t)=\Delta_{\tilde {\Cal H}} (t) = \sum_{s\subseteq t} s \otimes t/s.$$ To be precise, the coaction $\Phi$ and the two coproducts fit into a commutative diagram, making $\Cal H$ a comodule-bialgebra over $\tilde{\Cal H}$ [@ckm].\ In this paper, we define the doubling spaces of $\Cal H$ and $\tilde{\Cal H}$, respectively denoted by $D$ and $\tilde{D}$. More precisely, we denote by ${V}$ the vector space spanned by the couples $(t,s)$ where $t$ is a tree and $s =P^{c_0}(t)$ where $c_0$ is an admissible cut of $t$. The doubling space ${D}$ is the symmetric algebra of $V$, i.e. ${D} := S({V})$. Similarly, if $\tilde V$ is the vector space spanned by the couples $(t,s)$ where $t$ is a tree, and $s$ is a subforest of $t$, the doubling space $\tilde{D}$ is the symmetric algebra of $\tilde V$, i.e. $\tilde D := S(\tilde V)$.
Note that $D$ is strictly included in $\tilde{D}$, as there are subforests $s$ which are not of the form $P^{c_0}(t)$.\ We prove that there exist graded bialgebra structures on $D$ and $\tilde{D}$, where the coproducts are defined respectively by: For all $(t,s) \in D$: $$\Delta(t,s) = \sum_{c \in {\mop{\tiny{Adm}}(s)}} \big(t, P^c(s)\big)\otimes \big( R^c(t), R^c(s)\big),$$ and for all $(t,s) \in \tilde D$: $$\Gamma (t,s) = \sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s').$$ We show that $\Delta(V) \subset V\otimes V$ and $\Gamma (\tilde {V}) \subset \tilde {V}\otimes \tilde {V}$, which allows us to restrict $\Delta$ to $V$ and $\Gamma$ to $\tilde {V}$.\ In the second part of this paper, we prove that $D$ admits a comodule structure over $\tilde D$ given by the coaction $\phi: D \longrightarrow \tilde{D} \otimes D$, which is defined for all $(t,s) \in D$ by restriction of $\Gamma$ to $D$: $$\phi (t,s) = \sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s').$$ The coaction $\phi$ also restricts to $V$, because $\phi (V) \subset \tilde{V}\otimes V$.\ We construct an associative algebra structure on $V$ given by the associative product $\circledast : V \otimes V \longrightarrow V$, defined for all couples of forests $(t,s), (t',s')$ such that $s = P^{c} (t)$ and $s' = P^{c'} (t')$ by: $$(t,s) \circledast (t',s') = \left\lbrace \begin{array}{lcl} (t,s \cup s') \;\;\;\;\; \text{if} \;\; t' = R^c (t)\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.$$ where $s \cup s'$ is the union of $s$ with the pruning of the cut $c'$ lifted to the tree $t$. This product is obtained by dualizing the restriction of the coproduct $\Delta$ to $V$, identifying $V$ with its graded dual using the basis $\{(t,P^c(t)),\, t \hbox{ rooted tree and }c\in\mop{Adm}(t)\}$.
We accordingly construct a second associative algebra structure on $\tilde V$ by dualizing the restriction of the coproduct $\Gamma$ to $\tilde V$, yielding the associative product $\sharp : \tilde V \otimes \tilde V \longrightarrow \tilde V$, defined by: $$(t,s) \sharp (t',s') = \left\lbrace \begin{array}{lcl} (t,s \cup s') \;\;\;\;\; \text{if} \;\; t' = t/s\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right.$$ At the end of this article, we define a new map $\xi : \tilde V \otimes \tilde V \otimes V \longrightarrow \tilde V \otimes V$ by: 1. $\xi \big( (t', s') \otimes (t'',s'')\otimes (u, v)\big) = (t', s' \cup s'')\otimes (t'/(s' \cup s''), v),$\ if $t'' = R^c(t')$, $v = P^{\tilde{c}}(t')$ and $u = t'/s'$, where $c$ is an admissible cut of $t'$, and $\tilde{c}$ is an admissible cut of $t'$ which does not meet $s'$ and $s''$. 2. $\xi \big( (t', s') \otimes (t'',s'')\otimes (u, v)\big) = 0,$\ if $t', t'', s', s'', u$ and $v$ are forests which do not match the conditions of item (1). We prove that the coaction $\phi$ and the map $\xi$ make a certain diagram commute; moreover this diagram extends to a second diagram in which the arrows are algebra morphisms. This second diagram is similar to the commutative diagram verified by D. Calaque, K. Ebrahimi-Fard and the second author in the case of the rooted trees Hopf algebra, and by the second author in the case of cycle-free oriented graphs. The only difference is that the map $m^{13}$ is replaced here by the map $(\xi \otimes id)\circ \tau^{23}$. Hopf algebras of rooted trees ============================= A [*rooted tree*]{} is a finite connected and simply connected oriented graph such that every vertex has exactly one incoming edge, except for a distinguished vertex (the root) which has no incoming edge. The set of rooted trees is denoted by $T$ and the set of rooted trees with $n$ vertices is denoted by $T_n$.
$$\begin{aligned} T_1 &=& \{\racine\}\\ T_2 &=& \{\echela\}\\ T_3 &=& \{\echelb , \arbrey \}\\ T_4 &=& \{\arbreca, \arbrema, \arbreza, \arbredh\}\end{aligned}$$ Let $\mathcal{H} = S({T})$ be the algebra of rooted forests. A. Connes and D. Kreimer [@ad98], [@DK98] showed that this space, graded according to the number of vertices, admits a structure of graded bialgebra. The product is the concatenation, and the coproduct is defined by: $$\begin{aligned} \Delta_{CK}(t) &=& t \otimes \un + \un \otimes t + \sum_{c \in {\mop{\tiny{Adm'}}(t)}} P^c(t)\otimes R^c(t)\\ &=&\sum_{c \in {\mop{\tiny{Adm}}(t)}} P^c(t)\otimes R^c(t),\end{aligned}$$ where $\mop{Adm}(t)$ (resp. $\mop{Adm'}(t)$) is the set of admissible cuts (resp. nontrivial admissible cuts) of a forest, i.e. the set of collections of edges such that any path from the root to a leaf contains at most one edge of the collection. We denote as usual by $P^c(t)$ (resp. $R^c(t)$) the pruning (resp. the trunk) of $t$, i.e. the subforest formed by the edges above the cut $c \in \mop{Adm}(t)$ (resp. the subforest formed by the edges under the cut). Note that the trunk of a tree is a tree, but the pruning of a tree may be a forest. $\un$ stands for the empty forest, which is the unit. One sees easily that $\deg(t) = \deg(P^c(t)) + \deg(R^c(t))$ for all admissible cuts $c$. (See [@ckm] and [@lf]).\ D. Calaque, K. Ebrahimi-Fard and D. Manchon showed that the space $\tilde {\Cal H}$ generated by the rooted forests, graded according to the number of edges, admits a structure of graded bialgebra [@ckm]. The unit is the empty forest, the product is the concatenation, and the coproduct is defined for any non-empty forest $t$ by: $$\Delta_{\tilde {\Cal H}} (t) = \sum_{s\subseteq t} s \otimes t/s,$$ where $s$ is a covering subforest of a rooted tree $t$ and $t / s$ the tree obtained by contracting each connected component of $s$ onto a vertex, i.e. $s$ is a collection of disjoint sub-trees $(t_1, \cdots ,t_n)$ of $t$, covering $t$.
In particular, two sub-trees of the forest have no vertex in common. The tree $t/s$ is obtained by contraction of each connected component of $s$ onto a vertex. The two coproducts applied to the same tree $\arbreza$: $$\Delta_{\tilde{\Cal H}} (\arbreza)=\racine \racine \racine \racine \otimes\arbreza+\arbreza\otimes\racine+2\echela \racine \racine \otimes\arbrey+\echela \racine \racine\otimes\echelb+\echelb \racine\otimes\echela+\arbrey\racine\otimes\echela+\echela\echela\otimes\echela.$$ $$\Delta_{CK}(\arbreza)=\textbf{1}\otimes\arbreza+\arbreza\otimes \textbf{1}+\racine\otimes\echelb+\echela\otimes\echela+\racine\otimes\arbrey+\echela\racine\otimes\racine +\racine\racine\otimes\echela$$ The Hopf algebra $\Cal H'$ is obtained by identifying all elements of degree zero with the unit $\un$: $${\Cal H'} = \wt{\Cal H} / \Cal J$$ where $\Cal J$ is the ideal generated by the elements $\un - t$ where $t$ is a forest of degree zero.\ Identifying the unit with $\racine$, the example of coproduct above becomes: $$\Delta_{\Cal H'} (\arbreza)=\racine \otimes\arbreza+\arbreza\otimes\racine+2\echela \otimes\arbrey+\echela\otimes\echelb+\echelb \otimes\echela+\arbrey\otimes\echela+\echela\echela\otimes\echela.$$ Doubling bialgebras of trees ============================ We have studied the concept of doubling bialgebra in the context of the Hopf algebra of specified Feynman graphs [@MBM]. We have proved that the doubling space of specified Feynman graphs, the vector space spanned by the couples $(\bar\Gamma, \bar\gamma)$ where $\bar\Gamma$ is a locally $1PI$ specified graph of the theory $\Cal T$, $\bar\gamma \subset \bar\Gamma$ is locally $1PI$ and $\bar\Gamma / \bar\gamma$ is a specified graph of $\Cal T$, admits a structure of graded bialgebra (see [@MBM] and [@mbm §3]). Doubling bialgebra $\mathcal{H}_{CK}$ ------------------------------------- Let ${V}$ be the vector space spanned by the couples $(t,s)$ where $t$ is a tree and $s =P^{c_0}(t)$ where $c_0$ is an admissible cut of $t$.
We then define the doubling bialgebra of trees $\mathcal{H}_{CK}$ by ${D} : = S({V})$ and we define the coproduct $\Delta$ for all $(t,s) \in D$ by: $$\Delta(t,s) = \sum_{c \in {\mop{\tiny{Adm}}(s)}} \big(t, P^c(s)\big)\otimes \big( R^c(t), R^c(s)\big).$$ $D$ is a graded bialgebra. The unit $\textbf{1}$ is identified with the empty forest, the counit $\varepsilon$ is given by $\varepsilon (t, s) = \varepsilon (s)$ and the graduation is given by the number of vertices of $s$: $$|(t, s)| = |s|.$$ The product is defined by: $$(t, s)(t', s') = (tt', ss').$$ We now calculate: $$\begin{aligned} (\Delta \otimes id)\Delta (t, s) &=& (\Delta \otimes id) \Big( \sum_{c \in {\mop{\tiny{Adm}}(s)}} \big(t, P^c(s)\big)\otimes \big( R^c(t), R^c(s)\big) \Big)\\ &=& \sum_{c \in {\mop{\tiny{Adm}}(s)}\atop c' \in {\mop{\tiny{Adm}}(P^c(s))}} \Big(t,P^{c'}\big(P^c(s)\big) \Big) \otimes \Big( R^{c'}(t), R^{c'}\big(P^c(s)\big)\Big) \otimes \big( R^c(t), R^c(s)\big)\\ &=& \sum_{{c \in {\mop{\tiny{Adm}}(s)};c' \in {\mop{\tiny{Adm}}(s)}}\atop {c'> c }} \big(t,P^{c'}(s)\big) \otimes \Big( R^{c'}(t), R^{c'}\big(P^c(s)\big)\Big) \otimes \big( R^c(t), R^c(s)\big).\end{aligned}$$ The notation ${c'> c}$ means that the cut $c$ is below the cut $c'$.\ On the other hand, $$\begin{aligned} (id \otimes \Delta)\Delta (t, s) &=& (id \otimes \Delta) \Big( \sum_{c' \in {\mop{\tiny{Adm}}(s)}} \big(t, P^{c'}(s)\big)\otimes \big( R^{c'}(t), R^{c'}(s)\big) \Big)\\ &=& \sum_{c' \in {\mop{\tiny{Adm}}(s)} \atop c \in {\mop{\tiny{Adm}}(R^{c'}(s))}} \big(t, P^{c'}(s)\big)\otimes \Big( R^{c'}(t), P^c \big(R^{c'}(s)\big)\Big)\otimes \Big(R^c( R^{c'}(t)\big), R^c \big( R^{c'}(s)\big)\Big).\end{aligned}$$ The condition $\{ c' \in {\mop{\tiny{Adm}}(s)} ; c \in {\mop{\tiny{Adm}}(R^{c'}(s))} \}$ is equivalent to $\{{c \in {\mop{\tiny{Adm}}(s)} ; c' \in {\mop{\tiny{Adm}}(s)}} \;\text{and}\; {c'> c }\}$ and we obtain the following equalities: $$P^c (R^{c'}(s)) = R^{c'} (P^c (s)), \hskip 5mm R^c( R^{c'}(t)) = R^c(t), \hskip 5mm R^c(
R^{c'}(s)) = R^c(s).$$ Then: $$\begin{aligned} (id \otimes \Delta)\Delta (t, s)&=& \sum_{{c \in {\mop{\tiny{Adm}}(s)};c' \in {\mop{\tiny{Adm}}(s)}}\atop {c'> c }} \big(t,P^{c'}(s)\big) \otimes \Big( R^{c'}(t), R^{c'}\big(P^c(s)\big)\Big) \otimes \big( R^c(t), R^c(s)\big)\\ &=& (\Delta \otimes id)\Delta (t, s).\end{aligned}$$ Hence $(\Delta \otimes id)\Delta = (id \otimes \Delta)\Delta$, and consequently $\Delta$ is coassociative. Finally we have directly: $$\Delta \big((t, s).(t', s')\big) = \Delta (t, s) \Delta (t', s').$$ We remark here that $\Delta (V) \subset V \otimes V$. Indeed, if $(t, s) \in V$ then $\big(t,P^{c}(s)\big) \in V$, since a pruning of $s$ is also a pruning of $t$. Similarly $\big( R^c(t), R^c(s)\big) \in V$ because $R^c(t)$ is a tree, and $R^c(s) = R^c(P^{c_0} (t)) = P^{c_0} (R^c(t))$ is a pruning of $R^c(t)$. So we can restrict the coassociative coproduct $\Delta$ to $V$. \[p1\] The second projection $$\begin{aligned} P_2: D &\longrightarrow& {{\Cal H}} \\ (t, s) &\longmapsto& s\end{aligned}$$ is a bialgebra morphism. The fact that $P_2$ is an algebra morphism is trivial. It suffices to show that $P_2$ is a coalgebra morphism, i.e. that $\Delta \circ P_2 = (P_2 \otimes P_2)\circ \Delta$, which can be seen by direct calculation: $$\begin{aligned} \Delta \circ P_2 (t, s) &=& \Delta(s) \\ &=&\sum_{c \in {\mop{\tiny{Adm}}(s)}} P^c(s)\otimes R^c(s)\\ &=&\sum_{c \in {\mop{\tiny{Adm}}(s)}} P_2 \big(t, P^c(s)\big)\otimes P_2 \big( R^c(t), R^c(s)\big)\\ &=& (P_2 \otimes P_2) \Delta (t, s).\end{aligned}$$ Doubling bialgebra $\tilde{\mathcal H}$ --------------------------------------- Let $\tilde V$ be the vector space spanned by the couples $(t,s)$ where $t$ is a tree, and $s$ is a subforest of $t$. We then define the doubling bialgebra of $\tilde {\Cal H}$ by $\tilde D : = S(\tilde V)$ and we define the coproduct $\Gamma$ for all $(t,s) \in \tilde D$ by: $$\Gamma (t,s) = \sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s').$$ $\tilde D$ is a graded bialgebra.
The unit $\textbf{1}$ is identified with the empty forest, the counit $\varepsilon$ is given by $\varepsilon (t, s ) = \varepsilon (s)$ and the graduation is given by the number of edges of $s$: $$|(t, s)| = |s|.$$ The product is given by: $$(t, s)(t', s') = (tt', ss').$$ The coassociativity of the coproduct $\Gamma$ follows from this calculation: $$\begin{aligned} (\Gamma \otimes id)\Gamma (t, s) &=& (\Gamma \otimes id) \big( \sum_{s' \subseteq s } ( t, s') \otimes (t / s', s / s') \big)\\ &=& \sum_{s' \subseteq s \atop s'' \subseteq s' } ( t, s'')\otimes (t / s'', s' / s'') \otimes (t / s', s / s')\\ &=& \sum_{s'' \subseteq s' \subseteq s } ( t, s'')\otimes (t / s'', s' / s'') \otimes (t / s', s / s'),\end{aligned}$$ whereas $$\begin{aligned} (id \otimes \Gamma)\Gamma (t, s) &=& (id \otimes \Gamma) \big( \sum_{s'' \subseteq s } ( t, s'') \otimes (t / s'', s / s'') \big)\\ &=& \sum_{s'' \subseteq s \atop r \subseteq s/s'' } ( t, s'')\otimes (t / s'', r) \otimes ( (t /s'')/r, (s /s'')/r ).\end{aligned}$$ Since $r \subseteq s/s''$, there exists a forest $s'$ such that $s'' \subseteq s' \subseteq s$ and $r \cong s' / s''$. Hence: $$\begin{aligned} (id \otimes \Gamma)\Gamma (t, s) &=& \sum_{s'' \subseteq s' \subseteq s } ( t, s'')\otimes (t / s'', s' / s'') \otimes ((t /s'')/(s' /s''), (s /s'')/(s' /s''))\\ &=& \sum_{s'' \subseteq s' \subseteq s } ( t, s'')\otimes (t / s'', s' / s'') \otimes (t / s', s / s').\end{aligned}$$ Therefore $(\Gamma \otimes id)\Gamma = (id \otimes \Gamma)\Gamma$, and thus $\Gamma$ is coassociative. Finally we show immediately that: $$\Gamma \big((t, s) (t', s')\big) = \Gamma (t, s)\Gamma (t', s').$$ We note here that $\Gamma (\tilde V) \subset \tilde V \otimes \tilde V$. Indeed, if $(t, s) \in \tilde V$ then $(t, s') \in \tilde V$ and $(t/s', s/s') \in \tilde V$. So we can restrict the coassociative coproduct $\Gamma$ to $\tilde V$.
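The chain structure $s''\subseteq s'\subseteq s$ underlying this coassociativity argument can be checked mechanically for the contraction coproduct on a small tree. The sketch below (ours, not the paper's) encodes a subforest by the subset of edges it retains (an edge is keyed by its child endpoint in a parent array), represents each factor by a canonical nested-tuple forest, and compares the two iterated coproducts as multisets of triples.

```python
import itertools
from collections import defaultdict

def forest(parent, merged, live):
    """Canonical form of the forest whose vertices are the components of
    `merged` (a set of contracted edges) and whose edges are `live`."""
    n = len(parent)
    comp = list(range(n))
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x
    for v in merged:
        comp[find(v)] = find(parent[v])
    kids, has_parent = defaultdict(list), set()
    for v in live:
        kids[find(parent[v])].append(find(v))
        has_parent.add(find(v))
    def build(c):
        return tuple(sorted(build(k) for k in kids[c]))
    reps = {find(v) for v in range(n)}
    return tuple(sorted(build(c) for c in reps - has_parent))

def iterated(parent, order):
    """Triples of (Delta (x) id)Delta(t) ['left'] or (id (x) Delta)Delta(t)
    ['right'] for the contraction coproduct Delta(t) = sum s' (x) t/s'."""
    A = set(range(1, len(parent)))
    terms = []
    for k1 in range(len(A) + 1):
        for B1 in map(set, itertools.combinations(sorted(A), k1)):
            inner = B1 if order == 'left' else A - B1
            for k2 in range(len(inner) + 1):
                for B2 in map(set, itertools.combinations(sorted(inner), k2)):
                    if order == 'left':     # chain s'' = B2 <= s' = B1
                        terms.append((forest(parent, set(), B2),
                                      forest(parent, B2, B1 - B2),
                                      forest(parent, B1, A - B1)))
                    else:                   # s'' = B1, then r = B2 in t/s''
                        terms.append((forest(parent, set(), B1),
                                      forest(parent, B1, B2),
                                      forest(parent, B1 | B2, A - B1 - B2)))
    return terms

left, right = iterated([0, 0, 1], 'left'), iterated([0, 0, 1], 'right')
```

Both sides enumerate the same chains, so the term multisets coincide; for a tree with $k$ edges each side has $3^k$ terms.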
The second projection $$\begin{aligned} P_2: {\wt{D}} &\longrightarrow& {\wt{\Cal H}} \\ (t, s) &\longmapsto& s\end{aligned}$$ is a morphism of graded bialgebras. The fact that $P_2$ is an algebra morphism is trivial; it suffices to show that $P_2$ is a coalgebra morphism, analogously to Proposition \[p1\]: $$\begin{aligned} \Gamma \circ P_2 (t, s) &=& \Gamma(s) \\ &=& \sum_{s' \subseteq s} s'\otimes s / s'\\ &=& \sum_{s' \subseteq s} P_2 (t, s')\otimes P_2 (t/s', s/s')\\ &=& (P_2 \otimes P_2) \Gamma (t, s).\end{aligned}$$ Comodule structure ================== Comodule structure on bialgebras of trees and bialgebras of oriented graphs --------------------------------------------------------------------------- D. Calaque, K. Ebrahimi-Fard and D. Manchon have studied the Connes-Kreimer Hopf algebra $\Cal H$, graded by the number of vertices, as a comodule over the Hopf algebra of rooted trees $\tilde {\Cal H}$ graded by the number of edges [@ckm]. This structure is defined as follows: For $\un$ we have: $\Phi(\un)=\racine \otimes \un$, and for any non-empty tree $t$ we have: $$\Phi(t)=\Delta_{\tilde {\Cal H}} (t) = \sum_{s\subseteq t} s \otimes t/s.$$ We can also write $\Phi(t)$ as follows: $$\begin{aligned} \Phi(t) &=& \Delta_{\tilde {\Cal H}} (t) = \sum_{s\subseteq t} s \otimes t/s\\ &=& \racine \otimes t + \left( t \otimes \racine + \sum_{s \hbox{ \sevenrm proper sub-forest of } t} s \otimes t/s \right ).\end{aligned}$$ D. Calaque, K. Ebrahimi-Fard and the second author showed the existence of a relation between this coaction $\Phi$ and the Connes-Kreimer coproduct $\Delta_{CK}$.
$$\Delta_{CK}(t) = t \otimes \un + \un \otimes t + \sum_{c \in {\mop{\tiny{Adm'}}(t)}} P^c(t)\otimes R^c(t).$$ This relation is given by the following theorem: [@ckm] The identity $$\label{codistrib} (\mop{Id}_{\tilde {\Cal H}} \otimes\Delta_{CK})\circ\Phi=m^{1,3}\circ(\Phi\otimes\Phi)\circ\Delta_{CK}$$ is verified, where $ m^{1,3}:\tilde {\Cal H} \otimes \Cal H \otimes \tilde {\Cal H} \otimes \Cal H \longrightarrow \tilde {\Cal H} \otimes \Cal H \otimes \Cal H $ is defined by: $$m^{1,3}(a\otimes b\otimes c\otimes d)=ac\otimes b\otimes d.$$ Comodule structure on the doubling of the rooted trees bialgebra ---------------------------------------------------------------- We define $\phi : D \longrightarrow \tilde{D} \otimes D$ for all $(t,s) \in D$ by: $$\phi (t,s) = \sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s').$$ The map $\phi$ is well defined. Indeed, if $(t,s) \in D$, i.e. $s = P^{c_0} (t)$ for an admissible cut $c_0$ of $t$, we have: $$s \subseteq t \Longrightarrow s' \subseteq s \subseteq t \Longrightarrow (t, s') \in \tilde D,$$ and $s/s' = P^{c} (t/s')$, where $c$ is the admissible cut deduced from $c_0$. Therefore $(t/s', s/s') \in D$. $D$ admits a comodule structure over $\tilde{D}$ given by $\phi$.
The proof amounts to showing that $(\Gamma \otimes id)\circ \phi = (id \otimes \phi)\circ \phi$. Let $(t,s) \in D$: $$\begin{aligned} (\Gamma \otimes id)\circ \phi (t,s) &=& (\Gamma \otimes id) \big(\sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s')\big)\\ &=& \sum_{s'' \subseteq s'\subseteq s } (t , s'') \otimes (t/s'' , s'/s'') \otimes (t/s', s/s').\end{aligned}$$ On the other hand, we have: $$\begin{aligned} (id \otimes \phi)\circ \phi (t,s) &=& (id \otimes \phi) \big(\sum_{s''\subseteq s} (t , s'') \otimes(t/s'', s/s'')\big)\\ &=& \sum_{s''\subseteq s \atop \tilde s'\subseteq s/s''} (t , s'') \otimes (t/s'' , \tilde s') \otimes ((t/s'')/\tilde s', (s/s'')/\tilde s')\\ &=& \sum_{s'' \subseteq s'\subseteq s} (t , s'') \otimes (t/s'' , s'/s'') \otimes (t/s', s/s').\end{aligned}$$ Then: $$(\Gamma \otimes id)\circ \phi = (id \otimes \phi)\circ \phi,$$ and consequently $\phi$ is a coaction. We note here that $\phi (V) \subset \tilde V \otimes V$. Structures of associative algebras on the doubling spaces ========================================================= Associative Product on $V$ -------------------------- Recall here that an element $(t, s)$ belongs to $V$ if $t$ is a tree and $s = P^c (t)$, i.e. $s$ is the pruning of the tree $t$ for an admissible cut $c$. Let $(t,s)$ and $(t',s')$ be two couples of forests such that $s = P^{c} (t)$ and $s' = P^{c'} (t')$. The product $\circledast : V \otimes V \longrightarrow V$ defined by: $$(t,s) \circledast (t',s') = \left\lbrace \begin{array}{lcl} (t,s \cup s') \;\;\;\;\; \text{if} \;\; t' = R^c (t)\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.$$ where $s \cup s'$ denotes the union in $t$ of $s$ with the pruning $s'$ of the cut $c'$, raised to the tree $t$, is associative. Let $(t,s), (t',s')$ and $(t'',s'')$ be three elements of $V$, i.e. there exist $c \in {\mop{Adm}(t)}, c' \in {\mop{Adm}(t')}$ and $c'' \in {\mop{Adm}(t'')}$ such that $s = P^{c} (t), s' = P^{c'} (t')$ and $s'' = P^{c''} (t'')$.\ We first suppose that $t' = R^c (t)$; otherwise the result is zero.
$$\begin{aligned} \big((t,s) \circledast (t',s')\big)\circledast (t'',s'') &=& (t,\tilde s' ) \circledast (t'',s'')\\ &=& (t,s \cup s' \cup s''), \end{aligned}$$ where $\tilde s' = s \cup s'$ and $t'' = R^{c_{\tilde s'}} (t) = R^{c'} (R^{c}(t)) = R^{c'} (t')$. Then: $$\begin{aligned} \big((t,s) \circledast (t',s')\big)\circledast (t'',s'') = \left\lbrace \begin{array}{lcl} (t,s \cup s' \cup s'') \;\;\;\;\; \text{if} \;\; t' = R^c (t) \;\text{and}\; t'' = R^{c'} (t')\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right. \end{aligned}$$ On the other hand, for $t'' = R^{c'} (t')$ we have: $$\begin{aligned} (t,s) \circledast \big( (t',s')\circledast (t'',s'')\big) &=& (t,s) \circledast (t', \tilde s'')\\ &=& (t,s \cup s' \cup s''), \end{aligned}$$ where $\tilde s'' = s' \cup s''$ and $t' = R^{c} (t)$. Therefore: $$\begin{aligned} (t,s) \circledast \big( (t',s')\circledast (t'',s'')\big) = \left\lbrace \begin{array}{lcl} (t,s \cup s' \cup s'') \;\;\;\;\; \text{if} \;\; t' = R^c (t)\; \text{and}\; t'' = R^{c'} (t')\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right. \end{aligned}$$ We therefore conclude that for all $(t,s), (t',s')$ and $(t'',s'')$ in $V$ we have: $$\big((t,s) \circledast (t',s')\big)\circledast (t'',s'') = (t,s) \circledast \big( (t',s')\circledast (t'',s'')\big),$$ which proves the associativity of the product $\circledast$. Associative product on $\tilde V$ --------------------------------- Recall here that an element $(t, s)$ belongs to $\tilde V$ if $s$ is a subforest of the tree $t$. \[th5\] The product $ \sharp : \tilde V \otimes \tilde V \longrightarrow \tilde V$ defined by: $$(t,s) \sharp (t',s') = \left\lbrace \begin{array}{lcl} (t,s \cup s') \;\;\;\;\; \text{if} \;\; t' = t/s\\ 0 \;\; \;\; \;\; \;\; \text{if not} \end{array}\right.$$ is associative. Let $(t,s), (t',s')$ and $(t'',s'')$ be three elements of $\tilde V$, i.e. $s \subseteq t$, $s' \subseteq t'$ and $s'' \subseteq t''$.\ We first suppose that $t' = t/s$; if not, the result is zero.
$$\begin{aligned} \big((t,s) \sharp (t',s')\big)\sharp (t'',s'') &=& (t,\tilde s') \sharp (t'',s'')\\ &=& (t,s \cup s' \cup s''), \end{aligned}$$ where $\tilde s' = s \cup s'$ and $t'' = t/\tilde s' = (t/s)/s' = t'/s'$. Therefore: $$\begin{aligned} \big((t,s) \sharp (t',s')\big)\sharp (t'',s'') = \left\lbrace \begin{array}{lcl} (t,s \cup s' \cup s'') \;\;\;\;\; \text{if} \;\; t' = t/s \;\text{and}\;\; t'' = t'/s'\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right. \end{aligned}$$ On the other hand, for $t'' = t'/s'$ we have: $$\begin{aligned} (t,s) \sharp \big( (t',s')\sharp (t'',s'')\big) &=& (t,s) \sharp (t', \tilde s'')\\ &=& (t,s \cup s' \cup s''), \end{aligned}$$ where $\tilde s'' = s' \cup s''$ and $t' = t/s$. Therefore: $$\begin{aligned} (t,s) \sharp \big( (t',s')\sharp (t'',s'')\big) = \left\lbrace \begin{array}{lcl} (t,s \cup s' \cup s'') \;\;\;\;\; \text{if} \;\; t' = t/s \;\;\;\text{and}\;\;\; t'' = t'/s'\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right. \end{aligned}$$ We conclude that for all $(t,s), (t',s')$ and $(t'',s'')$ in $\tilde V$ we have: $$\big((t,s) \sharp (t',s')\big) \sharp (t'',s'') = (t,s) \sharp \big( (t',s')\sharp (t'',s'')\big),$$ which proves the associativity of the product $\sharp$. Relations between the laws on $V$ and $\tilde V$ ================================================ The map $ \psi : \tilde V \otimes V \longrightarrow V$ defined by: $$\psi \big((t,s) \otimes (u, P^c (u))\big) = \left\lbrace \begin{array}{lcl} (t, P^{\tilde c}(t)) \;\;\;\;\; \text{if} \;\; u = t/s\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.$$ where ${\tilde c}$ is the raising of $c$ to $t$, is an action of $\tilde V$ on $V$. We have to verify that $\psi \circ (id \otimes \psi) = \psi \circ (\sharp \otimes id)$. Let $(t,s)$ and $(t',s')$ be two elements of $\tilde V$, and $(u, P^c (u)) \in V$.
We have: $$\begin{aligned} (id \otimes \psi)\big((t,s)\otimes(t',s')\otimes (u, P^c (u))\big) = \left\lbrace \begin{array}{lcl} (t,s)\otimes (t', P^{\bar c} (t')) \;\; \text{if} \;\; u = t'/s'\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right. \end{aligned}$$ where ${\bar c}$ is the raising of $c$ to $t'$. Then: $$\begin{aligned} \psi \circ (id \otimes \psi)\big((t,s)\otimes (t',s')\otimes (u, P^c (u))\big) = \left\lbrace \begin{array}{lcl} \big(t, P^{\tilde {c}} (t)\big) \;\; \text{if} \;\; t' = t/s \;\;\text{and} \;\; u = t'/s' \\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right. \end{aligned}$$ where ${\tilde c}$ is the raising of $\bar c$ to $t$, i.e. ${\tilde c}$ is the raising of $c$ to $t$. On the other hand, we have: $$\begin{aligned} (\sharp \otimes id)\big((t,s)\otimes (t',s')\otimes (u, P^c (u))\big) = \left\lbrace \begin{array}{lcl} (t, s \cup s')\otimes (u, P^c (u)) \;\; \text{if} \;\; t' = t/s\\ 0 \;\; \;\; \;\; \;\; \text{if not}. \end{array}\right.\end{aligned}$$ Then: $$\begin{aligned} \psi \circ (\sharp \otimes id)\big((t,s)\otimes (t',s')\otimes (u, P^c (u))\big) = \left\lbrace \begin{array}{lcl} \big(t, P^{\tilde {c}} (t)\big) \;\; \text{if} \;\; t' = t/s ,\;\;\; u = t/(s \cup s') = t'/s' \\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right. \end{aligned}$$ where ${\tilde c}$ is the raising of $c$ to $t$. We then conclude: $$\psi \circ (id \otimes \psi) = \psi \circ (\sharp \otimes id),$$ which proves that $\psi$ is an action of $\tilde V$ on $V$.
The following identity holds: $$(\phi\otimes id)\circ \Delta = (id\otimes \Delta)\circ \phi.$$ Therefore, $\Delta$ is a comodule morphism from $(D, \phi)$ to $(D\otimes D, \phi\otimes id)$. We have: $$\begin{aligned} (id\otimes \Delta)\circ \phi (t,s) &=& (id\otimes \Delta) \big(\sum_{s'\subseteq s} (t , s') \otimes(t/s', s/s')\big)\\ &=& \sum_{s'\subseteq s \atop c \in {\mop{\tiny{Adm}}(s/s')}} (t , s') \otimes \big(t/s' , P^{c} (s/s')\big) \otimes \big( R^{c}(t/s'), R^{c}(s/s')\big).\end{aligned}$$ On the other hand, we have: $$\begin{aligned} (\phi\otimes id)\circ \Delta (t,s) &=& (\phi\otimes id) \Big(\sum_{c' \in {\mop{\tiny{Adm}}(s)}} \big(t, P^{c'}(s)\big)\otimes \big( R^{c'}(t), R^{c'}(s)\big)\Big) \\ &=& \sum_{c' \in {\mop{\tiny{Adm}}(s)} \atop s'\subseteq P^{c'}(s)} (t, s')\otimes \big(t/s' , P^{c'} (s) / s'\big) \otimes \big( R^{c'}(t), R^{c'}(s)\big).\end{aligned}$$ The conditions $\{ s'\subseteq s\; \text{and}\; c \in {\mop{Adm}(s/s')}\}$ and $\{c' \in {\mop{Adm}(s)}\;\text{and}\; s'\subseteq P^{c'}(s)\}$ are equivalent, where $c'$ is the raising of $c$. We then obtain the following equalities: $P^{c'} (s) / s' = P^{c} (s/ s')$, $R^{c'}(t) = R^{c}(t/s')$ and $R^{c'}(s) = R^{c}(s/s')$, which gives: $$\begin{aligned} (\phi\otimes id)\circ \Delta (t,s) &=& \sum_{c' \in {\mop{\tiny{Adm}}(s)} \atop s'\subseteq P^{c'}(s)} (t, s')\otimes \big(t/s' , P^{c'} (s) / s'\big) \otimes \big( R^{c'}(t), R^{c'}(s)\big) \\ &=& \sum_{s'\subseteq s \atop c \in {\mop{\tiny{Adm}}(s/s')}} (t , s') \otimes \big(t/s' , P^{c} (s/s')\big) \otimes \big( R^{c}(t/s'), R^{c}(s/s')\big).\end{aligned}$$ Therefore: $$(\phi\otimes id)\circ \Delta = (id\otimes \Delta)\circ \phi,$$ which proves the identity.
The map $\psi$ verifies the following identity: $$\psi \circ (id \otimes \circledast) = \circledast \circ (\psi \otimes id).$$ Let $(u, P^c (u))$, $(u', P^{c'} (u'))$ be two elements of $V$ and $(t,s) \in \tilde V$. We have: $$\begin{aligned} (id \otimes \circledast) \big((t,s) \otimes(u, P^c (u))\otimes (u', P^{c'} (u'))\big) &=& \left\lbrace \begin{array}{lcl} (t,s) \otimes \big(u, P^{c} (u) \cup P^{c'} (u')\big) \;\; \text{if} \;\; u' = R^c (u)\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.\end{aligned}$$ that is: $$\begin{aligned} (id \otimes \circledast) \big((t,s) \otimes(u, P^c (u))\otimes (u', P^{c'} (u'))\big) &=& \left\lbrace\begin{array}{lcl} (t,s) \otimes \big(u, P^{\bar c'} (u) \big) \;\; \text{if} \;\; u' = R^c (u)\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.\end{aligned}$$ where ${\bar c'}$ is the raising of $c'$ to $u$. Then: $$\begin{aligned} \psi \circ (id \otimes \circledast) \big((t, s) \otimes (u, P^c (u))\otimes (u', P^{c'} (u'))\big) &=& \left\lbrace \begin{array}{lcl} \big(t, P^{\tilde c'} (t) \big) \;\; \text{if} \;\; u = t/s, \;\; \; u' = R^{c} (u)\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.\end{aligned}$$ where ${\tilde c'}$ is the raising of $\bar c'$ to $t$; the cut ${\tilde c'}$ can also be seen as the raising of $c'$ to $t$.
On the other hand, we have: $$\begin{aligned} (\psi \otimes id) \big((t, s)\otimes (u, P^{c} (u))\otimes (u', P^{c'} (u'))\big) = \left\lbrace \begin{array}{lcl} \big(t, P^{\tilde c} (t) \big) \otimes \big(u', P^{c'} (u')\big) \;\; \text{if} \;\; u = t/s\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.\end{aligned}$$ where ${\tilde c}$ is the raising of $c$ to $t$. Then: $$\begin{aligned} \circledast \circ(\psi \otimes id) \big((t, s)\otimes (u, P^{c} (u)) \otimes (u', P^{c'} (u')\big) &=& \left\lbrace\begin{array}{lcl} \big(t, P^{\tilde c} (t) \cup P^{c'} (u') \big) \;\; \text{if} \;\; {u = t/s, \atop u' = R^{\tilde c} (t)}\\ 0 \;\; \;\; \;\; \;\; \text{if not} \end{array}\right.\\ &=& \left\lbrace\begin{array}{lcl} \big(t, P^{\tilde c'} (t) \big) \;\; \text{if} \;\;\; {u = t/s, \atop u' = R^{\tilde c} (t) = R^{c} (u)}\\ 0 \;\; \;\; \;\; \;\; \text{if not}, \end{array}\right.\end{aligned}$$ where ${\tilde c'}$ is the raising of $c'$ to $t$. Therefore: $$\psi \circ (id \otimes \circledast) = \circledast \circ (\psi \otimes id),$$ which proves the identity. Let $\xi : \tilde V \otimes \tilde V \otimes V \longrightarrow \tilde V \otimes V$ be the map defined by: 1. $\xi \big( (t', s') \otimes (t'',s'')\otimes (u, v)\big) = (t', s' \cup s'')\otimes (t'/(s' \cup s''), v),$\ if $t'' = R^c(t')$, $v = P^{\tilde{c}}(t')$ and $u = t'/s'$, where $c$ is an admissible cut of $t'$, and $\tilde{c}$ is an admissible cut of $t'$ which does not meet $s'$ and $s''$. 2. $\xi \big( (t', s') \otimes (t'',s'')\otimes (u, v)\big) = 0,$\ if $t', t'', s', s'', u$ and $v$ do not satisfy the conditions of item (1).
The two maps $\phi$ and $\xi$ verify the identity: $$(id \otimes \Delta) \circ \phi = (\xi \otimes id ) \circ \tau^{23} \circ (\phi\otimes \phi) \circ \Delta.$$ We have: $$\begin{aligned} (id\otimes \Delta)\circ \phi (u,v) &=& (id\otimes \Delta) \big(\sum_{s\subseteq v} (u , s) \otimes(u/s, v/s)\big)\\ &=& \sum_{s\subseteq v \atop c \in {\mop{\tiny{Adm}}(v/s)}} (u , s) \otimes \big(u/s , P^{c} (v/s)\big) \otimes \big( R^{c}(u/s), R^{c}(v/s)\big).\end{aligned}$$ On the other hand, we have: $$\begin{aligned} &&(\xi \otimes id ) \circ \tau^{23} \circ (\phi\otimes \phi) \circ \Delta (u,v) = (\xi \otimes id ) \circ \tau^{23}\Big(\sum_{c \in {\mop{\tiny{Adm}}(v)}} \phi \big(u, P^{c}(v)\big)\otimes \phi\big( R^{c}(u), R^{c}(v)\big)\Big) \\ &&= (\xi \otimes id ) \circ \tau^{23} \Big(\sum_{c \in {\mop{\tiny{Adm}}(v)}} \sum_{s' \subseteq P^{c} (v) \atop s'' \subseteq R^{c} (v)}\hskip-0.2cm(u, s')\otimes \big(u/s', P^{c} (v)/s'\big) \otimes \big(R^{c}(u), s''\big) \otimes \big(R^{c}(u)/s'' , R^{c}(v)/s'' \big)\Big)\\ &&= (\xi \otimes id )\Big(\sum_{c \in {\mop{\tiny{Adm}}(v)}} \sum_{s' \subseteq P^{c} (v) \atop s'' \subseteq R^{c} (v)}(u, s')\otimes \big(R^{c}(u), s''\big) \otimes \big(u/s', P^{c} (v)/s'\big)\otimes \big(R^{c}(u)/s'' , R^{c}(v)/s'' \big)\Big)\\ &&= \sum_{c \in {\mop{\tiny{Adm}}(v)}} \sum_{s' \subseteq P^{c} (v) \atop s''\subseteq R^{c} (v)}(u, s' \cup s'')\otimes \Big(u\big\slash s'\cup s'', P^{c} (v)\Big\slash\ (s'\cup s'')\cap P^{c} (v)\Big) \otimes \big(R^{c}(u)/s'' , R^{c}(v)/s'' \big)\\ &&=\sum_{c \in {\mop{\tiny{Adm}}(v)}} \sum_{s \subseteq v \atop \text{containing no edge of $c$}}\hskip-0.8cm(u, s)\otimes \big(u/s, P^{c} (v)\big\slash\ s\cap P^{c} (v)\big) \otimes \big(R^{c}(u)\big\slash\ s\cap R^{c}(u) , R^{c}(v)\big\slash\ s\cap R^{c}(v) \big)\\ &&= \sum_{s \subseteq v} \sum_{c \in {\mop{\tiny{Adm}}(v/s)}}(u, s)\otimes \big(u/s, P^{c} (v/s)\big) \otimes \big(R^{c}(u/s) , R^{c}(v/s) \big).\end{aligned}$$ Hence: $(id\otimes
\Delta)\circ \phi = (\xi \otimes id ) \circ \tau^{23} \circ (\phi\otimes \phi) \circ \Delta$, which proves the theorem. We note here that this result extends to a commutative diagram in which the arrows are now algebra morphisms.
--- abstract: 'We apply the teleparallelism condition to the Poincaré gauge theory of gravity. The resultant teleparallelized cosmology is completely equivalent to the Friedmann cosmology derived from Einstein’s general theory of relativity. The torsion is shown to play the role of the cosmological constant driving the cosmic acceleration. We then extend such a theory to include the effect of spin and explore the possibility of accounting for the current accelerating universe by a spinning dark energy.' author: - Wenjie Lu - Wolung Lee - 'Kin-Wang Ng' bibliography: - 'tpgt1.bib' nocite: '[@*]' title: 'Teleparallel Poincaré Cosmology and $\Lambda$CDM Model ' --- Introduction ============ Recent observational data from type Ia supernovae, cosmic microwave background (CMB) anisotropies, and large scale structure concordantly reveal an accelerating flat universe containing a mixture of matter and a preponderant smooth component with effective negative pressure [@Ia1; @Ia2; @wmap7; @sdss]. Though suffering from the so-called cosmological constant problem and the coincidence problem [@cc; @coin], the model of cold dark matter with a cosmological constant in the framework of general relativity (GR), i.e. the $\Lambda$CDM model, has thus become the standard scenario in cosmology over the past decade. There exist, however, other options to account for the much perplexing accelerated expansion of the universe, among which the generalized teleparallel gravity, or the $f(T)$ gravity [@ft1; @ft2; @ft3; @ft4; @ft5], has recently attracted much attention. The $f(T)$ theory originates from the teleparallel equivalent of general relativity (TEGR) [@TGoverview; @introductionTG; @tegrev], which is based upon the notion of absolute parallelism (teleparallelism) initiated by Einstein in an unsuccessful attempt to unify gravitation and electromagnetism [@einstein].
As an alternative geometrical formulation of GR, the TEGR employs a non-trivial dynamical tetrad field, $e_{i}\!^{\mu}$, determined by a given metric to define a linear Weitzenböck connection, $\Gamma^{\mu}\!_{\alpha \beta}=e_i\!^\mu\partial_\beta e_\alpha\!^i$. This particular choice of Weitzenböck connection makes the curvature tensor vanish. Thus, the spacetime is flat and the gravitational degrees of freedom are completely specified by the Weitzenböck torsion, ${T}^{\mu}\!_{\alpha \beta}=\Gamma^{\mu}\!_{\alpha \beta}-\Gamma^{\mu}\!_{\beta \alpha}$. This equivalence between teleparallel gravity and GR implies that both torsion and curvature are capable of offering effectively the same picture but with different interpretations regarding the gravitational field. For example, particle geodesics in GR are now replaced by trajectories described by a force equation similar to that of the Lorentz force in electrodynamics [@torsionforce]. Furthermore, it is argued that the principle of general covariance ultimately prefers torsion to curvature [@TGreapprisal]. Generalizations of teleparallel gravity basically replace the Lagrangian density $T$ of the TEGR with various algebraic functions of the torsion scalar, $f(T)$. The torsion tensor so defined contains only products of first derivatives of the tetrads and consequently gives rise to second order field equations. Evidently, for the sake of computation, this feature is regarded as a significant advantage of the $f(T)$ theory compared to other modified gravity theories, such as the $f(R)$ theory [@frrev]. However, these general teleparallel gravity theories are certainly not free of pathologies: they do not respect local Lorentz covariance [@FFinf; @barrow1], i.e. a local Lorentz transformation would inevitably spoil the absolute parallelism except in the simplest TEGR case. It has been shown [@barrow2] that the local Lorentz invariance cannot even be regained by adding a spin connection back to the action.
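As a concrete illustration (not part of the paper), the Weitzenböck connection and torsion defined above can be evaluated numerically for a spatially flat FRW tetrad $e = \mathrm{diag}(1,a,a,a)$. The toy scale factor $a(t)=t^2$ and the finite-difference derivative are assumptions made purely for the check; the nonzero torsion components come out equal to the Hubble rate $H=\dot a/a$, which equals $1$ at $t=2$ for this choice.

```python
def tetrad(t):
    # toy scale factor a(t) = t**2 (assumption, for illustration only)
    a = t * t
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, a, 0.0, 0.0],
            [0.0, 0.0, a, 0.0],
            [0.0, 0.0, 0.0, a]]

def weitzenbock(t, h=1e-6):
    """Gamma^mu_{alpha beta} = e_i^mu d_beta e_alpha^i and
    T^mu_{alpha beta} = Gamma^mu_{alpha beta} - Gamma^mu_{beta alpha};
    by homogeneity only the time derivative (beta = 0) is nonzero."""
    e = tetrad(t)
    inv = [[1.0 / e[i][i] if i == j else 0.0 for j in range(4)]
           for i in range(4)]                 # inverse of the diagonal tetrad
    ep, em = tetrad(t + h), tetrad(t - h)
    de = [[[(ep[i][m] - em[i][m]) / (2.0 * h) if b == 0 else 0.0
            for b in range(4)] for m in range(4)] for i in range(4)]
    gamma = [[[sum(inv[r][i] * de[i][m][b] for i in range(4))
               for b in range(4)] for m in range(4)] for r in range(4)]
    torsion = [[[gamma[r][m][b] - gamma[r][b][m]
                 for b in range(4)] for m in range(4)] for r in range(4)]
    return gamma, torsion

gamma, T = weitzenbock(2.0)
```

The only nonvanishing connection components are $\Gamma^{a}{}_{a0}=\dot a/a$ for $a=1,2,3$, so the torsion carries exactly the Hubble expansion, consistent with the curvature tensor vanishing identically for this connection.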
Moreover, since a proper tetrad field used to construct a teleparallel structure cannot be uniquely specified by a given metric, the lack of local Lorentz symmetry will cause difficulties in formulating a satisfactory cosmological model to account for the late time accelerated expansion [@FFcos]. In light of the fact that the Weitzenböck connection is simply a special choice of the affine structure that makes the curvature tensor identically null, to avoid those defects of the $f(T)$ gravity while retaining the notion of absolute parallelism, we thus resort to the more general gravitational theory: the Poincaré gauge theory (PGT) of gravity [@PGTHehl; @BnH; @3lecturesPGT]. The PGT includes two independent local translational and rotational potentials, the tetrad and the affine connection, which respectively correspond to the torsion and curvature in a Riemann-Cartan geometry. The gauge structure and geometric properties of Riemann-Cartan geometry allow the PGT to become an alternative gravity theory to GR. In particular, when the torsion vanishes, the PGT reduces to GR and we recover a Riemann geometry. On the other hand, the PGT reduces to a generalized teleparallel theory of gravity by imposing the condition of absolute parallelism. These features provide us with a sound framework to inspect the dynamics induced by the teleparallelism and compare the result with that from standard GR. Prior to the discovery of the accelerating universe, a few PGT cosmological models had been worked out in detail by Goenner and Müller-Hoissen [@goenner]. More recently, attempts have been made to interpret the source driving the accelerated expansion within the PGT [@OscillatingUniverse; @SNY; @lxz2012; @gengpgt]. In our study here, we explore the possibility of explaining the cosmic acceleration by the teleparallelized PGT (TPGT). In such a theory, the form of torsion is strongly constrained by the teleparallelism condition.
We will show that, by adopting the constrained form of torsion, the resulting formulation governing a homogeneous and isotropic universe is identical to that obtained from GR. Therefore, one asserts that the TPGT is in essence simply a generalized TEGR. As a consequence, the dynamical role played by the cosmological constant along the cosmic evolution and its connection to the Poincaré torsion will be unraveled. Moreover, we investigate the possibility of a spinning fluid as the source driving the cosmic acceleration by including the spin effect in the TPGT. Unlike the Einstein-Cartan theory [@GRspin; @spinfluid3], which is considered a natural extension of GR, the gravitational effect in the TPGT is characterized by torsion, not by curvature. Thus, the usual formulation of the Weyssenhoff fluid [@spinfluid1; @spinfluid2] does not really apply in our case. In particular, it has been shown that the dust Weyssenhoff fluid model is unable to serve as an alternative to dark energy [@spinfluid4]. However, if the spin tensor of the cosmic fluid takes a specific form relating the Poincaré torsion scalar and the spin density, we are able to show that the formulation describing the Poincaré cosmology is actually equivalent to that of the standard Friedmann cosmology plus a spinning dark energy. Under certain circumstances, the cosmological constant can be regarded as a sort of spinning vacuum energy yet to be scrutinized. This paper is organized as follows: In Sec. II we summarize the essential and necessary ingredients of the PGT to set the framework for our investigation. Section III explores the form of the Poincaré torsion by imposing the teleparallelism constraint, which requires that all components of the curvature tensor vanish identically. We then explore the Poincaré cosmology in Sec. IV and reveal the connections between the cosmological constant and the torsion. Finally, we summarize our findings and discuss the implications in Sec. V.
The Essential Poincaré gauge field theory of gravity ==================================================== The torsion and the curvature ----------------------------- The Poincaré gauge field theory of gravity employs two independent local translation and rotation gauge potentials, the orthonormal frame field (tetrad) $e_{i}\!^{\alpha}$ and the Lorentz connection $\Gamma_{i\beta}\!^{\alpha}$, to characterize both the curvature and torsion in a Riemann-Cartan spacetime. By means of the covariant derivative $D_i$, the two field strength tensors associated with the gravitational potentials are the torsion $$F_{ij} \! ^{\alpha } \equiv 2D_{[i} e_{j]} \! ^{\alpha } =2\left(\partial _{[i} e_{j]} \! ^{\alpha } +\Gamma _{[i|\beta } \! ^{\alpha } e_{|j]} \! ^{\beta } \right) ,$$ and the curvature $$R_{ij\alpha } \! ^{\beta } \equiv 2D_{[i} \Gamma _{j]\alpha } \! ^{\beta } =2\left(\partial _{[i} \Gamma _{j]\alpha } \! ^{\beta } +\Gamma _{[i|\gamma } \! ^{\beta } \Gamma _{|j]\alpha } \! ^{\gamma } \right) .$$ Here the Latin indices $i,j,...$ denote the holonomic (coordinate) base, and the Greek letters ${\alpha},{\beta},...$ represent the anholonomic (Lorentz) indices. The reciprocal frame $e^{i}\!_{\mu}$ satisfies $e^{i}\!_{\mu}e_{i}\!^{\nu}={\delta}_{\mu}\!^{\nu}$ and $e^{i}\!_{\mu}e_{j}\!^{\mu}={\delta}_{j}\!^{i}$. The metric tensor is uniquely determined by $$g_{ij}=e_{i}\!^{\mu}e_{j}\!^{\nu}{\eta}_{\mu\nu},$$ where the Minkowski metric ${\eta}_{\mu\nu}={\rm diag}(-1, +1, +1, +1)$. In terms of a set of space-time coordinates ([*i.e.*]{} a holonomic frame), the affine connection of the Riemann-Cartan geometry can be cast in the form $${\Gamma} _{ij} \!^{k} =\bar{\Gamma }_{ij} \!^{k} +\frac{1}{2} \left(F_{ij} \!^{k} +F^{k} \!_{ij} +F^{k} \!_{ji} \right) ,$$ where $F_{ij} \! 
^{k} $ is the torsion tensor in the holonomic frame, and $\bar{\Gamma }_{ij} \!^{k}$ is the Levi-Civita connection (the Christoffel symbol) $$\bar{\Gamma }_{ij} \!^{k} =\frac{1}{2} g^{km} (g_{mj,i} +g_{mi,j} -g_{ij,m} ).$$ Accordingly, the affine curvature tensor, the Ricci curvature, and the scalar curvature can be obtained from $$\begin{aligned} R_{ijk} \!^{l} &=&\partial _{i} \Gamma _{jk} \!^{l} -\partial _{j} \Gamma _{ik} \!^{l} +\Gamma _{im} \!^{l} \Gamma _{jk} \!^{m} -\Gamma _{jm} \!^{l} \Gamma _{ik} \!^{m}, \\ R_{ij} &=&R_{kij} \!^{k}, \\ R&=&g^{ij} R_{ij},\end{aligned}$$ respectively. The field equations ------------------- The conventional PGT action assumes the form [@PGTHehl] $$\begin{aligned} W=\int & d^{4} x e[L_{m}(\eta _{\alpha \beta ,} ... ,\psi ,D_{\alpha } \psi ) \\ & +L_{g}(\kappa _{1} ,\kappa _{2} , ... ,\eta _{\alpha \beta } ,F_{\alpha \beta } \!^{\gamma } ,R_{\alpha \beta \gamma} \!^{\delta } )] , \end{aligned}$$ where $e=\mathrm{det}(e_{i} \!^{\mu})$, and $\kappa_{1},\kappa_{2}, ... $ are some coupling constants. The action is dictated by the geometric gauge field Lagrangian density $eL_{g}(e_{i} \!^{\mu} ,\partial_{j} e_{i} \!^{\mu} ,\Gamma_{i \mu} \!^{\nu} ,\partial_{j} \Gamma_{i \mu} \!^{\nu})=eL_{g}(e_{i} \!^{\mu} ,F_{ij} \!^{\mu} ,R_{ij \mu} \!^{\nu})$, as well as by the minimally coupled source Lagrangian density $eL_{m}(e_{i}\!^{\mu},\Gamma_{i \mu} \!^{\nu},\psi,\partial_{i}\psi)=eL_{m}(e_{i}\!^{\mu},\psi,D_{i}\psi)$, in which $\psi$ represents all the matter fields. Varying the action $W$ with respect to the geometric gauge field potentials $e_{i} \!^{\alpha}$ and $\Gamma_{i}\!^{\alpha \beta}$ yields two gravitational field equations, $$\begin{aligned} \frac{\delta eL_{g}}{\delta e_{i} \,^{\alpha } } &=-\frac{\delta eL_{m}}{\delta e_{i} \,^{\alpha } } \equiv e\Sigma _{\alpha } \!^{i}, \\ \frac{\delta eL_{g}}{\delta \Gamma _{i} \! ^{\alpha \beta } } &=-\frac{\delta eL_{m}}{\delta \Gamma _{i} \! 
^{\alpha \beta } } \equiv e S_{\alpha \beta } \!^{i} ,\end{aligned}$$ where the source terms $\Sigma _{\alpha } \!^{i} $ and $S _{\alpha \beta } \!^{i} $ represent the canonical momentum current and the canonical spin current governed by the corresponding energy-momentum and angular momentum conservation laws, respectively. Carrying out variations on the left-hand side of Eqs. (10) and (11), one obtains the so-called first (or translational) and second (or rotational) gravitational gauge field equations as, respectively, $$\begin{aligned} D_{j} H_{\alpha } \!^{ij} -\varepsilon _{\alpha } \!^{i} &=e\Sigma _{\alpha } \!^{i} ,\\ D_{j} H_{\alpha \beta } \!^{ij} -\varepsilon _{\alpha \beta } \!^{i} &=e S_{\alpha \beta } \!^{i} ,\end{aligned}$$ where the translational field momenta are given by $$H_{\alpha } \!^{ij} = \frac{\partial eL_{g}}{\partial \partial_{j} e_{i} \!^{\alpha}} =2\frac{\partial eL_{g}}{\partial F_{ji} \!^{\alpha } } ,$$ the rotational field momenta are described by $$H_{\alpha \beta } \!^{ij} = \frac{\partial eL_{g}}{\partial \partial_{j} \Gamma_{i} \!^{\alpha\beta}} =2\frac{\partial eL_{g}}{\partial R_{ji} \!^{\alpha \beta } } ,$$ the momentum current (energy-momentum density) is defined as $$\varepsilon _{\alpha } \!^{i} = e_{\alpha } \!^{i} eL_{g}-F_{\alpha j} \!^{\gamma } H_{\gamma } \!^{ji} -R_{\alpha j} \!^{\gamma \delta } H_{\gamma \delta } \!^{ji} ,$$ and the spin current (spin angular momentum density) is characterized by $$\varepsilon _{\alpha \beta } \!^{i} = H_{[\beta \alpha ]} \!^{i} .$$ The general Lagrangian density containing the scalar curvature together with terms quadratic in torsion and curvature, developed by Baekler and Hehl [@BnH], can be written as $$\begin{aligned} L_{\rm BH} &= \frac{1}{2\kappa } R +\frac{1}{4} F_{\alpha \beta } \!^{\gamma } (d_{1} F_{\gamma } \!^{\alpha \beta } +d_{2} F_{\gamma } \!^{\beta \alpha } +d_{3} \delta _{\gamma } ^{\beta } F_{\mu } \!^{\alpha \mu } ) \\ &-\frac{1}{4\xi } R_{\alpha \beta \gamma \delta } 
[R^{\alpha \beta \gamma \delta } +f_{1} R ^{\alpha \gamma \beta \delta } +f_{2} R^{\gamma \delta \alpha \beta } \\ &+f_{3} \eta ^{\alpha \delta } R^{\beta \gamma } +f_{4} \eta ^{\alpha \delta } R^{\gamma \beta } +f_{5} \eta ^{\alpha \delta } \eta ^{\beta \gamma } R] , \end{aligned}$$ where $\kappa,\xi,f_{A},d_{A}$ correspond to various dimensionless coupling constants. Adopting this form as the gravitational Lagrangian density $L_g$, the first equation (12) becomes $$\begin{aligned} &\frac{1}{\kappa} (F_{\alpha } \!^{i} -\frac{1}{2} Fe^{i} \!_{\alpha } ) \\ &+\frac{1}{e} D_{j} (ed_{1} F^{ji} \!_{\alpha } +ed_{2} F_{\alpha } \!^{\left[ij\right]} +ed_{3} e^{\left[i\right. } \!_{\alpha } F^{\left. j\right]\gamma } \!_{\gamma } ) \\ &+F_{\alpha j} \!^{\gamma } (d_{1} F^{ij} \!_{\gamma } +d_{2} F_{\gamma } \!^{\left[ji\right]} +d_{3} e^{\left[j\right. } \!_{\gamma } F^{\left. i\right]\mu } \!_{\mu } ) \\ &-\frac{1}{4} e^{i} \!_{\alpha } F_{\mu \nu } \!^{\lambda } (d_{1} F^{\mu \nu } \!_{\lambda } +d_{2} F_{\lambda } \!^{\nu \mu } +d_{3} \delta ^{\nu } _{\lambda } F^{\mu \gamma } \!_{\gamma } ) \\ &+\frac{1}{\xi } R_{\alpha j} \!^{\gamma \delta } (R^{ji} \!_{\gamma \delta } +f_{1} R^{[j} \!_{\gamma } {} ^{i]} {} \!_{\delta } +f_{2} R_{\gamma \delta } \!^{ji} \\ &+f_{3} e^{[j} \!_{[\delta } R^{i]} \!_{\gamma ]} +f_{4} e^{[j} \!_{[\delta } R_{\gamma ]} \!^{i]} +f_{5} Re^{j} \!_{[\delta } e^{i} \!_{\gamma ]} ) \\ &-\frac{1}{4\xi } e^{i} _{\alpha } R_{\mu \nu \gamma \delta } [R^{\nu \mu \gamma \delta } +f_{1} R^{\nu \gamma \mu \delta } +f_{2} R^{\gamma \delta \nu \mu } \\ &+f_{3} \eta ^{\nu \delta } R^{\mu \gamma } +f_{4} \eta ^{\nu \delta } R^{\gamma \mu } +f_{5} \eta ^{\nu \delta } \eta ^{\gamma \mu } R] =\Sigma _{\alpha }\!^{i}, \end{aligned}$$ and the second equation (13) can be explicitly recast as $$\begin{aligned} &\frac{\xi }{2\kappa } (F _{\alpha \beta } \!^{i} +2e^{i} \!_{[\alpha } F_{\beta ]k} \!^{k} ) \\ &+\frac{1}{e} D_{j} (eR^{ij} \!_{\alpha \beta } +ef_{1} 
R^{[i} \!_{[\alpha } {} ^{j]} {} \!_{\beta ]} +ef_{2} R_{\alpha \beta } \!^{ij} \\ &+ef_{3} e^{[i} \!_{[\beta } R^{j]} \!_{\alpha ]} +ef_{4} e^{[i} \!_{[\beta } R_{\alpha ]} \!^{j]} +ef_{5} Re^{i} \!_{[\beta } e^{j} \!_{\alpha ]} ) \\ &+\xi (d_{1} F^{i} \!_{[\beta \alpha ]} -\frac{1}{2} d_{3} e^{i} \!_{[\alpha } F_{\beta ]} \!^{\mu } {} \!_{\mu } +\frac{3}{4} d_{2} F_{[\alpha \beta } \!^{i]} \\ &+\frac{1}{4} d_{2} F_{\alpha \beta } \!^{i} ) =\xi S _{\alpha \beta } \!^{i}. \end{aligned}$$ However complicated they may be, we shall use these field equations directly to probe the dynamics of the universe. Teleparallel Poincaré Torsion ============================= We now apply the PGT formalism to the large-scale universe. Under spherical symmetry and spatial reflection invariance, there are only six non-null independent components in the torsion tensor $F_{\alpha \beta \gamma }$ (antisymmetric in the first pair of subscripts $\alpha, \beta$) in spherical coordinates [@BnH]. The cosmological principle requires that these quantities depend only upon time. Accordingly, we assume the non-vanishing torsional components to take the forms $$\begin{aligned} &F_{010} =-F_{100} =-f(t) , \\ &F_{011} =-F_{101} =-h(t) , \\ &F_{122} =F_{133} =-F_{212} =-F_{313} =g(t) ,\\ &F_{022} =F_{033} =-F_{202} =-F_{303} =-\chi(t) , \end{aligned}$$ where $f,h,g,\chi$ are torsion functions to be determined. The large-scale homogeneity and isotropy can be modeled by the Friedmann-Robertson-Walker (FRW) metric in spherical coordinates, $$ds^2 =-dt^2 + a(t) ^2 \left[ \frac{dr^2}{1-k r^2} + r^2(d \theta ^2 + \sin ^{2}\theta d \phi ^2) \right] ,$$ where $a(t)$ represents the cosmic scale factor, and the curvature index $k=0, +1, -1$ corresponds to, respectively, a flat, a closed, and an open universe. According to Eq. (3), the FRW metric is uniquely determined by the following set of non-trivial tetrads: $$\begin{aligned} e_{t} \! ^{0} &=1,& e_{r} \! 
^{1} =\frac{a}{\sqrt{1-k r^2}}, \\ e_{\theta } \! ^{2} &=ar,& e_{\phi } \! ^{3} =ar\sin \theta ,\\ \end{aligned}$$ whose dual (reciprocal) frames are, respectively, $$\begin{aligned} e_{~0}^{t} &=1, &e_{~1}^{r} =\frac{\sqrt{1-k r^2}}{a} , \\ e_{~2}^{\theta } &=\frac{1}{ar} , &e_{~3}^{\phi } =\frac{1}{ar\sin \theta } . \end{aligned}$$ Once the set of tetrads is chosen, the local Lorentz symmetry [@barrow1; @barrow2] is no longer an issue, and one has the freedom to work out the teleparallel gravity theory in either a holonomic frame (spacetime coordinates) or an anholonomic frame (tetrads), since both frames should deliver identical results. This merit is worth mentioning because it allows corresponding consistency checks. Here, we present our investigation in terms of spacetime coordinates. The torsion $F_{ij} \! ^{k} $ in a holonomic frame can be translated from the torsion $F_{\alpha \beta}\!^ {\gamma }$ in an anholonomic frame through the relation $F_{ij} \!^{k} =e_{i} \!^{\alpha } e_{j} \!^{\beta } e_{\gamma } \!^{k} F_{\alpha \beta } \!^{\gamma }$. Accordingly, the six non-zero components of torsion assume the form $$\begin{aligned} F_{tr} \!^{t} &=-F_{rt} \!^{t} =\frac{af}{\sqrt{1-k r^2}} , &F_{tr} \!^{r} =-F_{rt} \!^{r}=-h ,\\ F_{r\phi } \!^{\phi } &=-F_{\phi r} \!^{\phi }=\frac{ag}{\sqrt{1-k r^2}} , &F_{t\phi } \!^{\phi } =-F_{\phi t} \!^{\phi }=-\chi ,\\ F_{r\theta } \!^{\theta } &=-F_{\theta r} \!^{\theta }=\frac{ag}{\sqrt{1-k r^2}} , &F_{t\theta } \!^{\theta } =-F_{\theta t} \!^{\theta }=-\chi. \end{aligned}$$ Meanwhile, all non-trivial components of the affine curvature tensor, the Ricci curvature, and the scalar curvature can be obtained by means of Eqs. (6)-(8). Since the torsion and the curvature are intertwined through the affine connection \[see Eqs. (1) and (2)\], one is able to determine the exact form of the torsion tensor by imposing the teleparallelism condition, i.e. 
insisting on the absolute parallel property when transporting a given vector along a curve. It is straightforward but quite tedious to work out all the teleparallelism constraints. They consist of six independent equations from the vanishing curvature tensor components, $$\begin{aligned} &\dot{h}+\frac{\dot{a}}{a} h+\frac{\ddot{a}}{a} =0, \\ &gh+\frac{\dot{a}}{a} g+\frac{\chi-h}{ar} \sqrt{1-kr^2}=0 ,\\ &\dot{\chi}+\frac{\dot{a}}{a} \chi+fg+\frac{\ddot{a}}{a} -\frac{f}{ar}\sqrt{1-kr^2}=0, \\ &\frac{k}{a^2} + h\chi+\frac{\dot{a}}{a} \chi+\frac{\dot{a}}{a}h+\frac{\dot{a}^2}{a^2} +\frac{g}{a r}\sqrt{1-kr^2}=0 , \\ &\frac{k}{a^2} + \chi^{2}+ 2\frac{\dot{a}}{a}\chi-g^{2} +\frac{\dot{a}^{2}}{a^2} + 2\frac{g}{ar} \sqrt{1-kr^2}=0 ,\\ &f\chi+\dot{g}+\frac{\dot{a}}{a}g+\frac{\dot{a}}{a}f=0, \end{aligned}$$ three equations from the vanishing Ricci curvature components, $$2\dot{\chi}+2\frac{\dot{a}}{a} \chi+\dot{h}+\frac{\dot{a}}{a} h+2fg+3\frac{\ddot{a}}{a} -\frac{2f}{ar}\sqrt{1-kr^2}=0,$$ $$\frac{2k}{a^2}+2h\chi+2\frac{\dot{a}}{a} \chi+\dot{h}+3\frac{\dot{a}}{a} h+\frac{\ddot{a}}{a} +2\left(\frac{\dot{a}}{a} \right)^{2} +\frac{2g}{ar}\sqrt{1-kr^2}=0,$$ $$\begin{aligned} &\frac{2k}{a^2}+\dot{\chi}+\chi^{2} +h\chi+4\frac{\dot{a}}{a} \chi+\frac{\dot{a}}{a} h-g^{2} +fg+\frac{\ddot{a}}{a} \\ &+2\left(\frac{\dot{a}}{a} \right)^{2} +\frac{3g-f}{ar}\sqrt{1-kr^2}=0 , \end{aligned}$$ and one equation from the vanishing scalar curvature, $$\begin{aligned} &\frac{6k}{a^2} + 4\dot{\chi}+2\chi^{2} +4h\chi+12\chi\frac{\dot{a}}{a} +2\dot{h}+6h\frac{\dot{a}}{a} -2g^{2} \\ &+4fg +6\frac{\ddot{a}}{a} +6\left(\frac{\dot{a}}{a} \right)^{2} +\frac{8g-4f}{ar}\sqrt{1-kr^2} =0. \end{aligned}$$ The only consistent result fulfilling these curvatureless Eqs. 
(26)-(35) is that $g=f=0$ and $h=\chi$, with two constraints $$\begin{aligned} \chi^2 + 2\frac{\dot{a}}{a}\chi +\frac{\dot{a}^2}{a^2} +\frac{k}{a^2}=0,\\ \dot{\chi}+\frac{\dot{a}}{a} \chi+\frac{\ddot{a}}{a} =0.\end{aligned}$$ As a consequence, the torsion functions are $$g=f=0 ,~~~h=\chi=\frac{\sqrt{-k}}{a}-\frac{\dot{a}}{a} ,$$ and we obtain the non-null torsion components as [^1] $$\begin{aligned} F_{tr} \!^{r} =-F_{rt} \!^{r} &=F_{t\theta } \!^{\theta } =-F_{\theta t} \!^{\theta } \\ &=F_{t\phi } \!^{\phi } =-F_{\phi t} \!^{\phi } =\frac{\dot{a}}{a}-\frac{\sqrt{-k}}{a}. \end{aligned}$$ Evidently, the form of the Poincaré torsion is strongly constrained by imposing the teleparallelism condition. Teleparallel Poincaré cosmology =============================== Imposing the teleparallelism condition on the PGT not only strongly constrains the form of the Poincaré torsion, but also greatly simplifies the field equations (19) and (20): since the curvature terms vanish identically, only the terms quadratic in torsion survive. In this section, we explore the torsion effect on the cosmological dynamics and extend our scheme to include a spinning fluid as a source driving the cosmic acceleration. To match the current observations, it is sufficient to consider only the case of a flat universe with $k=0$. TPGT as a generalized TEGR -------------------------- It is natural to consider the intrinsic spin of elementary particles as one of the physical sources for torsion. However, it is usually assumed that the averaged spin density of matter is tiny and randomly oriented in macroscopic regimes [@GRspin; @SNY]. This hypothesis amounts to neglecting the spin angular momentum tensor $S_{ijk}$ in the second field equation. This is precisely the situation on which standard GR concentrates. For a spatially flat universe, $k=0$ and Eq. 
(39) gives rise to the non-trivial components of the Poincaré torsion as $$F_{tr} \!^{r} =-F_{rt} \!^{r} =F_{t\theta } \!^{\theta } =-F_{\theta t} \!^{\theta } =F_{t\phi } \!^{\phi } =-F_{\phi t} \!^{\phi } =\frac{\dot{a}}{a} .$$ Substituting Eq. (40) into the second field equation (20), we obtain the nontrivial components of the spin angular momentum tensor as $$\begin{aligned} a\dot a \zeta &=& S_{trr} = -S_{rtr} ,\\ a\dot a\zeta r^2 &=& S_{t \theta \theta} = -S_{\theta t \theta} ,\\ a\dot a \zeta r^2 \sin^2{\theta} &=& S_{t \phi \phi} = -S_{\phi t \phi} ,\end{aligned}$$ where $$\zeta=\frac{1}{2} d_{1} + \frac{1}{4} d_{2} +\frac{3}{4} d_{3} -\frac{1}{\kappa }.$$ If one neglects the spin angular momentum, i.e. requires that $S_{ijk}=0$, then $\zeta=0$ must hold as long as the universe keeps expanding ($a\dot a \neq 0$). Thus, the dimensionless coupling constants $d_1, d_2$ and $d_3$ must satisfy the following constraint $$\frac{1}{2} d_{1} + \frac{1}{4} d_{2} +\frac{3}{4} d_{3} =\frac{1}{\kappa }.$$ On the other hand, substituting Eq. (40) into Eq. (19), the first field equation gives rise to $$\begin{aligned} {3\left( \frac{1}{2} d_{1} + \frac{1}{4} d_{2} +\frac{3}{4} d_{3}\right)}\left(\frac{\dot{a}}{a} \right)^{2} &=\Sigma_{tt}, \\ - {6\left(\frac{1}{2} d_{1} + \frac{1}{4} d_{2} +\frac{3}{4} d_{3}\right)}\left(\frac{\ddot{a}}{a} \right) &=\Sigma,\end{aligned}$$ where $\Sigma_{tt}$ is the time-time component of the energy-momentum tensor of the source, and $\Sigma$ represents its trace. 
Assuming a perfect fluid as the matter source, its energy-momentum tensor takes the form $$\Sigma_{ij}=(\rho + p)u_{i}u_{j} + p g_{ij},$$ where $u_{i}$ is the four-velocity, $\rho$ denotes the energy density, $p=w\rho$ is the pressure, and $w$ represents the equation-of-state parameter. As a consequence, we have $\Sigma_{tt}=\rho$ and the trace $\Sigma=\rho+3p=(1+3w)\rho$. Along with the constraint Eq. (45), Eqs. (46) and (47) become $$\begin{aligned} H^2 &= \left(\frac{\dot{a}}{a} \right)^{2} =\frac{\kappa }{3} \rho , \\ \frac{\ddot{a}}{a} &=-\frac{\kappa }{6} \left(1 +3w\right)\rho,\end{aligned}$$ where $H\equiv\dot a/a$ is the Hubble parameter characterizing the rate of the cosmic expansion. One immediately finds that, when the spin angular momentum is ignored, the TPGT recovers the Friedmann cosmology provided that $$\kappa=8\pi G$$ with $G$ being the Newtonian gravitational constant. In this sense, the TPGT can be considered a generalized version of TEGR. Torsion effects on the cosmic expansion --------------------------------------- In general, a torsion scalar $\mathcal{T}$ can be defined as $$\mathcal{T} ^2 \equiv F_{\alpha \beta } \!^{\gamma } (F_{\gamma } \!^{\alpha \beta } + F_{\gamma } \!^{\beta \alpha } + \delta _{\gamma } ^{\beta } F_{\mu } \!^{\alpha \mu } ).$$ Subsequently, Eq. (40) indicates that, in a spatially flat universe, the torsion scalar reads $$\mathcal{T} = 3\frac{\dot{a}}{a} .$$ Taking the time derivative of the torsion scalar gives $$\mathcal{\dot{T}}=3 \left[\frac{\ddot{a}}{a}-\left(\frac{\dot{a}}{a}\right)^2 \right].$$ Comparing Eqs. (49)-(50) with Eqs. (53)-(54), one finds that a simple relation holds between the equation of state, the torsion scalar, and its time derivative: $$\mathcal{\dot{T}}=-{1\over 2}(1+w)\mathcal{T}^2.$$ Current observations using baryon acoustic oscillation (BAO) and CMB data indicate that the equation of state of the expanding universe is always negative and close to $-1$ [@planck13]. 
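Equation (55) can be verified symbolically: define $\mathcal{T}=3\dot a/a$, eliminate $\rho$ by means of the Friedmann equation (49), and substitute the acceleration equation (50) into $\dot{\mathcal{T}}$. A short verification sketch using the sympy library (the symbol names are ours):

```python
import sympy as sp

# Symbols: cosmic time t, coupling constant kappa, equation-of-state parameter w
t, kappa, w = sp.symbols('t kappa w')
a = sp.Function('a', positive=True)(t)

H = sp.diff(a, t) / a        # Hubble parameter H = adot/a
T = 3 * H                    # torsion scalar, Eq. (53)

# Friedmann equations (49)-(50): eliminate rho via Eq. (49)
rho = 3 * H**2 / kappa
addot_over_a = -kappa * (1 + 3 * w) * rho / 6

# dT/dt = 3*(addot/a - H^2), Eq. (54), with the acceleration equation inserted
Tdot = 3 * (addot_over_a - H**2)

# Claimed relation, Eq. (55): dT/dt = -(1+w) T^2 / 2
assert sp.simplify(Tdot + (1 + w) * T**2 / 2) == 0
print("Eq. (55) holds identically")
```

For $w=-1$ the right-hand side of Eq. (55) vanishes, which is the constant-$\mathcal{T}$ (de Sitter) case.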
Accordingly, Eq. (55) with $w=-1$ shows that $\mathcal{\dot T} = 0$, i.e. the torsion scalar $\mathcal{T}$ remains constant no matter how the universe evolves. As a consequence, the universe exhibits a de Sitter expansion with a scale factor evolving as $$a(t)=a_{0} \exp\left({\mathcal{T}\over 3} t\right),$$ where $a_{0}$ is an integration constant. Hence, the torsion scalar $\mathcal{T}$ plays the role of the cosmological constant, which is naturally encoded in the teleparallel Poincaré cosmology. Universe containing a spin fluid -------------------------------- Having shown that the framework of the TPGT is consistent with GR, we now extend the theory to include the spin angular momentum as the source of torsion. With the help of a four-velocity $u_i$, the macroscopic spin tensor of a curvatureless universe can be characterized as $$S_{ijk} = u_{i}S_{jl} S_{k} \!^{l} - u_{j} S_{il} S_{k}\! ^ {l},$$ where $S_{ij}$ is the spin density tensor in a holonomic frame. Because the intrinsic spin is space-like, the Frenkel condition [@spinfluid2] $$S_{jk}u^k=0$$ is automatically satisfied. The direct products of two spin density tensors in Eq. (57) allow us to contract them after transforming the pair to an anholonomic frame. We then obtain the square of the spin density $S^2$ as $$S^2 = S_{\alpha\beta} S^{\alpha\beta}.$$ As a consequence, the macroscopic spin tensor becomes $$S_{ijk}=\left( u_{i} g_{jk}-u_{j} g_{ik} \right)S^2.$$ Employing this form in Eqs. (41)-(43), the nontrivial components of the spin angular momentum tensor generated by the second equation are governed by $$\begin{aligned} a \dot{a} \zeta &= -S^2 a^2,\\ a \dot{a} r^2 \zeta &= -S^2 a^2 r^2,\\ a \dot{a} r^2 \sin^2{\theta} \zeta &= -S^2 a^2 r^2 \sin^2{\theta} ,\end{aligned}$$ which unanimously lead to a constraint on the square of the spin density, $$S^2=-\frac{\dot{a}}{a} \zeta .$$ Substituting Eq. (64) into Eqs. 
(46)-(47), the Friedmann equation becomes $$\left(\frac{\dot{a}}{a} \right)^{2} =\frac{\kappa }{3} \rho + \kappa \frac{\dot{a}}{a} S^2,$$ and the acceleration equation is given by $$\frac{\ddot{a}}{a} =-\frac{\kappa }{6}\left(1+3w \right)\left(\rho + 3 S^2 \frac{\dot{a}}{a} \right).$$ In order to disentangle the role played by intrinsic spins from the gravitational effect, we assume that the spin density tensor $S_{\alpha\beta}$ can be characterized by the torsion scalar $\mathcal{T}$ as $$S_{\alpha\beta}=\frac{1}{\sqrt{2 \mathcal{T}}}s_{\alpha\beta},$$ where $s_{\alpha\beta}$ denotes the reduced spin density such that $s^2 = \frac{1}{2} s_{\alpha\beta} s^{\alpha\beta}$. Accordingly, Eqs. (65) and (66) become $$\begin{aligned} \left(\frac{\dot{a}}{a} \right)^{2} &=&\frac{\kappa }{3}\rho_{\rm eff},\\ \frac{\ddot{a}}{a} &=&-\frac{\kappa }{6} (1+3w)\rho_{\rm eff},\end{aligned}$$ with the effective energy density of the spin fluid described by $$\rho_{\rm eff}=\rho+s^2\equiv\rho+\rho_s.$$ That is, we recover the standard Friedmann formulation for a homogeneous and isotropic universe. We note that, according to Eq. (70), the square of the spin density makes a positive contribution to the total energy density of the universe, contrary to the Weyssenhoff spin fluid in the framework of the Einstein-Cartan theory [@spinfluid4]. In addition, a dust spin fluid model with $w=0$ is incapable of explaining the late-time cosmic acceleration. Now let us assume that Eqs. (68) and (69) describe the evolution of an expanding universe driven by the energy content of the vacuum. Then, if the vacuum energy itself vanishes, $\rho \rightarrow 0$ with $p=-\rho$ ($w=-1$), the spin energy would serve as an alternative to dark energy responsible for the cosmic acceleration. Therefore, the cosmological constant $\Lambda$ is no more than a uniform spin energy density in the context of the spinning TPGT. Conclusion ========== In this work we investigated the cosmological effects of the TPGT. 
Adopting the specific torsion form constrained by the teleparallelism condition, we found that the formulation of the teleparallel Poincaré cosmology is completely equivalent to that of the Friedmann cosmology. As a consequence, the naturally built-in torsion scalar $\mathcal{T}$ plays the role of the cosmological constant $\Lambda$, and the universe exhibits a de Sitter expansion provided that the equation of state is $w=-1$. When the teleparallel framework is extended to include a spinning fluid as the source of the Poincaré torsion, the standard Friedmann prescription for a homogeneous and isotropic universe is recovered, as long as the macroscopic spin tensor satisfies Eq. (60) with a spin density governed by Eq. (67). Contributing positively to the effective energy density of the universe, such a dust spinning fluid with $w=0$ acts just like normal matter. If, however, the vacuum energy with $w=-1$ vanishes, the vacuum spin may be considered the source driving the late-time cosmic acceleration. Much effort has been devoted to the cosmological constant problem, namely, why the dark energy that we observe is so much smaller than any known energy scale, or why it is not exactly zero, as it would be if protected by some symmetry. The results here may open another window for us to understand the cosmological constant problem. While the vacuum energy is thought to originate from zero-point energy fluctuations, the spin energy may be associated with spin fluctuations of the vacuum, which should be very different from zero-point energies. Understanding the microscopic spin structure of the vacuum is a very interesting subject, and its effect on cosmological scales as alluded to here should be studied further. The authors are grateful to Wai Bong Yeung and Jim Nester for helpful suggestions and discussions. This work was supported in part by the National Science Council, Taiwan, ROC under the Grant No. 
NSC101-2112-M-001-010-MY3, and by the Office of Research and Development, National Taiwan Normal University, Taiwan, ROC under the Grant No. 102A05. [^1]: Equation (38) is a shorthand notation for consistency checks of the $k\neq 0$ models. The complex value in the $k=+1$ case arises naturally from the comoving coordinate $r$ used in the metric Eq. (22). It would disappear if we employed another comoving coordinate $x$ defined by integrating $dx=(1-kr^2)^{-1/2}dr$.
--- abstract: 'Large scale Hartree-Fock-Bogoliubov (HFB) calculations with the finite-range Gogny force D1S have been performed in order to extract the corresponding theoretical average mass dependence of the nuclear gap values. Good agreement with experimental data from the three-point filter $\Delta^{(3)}(N)$ with $N$ odd has been found for both the neutron and proton gaps. The results of our study support earlier findings \[W. Satu[ł]{}a, J. Dobaczewski, and W. Nazarewicz, Phys. Rev. Lett. [**81**]{} 3599 (1998)\] that the mass dependence of the gap is much weaker than the so far accepted $12 \, A^{-1/2}\,$ MeV law.' address: - '$^1$Département de Physique Théorique et Appliquée, CEA/DAM Ile-de-France, B.P. 12 - F91680 Bruyères-le-Châtel, France' - '$^2$Institute of Theoretical Physics, University of Warsaw, ul. Hoża 69, PL-00-681 Warsaw, Poland' - '$^3$Royal Institute of Technology, Physics Department Frescati, Frescativägen 24, S-104 05 Stockholm, Sweden' - '$^4$Institut de Physique Nucléaire, Université Paris–Sud, F-91406 Orsay Cedex, France' author: - 'S. Hilaire$^{1}$, J.-F. Berger$^{1}$, M. Girod$^{1}$, W. Satu[ł]{}a$^{2,3}$, and P. Schuck$^{4}$' title: 'Mass Number Dependence of Nuclear Pairing.' --- Pairing Gaps ,Hartree-Fock-Bogoliubov calculations ,Gogny Force 21.10.Dr ,21.10.Pc ,21.60.Jz. In recent years, the study of pairing properties in systems of condensed matter so small that the coherence length of the Cooper pairs becomes comparable with the size of the system has increased considerably. This is the case for ultra small superconducting metallic grains [@braun] but one also thinks that magnetically trapped fermionic atoms like $^{6}$Li can become superfluid at temperatures which may be reached experimentally in the near future [@marco]. A fermionic system of finite size where the superfluid properties have been studied experimentally and theoretically since decades is the atomic nucleus [@bohr]. 
Very efficient mean field approaches have been developed in the past to account quantitatively for a great amount of experimental data. One of the most successful models in this context is that developed by Gogny and collaborators with the use of a finite range effective interaction D1S [@Dec80; @cpc91]. However, the global mass number (A) dependence of the gap has never been investigated in a systematic way using this force. Such a study has now become particularly timely because it has been observed [@Sat98] that the commonly accepted law for the average gap parameter ($\Delta=12 \, A^{-1/2}\,$ MeV) strongly overestimates the gap values in light nuclei. Empirical information concerning gap parameters can be derived in principle from large-scale analysis of odd-even staggering (OES) of nuclear binding energies. One should bear in mind, however, that there are two basic physical mechanisms behind OES, namely: (i) an effect of spontaneous breaking of spherical symmetry (Jahn-Teller mechanism [@Jah37] or shape effect) and (ii) the blocking of pair correlation by an unpaired fermion. Determination of the pairing component of OES therefore requires a careful deconvolution, at least to the extent possible, of both effects. Thus the aim of this work is to demonstrate that the average gap parameters at the Fermi surface deduced from large-scale unconstrained Gogny-HFB calculations are consistent with the pairing component of OES deduced from empirical data according to the method proposed in [@Sat98]. In particular, it will be shown that the theoretical A dependence of the average gap is much weaker than the $12/\sqrt{A}$ dependence, in agreement with the experimental data analysis performed in [@Sat98]. 
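For orientation, the canonical law gives the following average gaps; it is the steep rise toward small A that the analysis of [@Sat98] found to be too strong (a trivial numerical illustration):

```python
# Canonical average-gap law, Delta = 12 A^{-1/2} MeV, for a few mass numbers
gaps = {A: 12.0 / A**0.5 for A in (20, 50, 120, 250)}
for A, gap in gaps.items():
    print(f"A = {A:3d}:  Delta = {gap:.2f} MeV")
```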
The simplest way to quantify the OES of binding energies is to use the three-point filter $$\label{d3} \Delta^{(3)}(N) = {\pi_N \over 2} [B(N-1) + B(N+1) -2B(N)]$$ where $\pi_N = (-1)^N$ is the number parity and $B(N)$ the (negative) binding energy of the system of particle number $N$. Eq. (\[d3\]) assumes proton number $Z$ to be fixed and thus provides neutron OES. An expression appropriate for proton OES can be obtained by replacing $N$ with $Z$ in Eq. (\[d3\]) and by fixing the neutron number $N$. In nuclear structure studies, filter (\[d3\]) is not considered as an appropriate measure of the neutron or proton pairing gaps. This is mainly due to strong symmetry energy \[$B_{sym}\propto (N-Z)^2$\] contributions. However, because symmetry energy is number-parity independent and rather weakly depends on shell effects, its influence can be removed by using higher order filters like the four-point formula: $$\Delta^{(4)}(N)=\frac{1}{2} [\Delta^{(3)}(N) + \Delta^{(3)}(N-1)] \label{d4}.$$ Global analysis of empirical data using filter (\[d4\]) leads to the commonly used estimate $\Delta = 12 \, A^{-1/2}\,$ MeV for the pairing gap [@Zeldes]. This classical way of reasoning leading from formula (\[d3\]) to (\[d4\]) has its roots in the macroscopic-microscopic model. It assumes [@Jen84] that the major contribution \[apart from pairing\] to (\[d3\]) comes from the smooth liquid-drop component of the total energy (or more precisely from the symmetry energy term as mentioned above) while the shell-correction energy $\delta E_{shell}$ varies slowly enough with $N$ and $Z$ to neglect its contribution to (\[d3\]) or (\[d4\]). This assumption is, however, hardly acceptable because $\delta E_{shell}$ is by definition the difference between the strongly oscillating shell-energy, $E_{sp}=\sum_{occup} e_i$, and the smooth Strutinsky-smeared energy, $\tilde{E}_{sp}$. 
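Both filters are elementary to evaluate from a table of binding energies. The sketch below applies them to made-up $B(N)$ values (a smooth quadratic part plus 1 MeV of extra binding for even $N$), not to measured masses:

```python
def delta3(B, N):
    """Three-point odd-even staggering filter, Eq. (1); B maps N -> B(N) at fixed Z."""
    return (-1) ** N / 2.0 * (B[N - 1] + B[N + 1] - 2.0 * B[N])

def delta4(B, N):
    """Four-point filter, Eq. (2): average of two adjacent three-point values."""
    return 0.5 * (delta3(B, N) + delta3(B, N - 1))

# Illustrative (negative) binding energies in MeV: smooth part plus a pairing
# term giving even-N systems 1 MeV of extra binding
pairing = 1.0
B = {N: -8.0 * N + 0.01 * N**2 - (pairing if N % 2 == 0 else 0.0)
     for N in range(10, 21)}

print(delta3(B, 15), delta3(B, 16), delta4(B, 16))
```

The smooth part enters only through its second difference (0.02 MeV here, entering with opposite signs for odd and even $N$), so both filters return values close to the 1 MeV pairing term put in by hand.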
The single-particle (sp) shell-energy term, $E_{sp}$, gives rise to OES which is well recognized in metallic clusters [@Man94]. In the extreme case of independent particles (fermions) filling two-fold Kramers-degenerate levels of a fixed, deformed potential well, the sp OES is $\Delta^{(3)}_{sp}(2n+1) \approx 0$ and $\Delta^{(3)}_{sp}(2n) \approx (e_{n+1}-e_n)/2$, where $e_n$ and $e_{n+1}$ stand for [*effective*]{} Nilsson levels at the Fermi energy [@Sat98]. In a previous study [@Sat98], it has been demonstrated, using self-consistent Skyrme-Hartree-Fock calculations, that the contribution to (\[d3\]) due to the smooth Strutinsky energy, $\tilde{E}_{sp}$, nearly cancels out the contribution coming from the liquid-drop symmetry energy. Consequently, only $\Delta (N)\equiv\Delta^{(3)}(N=2n+1)$ can be considered as a probe of the [*pairing*]{} component of OES, while $\Delta^{(3)}(N=2n)$ mixes both mean-field and pairing effects. Note that filter (\[d4\]) always mixes pairing and sp components whatever the number-parity is. These ideas have recently been tested within a wide class of exactly solvable models invoking monopole pairing Hamiltonians [@Dob; @gatl]. Although such models always oversimplify various properties of complex nuclei, these studies clearly indicate the correctness of the proposed method, particularly for weak and intermediate pairing correlations, which is by far the most commonly encountered situation in finite nuclei. In this case a consistency between the BCS (or HFB) pairing gap and the $\Delta^{(3)}(2n+1)$ filter has been found as well. We therefore think that $\Delta^{(3)}(N=2n+1)$ is the best suited filter for the extraction of gap values from experimental data. One should be aware, however, that there is an ongoing debate concerning the detailed interpretation of $\Delta^{(3)}(N=2n+1)$ as well as higher order filters \[$\Delta^{(5)}$\] [@Bender; @Dug1; @Dug2]. 
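The independent-particle limit quoted above, $\Delta^{(3)}_{sp}(2n+1)\approx 0$ and $\Delta^{(3)}_{sp}(2n)\approx(e_{n+1}-e_n)/2$, is easy to check with a schematic model: fermions filling two-fold degenerate levels of a fixed well, with no pairing interaction at all (the level energies below are arbitrary):

```python
# Doubly degenerate single-particle levels e_1 < e_2 < ... (arbitrary units)
levels = [0.0, 1.3, 2.1, 3.6, 4.4]

def energy(N):
    """Ground-state energy of N independent fermions; each level holds 2 particles."""
    occ = [e for e in levels for _ in range(2)]
    return sum(occ[:N])

def delta3_sp(N):
    """Three-point filter, Eq. (1), applied to the independent-particle energies."""
    return (-1) ** N / 2.0 * (energy(N - 1) + energy(N + 1) - 2.0 * energy(N))

print([delta3_sp(N) for N in (3, 5, 7)])   # odd N: no staggering without pairing
print([delta3_sp(N) for N in (4, 6)])      # even N = 2n: half the level spacing
```

With no pairing present, the odd-$N$ filter returns (numerically) zero while the even-$N$ filter picks up half the sp level spacing, which is exactly why only $\Delta^{(3)}(2n+1)$ isolates pairing.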
In particular, the effect of [*time-odd*]{} mean fields was intensively studied in Ref. [@Dug2]. This mean-field effect indeed enters directly the empirical $\Delta^{(3)}(N=2n+1)$ \[as well as $\Delta^{(4,5)}$\] through odd-A nuclei and should, in principle, be removed explicitly. However, our knowledge concerning its magnitude is highly uncertain. For example, the systematic Skyrme-Hartree-Fock calculations of Ref. [@gatl] indicate attractiveness (repulsiveness) of this effect for SLy4 (SIII, SkM$^*$), respectively, with an average absolute magnitude of the order of 100 keV in odd-A light nuclei. Extensive HFB calculations have been performed in order to determine the ground state structure of nearly 400 even-even nuclei located in the neighborhood of those for which experimental pairing gaps have been extracted [@Sat98]. The D1S parameterization of the Gogny Force [@Dec80; @cpc91] has been employed throughout this work. Theoretical pairing gaps are then deduced from the pairing field obtained with this force in these nuclei, with the purpose of making comparisons with experimental gaps. It is of importance to point out that the calibration of the matrix elements of the Gogny force in the pairing channel has been based on OES in tin isotopes [@Dec80] and that the A dependence of the calculated pairing gaps is ultimately governed by self-consistency requirements of the HFB solutions. According to the Bogoliubov theory [@bogo], the quasiparticle states can be obtained from the iterative diagonalization of the HFB Hamiltonian $$\label{bog} H = \left( \begin{array}{cc} h -\mu I & -\Delta\\ -\Delta & -h +\mu I \end{array} \right) ,$$ where $h$ and $\Delta$ are the matrices of the Hartree-Fock hamiltonian – the sum of the nucleon kinetic energy and average field – and of the pairing field in the harmonic oscillator (HO) basis, respectively, $\mu$ represents the chemical potential ensuring conservation of nucleon numbers, and $I$ is the unit matrix. 
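In the BCS-like limit in which $h$ and $\Delta$ are simultaneously diagonal, the Hamiltonian (3) decouples into a $2\times 2$ block per level with eigenvalues $\pm\sqrt{(\varepsilon-\mu)^2+\Delta^2}$, the familiar quasiparticle energies. A minimal numerical illustration (toy numbers, not a Gogny-force calculation):

```python
import numpy as np

eps, mu, gap = 0.8, 1.0, 0.5   # single-particle energy, chemical potential, pairing gap

# 2x2 block of the HFB Hamiltonian, Eq. (3), for a single level
H = np.array([[ eps - mu, -gap       ],
              [-gap,      -(eps - mu)]])

E = np.linalg.eigvalsh(H)                 # eigenvalues in ascending order
E_qp = np.sqrt((eps - mu)**2 + gap**2)    # analytic quasiparticle energy
print(E, "vs +/-", E_qp)

# BCS occupation probability v^2 of this level (standard BCS expression)
v2 = 0.5 * (1.0 - (eps - mu) / E_qp)
print("v^2 =", v2)
```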
Using time-reversal symmetry and appropriate phase conventions, $h$ and $\Delta$ can be taken as real symmetric matrices. In order to derive from the HFB method theoretical quantities corresponding to empirical proton and neutron pairing gaps, the following technique is used. Single-particle energies $\varepsilon_{i}$, pairing gaps $\Delta_{i}$ and occupation probabilities $v_{i}^2$ analogous to those defined in BCS theory are first calculated. They can be derived by either diagonalizing the Hartree-Fock Hamiltonian $h$, or expressing all relevant quantities in the canonical basis [@BloMes]. In the first case, the $\varepsilon_{i}$ are taken as the eigenvalues of $h$, and the $\Delta_{i}$ and $v_{i}^2$ as the diagonal components of $\Delta$ and of the one-body density matrix $\rho$ once they are expressed in the Hartree-Fock representation. In the second case, the $v_{i}^2$ are the eigenvalues of $\rho$, while the $\Delta_{i}$ and $\varepsilon_{i}$ are taken as the diagonal components of $\Delta$ and $h$ in the canonical basis. The two methods have been checked to yield very close single particle energies and practically identical values of the $v_{i}^2$ and $\Delta_{i}$ [@girod]. In the present work, the first method has been employed, and we will assume that the above quantities have their usual physical meaning. When applied separately to each kind of nucleons, single particle quantities denoted $\varepsilon_{i}^{\pi}$, $\Delta_{i}^{\pi}$, ${v_{i}^{\pi}}^2$ for protons, and $\varepsilon_{i}^{\nu}$, $\Delta_{i}^{\nu}$, ${v_{i}^{\nu}}^2$ for neutrons can thus be derived for all nuclei under consideration. Numbers representing the proton and neutron pairing gaps $\Delta^{\pi}$ and $\Delta^{\nu}$ in each nucleus have then been defined in two different ways. On the one hand, we define $\Delta_{last}^{\pi} = \Delta_{i=Z}^{\pi}$ and $\Delta_{last}^{\nu} = \Delta_{i=N}^{\nu}$, where $i=Z$ (resp. $i=N$) is the $Z$th (resp. $N$th) proton (resp. 
neutron) state counted from the deepest one. On the other hand, we define $$\Delta_{aver}^{\pi}= \displaystyle{\sum_{i}} {u_{i}^{\pi}}^2 {v_{i}^{\pi}}^2 \Delta_{i}^{\pi} / \displaystyle{\sum_{i}} {u_{i}^{\pi}}^2 {v_{i}^{\pi}}^2$$ and similarly for neutrons. In the second definition, the individual level gaps $\Delta_{i}$ are averaged out around the Fermi surface, with weights equal to the pair correlation probability of the two nucleons on each level. The purpose of this averaging is to smear out the sometimes large fluctuations ($\approx$ 100 keV) obtained for the individual $\Delta_{i}$’s in the vicinity of the Fermi surface. The gaps derived from the two methods are both functions of the proton and neutron numbers $Z$ and $N$ of the nucleus. Finally, as in Ref. [@Sat98], proton (resp. neutron) pairing gaps averaged over $N$ (resp. $Z$) are defined as $$\label{avera} \overline{\Delta_{type}^{\pi}} (Z) = { 1 \over M}\sum_{N=N_{1},N_{2},...,N_{M}} \Delta_{type}^{\pi} (N,Z), \label{defpmic}$$ where $type$ is either $last$ or $aver$. We have decided to compare the experimental $\Delta^{(3)}$’s with the pairing gaps of Eq. (\ref{defpmic}) instead of theoretical $\Delta^{(3)}$’s because the D1S Gogny force has not been designed to reproduce the masses of odd-even or odd-odd nuclei. Indeed, HFB calculations do not account for particle-vibration coupling (which is known to be responsible for a decrease of a few hundred keV in the masses of odd-even and odd-odd nuclei) but correctly describe the pairing properties of even-even nuclei. The theoretical gaps given by Eq. (\ref{avera}) are displayed in Figs. \[fig1\] and \[fig2\] together with the corresponding experimental data. It is important to mention here that even though the theoretical calculations have been performed for even-even nuclei, we have deliberately plotted the gaps as a function of the odd-$Z$ (resp.
odd-$N$) values (the same as those used in [@Sat98] to extract the experimental data) from which the neighboring even-even nuclei studied theoretically have been selected. In view of the great sensitivity of the gap to all input parameters (force, effective mass, etc.), the overall agreement of the theoretical quantities with experiment is excellent. The $A^{-1/3}$ law in the fits of Figs. \[fig1\] and \[fig2\] has no particularly deep theoretical foundation (see, however, the remarks made in connection with Fig. \[fig3\]), and other $A$ dependences can represent the average trend as well. This average behavior makes it clear, however, that the $A$ dependence of the gaps is much weaker than the $\Delta = 12 \, A^{-1/2}\,$ MeV law previously assumed. This finding is very satisfying, as these theoretical results give further credit to the analysis of experimental data in [@Sat98], which also concluded that the $A$ dependence of the gap is weaker than the $\Delta = 12 \, A^{-1/2}\,$ MeV law. For magic numbers, theoretical gaps go to zero, since there is no pairing. In this case, the $\Delta^{(3)}$ value deduced from experiment does not really describe a pairing effect, but rather an average of single-particle gaps around shell closures. Theoretical $\Delta$’s agree particularly well with experimental ones in mid-shell nuclei, where the experimental $\Delta^{(3)}$’s represent a genuine pairing effect. For these reasons, we did not include in the theoretical average the nuclei having a magic number of protons or neutrons. In Fig. \[fig1\], one notices that the theoretical $\Delta_{aver}$ values overestimate the experimental data. One reason is the absence, in our calculations, of the Coulomb interaction in the pairing field, since including it would require too much computing time. However, we have checked for a couple of nuclei that including it reduces the gap values by 100 to 200 keV, depending on the proton number of the nucleus, thus improving the agreement with the experimental data.
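The weighted average $\Delta_{aver}$ defined above is a one-line computation; a minimal sketch with hypothetical level gaps and occupations near the Fermi surface:

```python
import numpy as np

def delta_aver(delta_i, v2_i):
    """Gap averaged with the pair-correlation weights u_i^2 v_i^2 = (1 - v_i^2) v_i^2."""
    v2 = np.asarray(v2_i, float)
    w = (1.0 - v2) * v2
    return float(np.sum(w * np.asarray(delta_i, float)) / np.sum(w))

# Hypothetical level gaps (MeV) and occupation probabilities:
print(delta_aver([1.0, 1.2, 0.8], [0.9, 0.5, 0.1]))
```

Levels with $v^2$ close to 0 or 1 contribute almost nothing, so the average is dominated by the levels straddling the Fermi surface, which is exactly the smearing effect described in the text.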
Other sources of uncertainty may partly be accounted for through the effective force. This is likely to be the case for the recently debated influence of surface vibrations on nuclear pairing [@prlpair; @jetp99], which is claimed to give a sizeable contribution to nuclear superfluidity. Since, however, the gap values calculated from the D1S Gogny force are quite realistic (see Figs. \[fig1\] and \[fig2\]), it is justified to assume that the Gogny force accounts for such effects at least on average. In view of the good agreement between experiment and theory found in Figs. \[fig1\] and \[fig2\], we further investigate the average trend of the theoretical Gogny-HFB gap values versus $A$. For this purpose, we define $\Delta_{N}$ for a given $N$ as an arithmetic average of the theoretical $\Delta_{aver}$ over several $Z$ values – taken in an interval such that the nucleus $(Z,N)$ belongs either to what we call the stability valley (SV) or to the neutron-rich region (NR). Noticing that the relation [@book] $Z_{s}=A/(1.98+0.0155 A^{2/3})$ almost perfectly defines the most stable nuclei, the SV and NR are defined as the regions $0.94 Z_{s} \leq Z \leq 1.05 Z_{s}$ and $Z \leq 0.94 Z_{s}$, respectively. In order to smooth the curves further, we also average the mean $\Delta_{N}$’s together with the $\Delta_{N \pm 2}$’s and $\Delta_{N \pm 4}$’s. The width $\Delta N=8$ of this last average should be small enough not to affect the mean trends significantly. Finally, this procedure gives us the full black squares in Fig. \[fig3\]. It is important to mention that for nuclei close to the drip lines, the method used to solve the HFB equations does not allow us to include continuum effects. In order to test the validity of our results for such nuclei, we have checked that their pairing properties are stable with respect to a large increase of the harmonic oscillator basis.
This test consists of introducing quite a different representation of the unbound orbitals, and the observed stability therefore indicates that our theoretical $\Delta$’s are not significantly sensitive to continuum effects. From the curve obtained in Fig. \[fig3\](a), one again clearly sees that the old $\Delta = 12 \, A^{-1/2}\,$ MeV law strongly overestimates the average trend, at least for small $A$. We also inserted our least-squares fit assuming a $\Delta_{N}=\alpha + \beta A^{-1/3}$ law. Justification for this choice stems from the weak-coupling approximation for the gap, i.e. $\Delta \propto \exp[-1/(G\rho)]$, where $G \propto 1/A$ is the usual constant pairing matrix element and $\rho \propto A(1+cA^{-1/3})$ the level density at the Fermi energy [@bohr]. Indeed, performing a Taylor expansion in powers of the small parameter $c$ yields the above-mentioned law for $\Delta$. It is also worth mentioning that such a mass dependence of the pairing gap has also been obtained in Ref. [@vogel] (see also Ref. [@moller]). The best-fit values for Fig. \[fig3\](a) are found to be $\alpha=0.3$ and $\beta=3.1$. The calculations indicate slightly different trends for $\Delta_{N}$ in the SV and NR regions. In particular, for Fig. \[fig3\](b) we find $\alpha=0.35$ and $\beta=2.6$. The asymptotic value is rather close to the nuclear matter value $\Delta_{nm}=0.4$ MeV obtained with the Gogny force [@pinf]. The rather large $\Delta_{N}$ values found for large $N$ in Fig. \[fig3\](b) are likely an indication of the increasing role of the neutron skin. Similar tendencies are obtained for the proton gaps (not shown). The different average trends seen in Figs. \[fig2\] and \[fig3\] (in particular for low $N$ values) are due to the fact that in Fig. \[fig2\] all experimentally available data are taken into account, irrespective of whether they correspond to stable or exotic nuclei, whereas in Fig. \[fig3\] the two regions have been sorted out.
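The $\Delta_{N}=\alpha+\beta A^{-1/3}$ fit is ordinary linear least squares in the variable $A^{-1/3}$; a sketch on synthetic data generated from the quoted best-fit values (not the actual HFB gaps):

```python
import numpy as np

def fit_gap_law(A, gaps):
    """Least-squares fit of Delta = alpha + beta * A**(-1/3)."""
    x = np.asarray(A, float) ** (-1.0 / 3.0)
    beta, alpha = np.polyfit(x, np.asarray(gaps, float), 1)
    return alpha, beta

A = np.array([20.0, 60.0, 120.0, 200.0])
gaps = 0.3 + 3.1 * A ** (-1.0 / 3.0)      # synthetic data obeying the law exactly
print(fit_gap_law(A, gaps))                # recovers alpha = 0.3, beta = 3.1
```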
Our choice $\Delta_{N}=\alpha+\beta A^{-1/3}$ is certainly not unique, but the $A^{-1/3}$ dependence, besides having some theoretical justification as explained above, yields overall the best results among the various choices we tried. For example, an improved fit with a different parametrisation can be obtained in Fig. \[fig3\](a), but there is no point in making a separate fit for each figure. The increase of the gap with decreasing size of the nucleus may turn out to be a rather generic feature of mesoscopic and nanoscopic systems. Indeed, in small superconducting metallic grains and in thin superconducting films there also seems to be a tendency towards increasing gap values as the size of the system is reduced [@last]. Whether the physical origin of the effect is the same in all cases remains to be seen. In summary, we investigated the mass dependence of the average gap values for neutrons and protons in large-scale HFB calculations with the Gogny D1S effective interaction. Very good agreement with the experimental filter $\Delta^{(3)}(N=2n+1)$ is found. This indicator was advocated previously [@Sat98] for its capability to eliminate spurious mean-field components from the gap values in an optimal way. The present theoretical study therefore supports the much weaker mass dependence of the gap advanced in [@Sat98], compared with the hitherto accepted $\Delta = 12 \, A^{-1/2}\,$ MeV law. The agreement between the experimental and theoretical size dependence of $\Delta$ is a non-trivial fact, and this study may open similar investigations in other finite superfluid or superconducting systems. This work was supported by the Swedish Institute (SI) and the Polish Committee for Scientific Research (KBN) under contract No. 5 P03B01421, and performed in the framework of the “Groupement de Recherche - Structure des Noyaux Exotiques”. One of us (P.S.) acknowledges useful discussions with F. Hekking. [00]{}
F. Braun et al., Phys. Rev. Lett. 79 (1997) 921.
B. DeMarco and D. S. Jin, Science 285 (1999) 1703.
A. Bohr and B. R. Mottelson, Nuclear Structure, Vol. I, Benjamin, New York (1969).
J. Decharge and D. Gogny, Phys. Rev. C 21 (1980) 1568.
J. F. Berger et al., Comp. Phys. Comm. 63 (1991) 365.
W. Satu[ł]{}a et al., Phys. Rev. Lett. 81 (1998) 3599.
H. A. Jahn and E. Teller, Proc. Roy. Soc. A161 (1937) 220.
N. Zeldes et al., Mat. Fys. Skr. Dan. Vid. Selsk. 3 (1967) No. 5.
A. S. Jensen et al., Nucl. Phys. A 431 (1984) 393.
M. Manninen et al., Z. Phys. D 31 (1994) 259.
J. Dobaczewski et al., Phys. Rev. C 63 (2001) 024308.
W. Satu[ł]{}a, AIP Conference Proceedings 481, ed. C. Baktash, p. 141; nucl-th/0003019.
M. Bender et al., Eur. Phys. J. A8 (2000) 59.
T. Duguet et al., nucl-th/0105049 v2.
T. Duguet et al., nucl-th/0105050 v1.
N. N. Bogoliubov, Sov. Phys. JETP 7 (1958) 41.
C. Bloch and A. Messiah, Nucl. Phys. 39 (1962) 95.
M. Girod and B. Grammaticos, Phys. Rev. C 27 (1983) 2317.
F. Barranco et al., Phys. Rev. Lett. 83 (1999) 2147.
A. V. Avdeenkov and S. P. Kamerdzhiev, JETP Lett. 69 (1999) 715.
P. Marnier and E. Sheldon, Physics of Nuclei and Particles, Part I, Academic Press Inc., London (1969), p. 36.
P. Vogel et al., Phys. Lett. B 139 (1984) 227.
P. Möller and J. R. Nix, Nucl. Phys. A 520 (1990) 369c.
H. Kucharek et al., Phys. Lett. B 216 (1989) 249.
C. T. Black et al., Phys. Rev. Lett. 76 (1996) 688, and references therein.
--- abstract: 'The real Scarf II potential is discussed as a radial problem. This potential has been studied extensively as a one-dimensional problem, and now these results are used to construct its bound and resonance solutions for $l=0$ by setting the origin at some arbitrary value of the coordinate. The solutions with appropriate boundary conditions are composed as the linear combination of the two independent solutions of the Schrödinger equation. The asymptotic expression of these solutions is used to construct the $S_0(k)$ $s$-wave $S$-matrix, the poles of which supply the $k$ values corresponding to the bound, resonance and anti-bound solutions. The location of the discrete energy eigenvalues is analyzed, and the relation of the solutions of the radial and one-dimensional Scarf II potentials is discussed. It is shown that the generalized Woods–Saxon potential can be generated from the Rosen–Morse II potential in the same way as the radial Scarf II potential is obtained from its one-dimensional correspondent. Based on this analogy, possible applications are also pointed out.' address: - ' Institute for Nuclear Research, Hungarian Academy of Sciences (Atomki), Debrecen, Pf. 51, Hungary 4001' - 'Faculty of Informatics, University of Debrecen, Debrecen, Pf. 400, Hungary 4002' author: - 'G. Lévai' - 'Á. Baran' - 'P. Salamon' - 'T. Vertse' title: Analytical solutions for the radial Scarf II potential --- Scarf II potential ,Analytical solutions ,radial potentials ,$S$-matrix ,bound states ,resonances 03.65.Ge,03.65.Nk,02.30.Gp,02.30.Ik,24.10.Ht Introduction ============ Exactly solvable quantum mechanical potentials proved to be invaluable tools in the understanding of many fundamental quantum mechanical concepts. In particular, they give insight into complex phenomena, like the symmetries of quantum mechanical systems, and they allow the investigation of transitions through critical parameter domains. 
Besides this, analytical solutions serve as a firm basis for the development of numerical techniques. The one-dimensional Schrödinger equation $$-\psi''(x)+V(x)\psi(x)=E\psi(x) \label{scheq}$$ occurs in many applications. Here the potential function and the energy eigenvalue are defined such that they contain the reduced mass $m$ and $\hbar$ as $V(x)=2mv(x)/\hbar^2$ and $E=2m \epsilon/\hbar^2$, so their physical dimension is distance$^{-2}$. In the simplest case (\[scheq\]) is defined on the full $x$ axis, i.e. $x\in(-\infty,\infty)$, while for spherical potentials defined in higher dimensions (typically three), Eq. (\[scheq\]) can be obtained after the separation of the angular variables, if only the $s$-wave ($l=0$) solutions are considered. In this case the problem is defined on the positive half axis, $r\in[0,\infty)$, and the $x$ variable is denoted by $r$. Besides these options, (\[scheq\]) can also be defined on finite sections of the real $x$ axis, or even on more complicated trajectories of the complex $x$ plane, but we shall not consider these in the present work. Being a second-order ordinary differential equation, (\[scheq\]) has two independent solutions, and the physical solutions can be obtained as a linear combination of these, satisfying the appropriate boundary conditions. Due to normalizability, bound states have to vanish at the boundaries (i.e. $x=\pm\infty$ in one dimension, and $r=0$ and $r=\infty$ in the radial case). Unbound solutions, e.g. scattering and resonance solutions, also have to satisfy asymptotic boundary conditions, depending on the nature of the potential. If $V(x)$ vanishes exactly or exponentially for $x\rightarrow\pm\infty$, then these solutions of the one-dimensional problem have exponential asymptotic components $\exp(\pm {\rm i}kx)$, where $E=k^2$. In the radial case the same asymptotics are valid for $r\rightarrow\infty$, while for $r=0$ these solutions have to vanish.
There are some potentials that are defined both as one-dimensional and as radial problems, e.g. the harmonic oscillator. The bound-state solutions of these two problems are related to each other in a special way: the odd wave functions of the one-dimensional potential, which vanish at $x=0$, are identical for $x\ge 0$ to the $s$-wave ($l=0$) radial wave functions, and the energy eigenvalues are also identical. A rather effective way for the unified discussion of bound, scattering and resonance solutions in asymptotically vanishing potentials is the application of the transmission coefficient $T(k)$ (in one dimension) and the $s$-wave $S$-matrix $S_0(k)$ (in the radial case). These quantities can be constructed from the asymptotic solutions, and their poles correspond to the bound, anti-bound and resonance states. From the exact solutions of these problems $T(k)$ and $S_0(k)$ can also be expressed in closed analytic form. Here we discuss the Scarf II potential as a radial problem. This potential has two independent terms and belongs to the shape-invariant [@gendenshtein] subclass of the Natanzon potential class [@natanzon], which contains problems with bound-state solutions written in terms of a single hypergeometric function. The first reference to potential (\[sciipot\]) in the English literature occurred in 1983 in Ref. [@gendenshtein], so it is sometimes referred to as the Gendenshtein potential. However, it was already mentioned a year earlier in a Russian monograph [@natpriv]. Its detailed description was presented only later; e.g., the normalization coefficients of its bound-state solutions have been calculated only recently [@pla02]. The transmission and reflection coefficients have been given in Ref. [@ks88], with corrections added in Ref. [@jpa01]. It has been a favourite toy model in ${\cal PT}$-symmetric quantum mechanics, where it was used to demonstrate the breakdown of ${\cal PT}$ symmetry [@Ahm01a; @mpla01a].
Further studies concerned its algebraic [@BQ00; @jpa01] and scattering aspects [@jpa01], the combined effects of SUSYQM and ${\cal PT}$ symmetry [@jpa02b], the pseudo-norm of its bound states [@pla02], the handedness (chirality) effects in scattering [@Ahm04], spectral singularities [@Ahm09], unidirectional invisibility [@Ahm13] and the accidental crossing of its energy levels [@Ahm15]. Despite its prominent status as a one-dimensional quantum system, the Scarf II potential has not been considered yet as a radial problem. Here we fill this gap by introducing a lower cut at a certain $x=r_0$ value and prescribing the appropriate boundary conditions. We construct the $S$-matrix for the $s$-wave solutions, $S_0(k)$, and determine its poles on the complex $k$ plane to identify its bound, anti-bound and resonance solutions. This will be done in Sec. \[radscii\], following the discussion of the one-dimensional problem for reference in Sec. \[1dscii\]. In Sec. \[gwsanalogy\] the analogy with the case of the generalized Woods–Saxon and the Rosen–Morse II potentials will be outlined, and possible applications are pointed out. Finally the results are summarized in Sec. \[summary\]. The Scarf II potential in one dimension {#1dscii} ======================================= A possible parametrization of this potential is [@jpa02b] $$V(x)=-\frac{V_1}{\cosh^2(cx)} +\frac{V_2\sinh(cx)}{\cosh^2(cx)}\ , \label{sciipot}$$ where $$V_1=c^2\left(\frac{\alpha^2+\beta^2}{2}-\frac{1}{4}\right)\hspace{2cm} V_2={\rm i}c^2\frac{\beta^2-\alpha^2}{2}\ , \label{v1v2}$$ and $c>0$ is a scaling factor of the coordinate. This potential is real if $\alpha^*=\beta$ holds, while it is ${\cal PT}$-symmetric if $\alpha$ and $\beta$ are real or imaginary. In what follows we consider the real version only. Potential (\[sciipot\]) is depicted in Fig. \[fig1\] for some values of the parameters. 
It has a minimum $x_-$ and a maximum $x_+$ at $$x_{\pm}=c^{-1}\sinh^{-1}\left[\frac{V_1}{V_2} \pm \left[\left(\frac{V_1}{V_2}\right)^2+1\right]^{1/2}\right]\ . \label{xpxm}$$ The potential reflected by $x=0$ can be constructed easily by considering $V_2\rightarrow -V_2$, i.e. $\alpha\leftrightarrow\beta$. The bound-state wave functions are $$\psi_n(x) =C_n (1-{\rm i}\sinh(cx))^{ \frac{\alpha}{2}+\frac{1}{4}} (1+{\rm i}\sinh(cx))^{\frac{\beta}{2}+\frac{1}{4}} P_n^{(\alpha,\beta)}({\rm i}\sinh(cx))\ , \label{sciiwf}$$ while the corresponding energy eigenvalues are written as $$E_n= -c^2\left(n+\frac{\alpha+\beta+1}{2}\right)^2\ . \label{sciien}$$ Normalizability of (\[sciiwf\]) requires $$n<-\frac{1}{2}[{\rm Re}(\alpha+\beta)+1]\ . \label{sciincond}$$ $C_n$ in (\[sciiwf\]) was calculated for the real and the ${\cal PT}$-symmetric version of the Scarf II potential in Ref. [@pla02]. In the former case it can be written as $$C_n=2^{-\frac{\alpha+\beta}{2}-1} \left[c\frac{\Gamma(-\alpha-n)\Gamma(-\beta-n)(-\alpha-\beta-2n-1)n! }{ \Gamma(-\alpha-\beta-n)\pi } \right]^{1/2}\ . \label{Cn}$$ Note that although $\alpha$ and $\beta$ are complex, $C_n$ is real due to $\alpha=\beta^*$ and Eq. (\[sciincond\]). It can also be proven that the bound-state wave functions (\[sciiwf\]) are real for even $n$, and imaginary for odd $n$: this can be demonstrated by expressing the complex conjugate of (\[sciiwf\]), which turns out to be $[\psi_n(x)]^*=(-1)^n \psi_n(x)$, due to Eq. 22.4.1 of Ref. [@AS70]. With a reparametrization, the notation of Ref. [@jpa01] can be obtained, in which the scattering aspects of the one-dimensional Scarf II potential have been discussed. Taking $$\alpha=-s-\frac{1}{2}-{\rm i}\lambda\ , \hspace{1cm} \beta=-s-\frac{1}{2}+{\rm i}\lambda \label{ab}$$ one obtains $$\begin{aligned} V_1=c^2[s(s+1)-\lambda^2] \hspace{1cm} V_2=c^2(2s+1)\lambda \label{v1v2sl}\end{aligned}$$ in Eq. (\[sciipot\]). 
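In the $(s,\lambda)$ parametrization the spectrum (\[sciien\]) becomes $E_n=-c^2(s-n)^2$; together with the potential (\[sciipot\]) and its extrema (\[xpxm\]) this is easy to evaluate numerically. A minimal sketch (all parameter values below are arbitrary):

```python
import numpy as np

def scarf2(x, V1, V2, c=1.0):
    """Real Scarf II potential, Eq. (sciipot)."""
    return (-V1 + V2 * np.sinh(c * x)) / np.cosh(c * x) ** 2

def extrema(V1, V2, c=1.0):
    """Positions x_- (minimum) and x_+ (maximum, for V2 > 0), Eq. (xpxm)."""
    r = V1 / V2
    return (np.arcsinh(r - np.hypot(r, 1.0)) / c,
            np.arcsinh(r + np.hypot(r, 1.0)) / c)

def bound_energy(n, s, c=1.0):
    """E_n = -c^2 (s - n)^2, valid for n < s."""
    return -(c * (s - n)) ** 2

# Check the closed-form extrema against a numerical derivative:
V1, V2 = 2.0, 1.0
for x0 in extrema(V1, V2):
    dV = (scarf2(x0 + 1e-6, V1, V2) - scarf2(x0 - 1e-6, V1, V2)) / 2e-6
    assert abs(dV) < 1e-5

print(bound_energy(0, 2.5))   # -6.25
```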
According to (\[ab\]), the Scarf II potential will be real for real values of $s$ and $\lambda$. Note that this potential remains invariant if the signs of $s+1/2$ and $\lambda$ are reversed simultaneously. This means that without loss of generality one can require $s>-1/2$. Condition (\[sciincond\]) is now $n<{\rm Re}(s)=s$, so in order to obtain normalizable states one needs $s>0$. It is notable that $E_n$ in (\[sciien\]) depends only on $s=-(\alpha+\beta+1)/2$, and is independent of $\lambda={\rm i}(\alpha-\beta)/2$. In Ref. [@jpa01] the general solutions of the Schrödinger equation with the potential (\[sciipot\]) and (\[v1v2sl\]) are expressed in terms of hypergeometric functions as $$\begin{aligned} F_1(x) &&= (1-{\rm i}\sinh(cx))^{-\frac{s+{\rm i}\lambda}{2}} (1+{\rm i}\sinh(cx))^{-\frac{s-{\rm i}\lambda}{2}} \nonumber\\ &&\times _2F_1(-s-{\rm i}k/c, -s+{\rm i}k/c; {\rm i}\lambda-s+1/2; (1+{\rm i}\sinh(cx))/2) \label{sol1}\end{aligned}$$ and $$\begin{aligned} F_2(x) &&= A(1-{\rm i}\sinh(cx))^{-\frac{s+{\rm i}\lambda}{2}} (1+{\rm i}\sinh(cx))^{\frac{s+1-{\rm i}\lambda}{2}} \nonumber\\ &&\times _2F_1(1/2-{\rm i}\lambda-{\rm i}k/c, 1/2-{\rm i}\lambda+{\rm i}k/c; s+3/2-{\rm i}\lambda; (1+{\rm i}\sinh(cx))/2)\ . \nonumber\\ \label{sol2}\end{aligned}$$ Note that (\[sol2\]) is obtained from (\[sol1\]) by Eq. 15.5.4 of Ref. [@AS70], where $A=2^{{\rm i}\lambda -s -1/2}$. Note also that the two solutions are connected by the $s+1/2\leftrightarrow {\rm i}\lambda$ transformation, which leaves (\[v1v2sl\]), and thus (\[sciipot\]), invariant. The asymptotic behavior of (\[sol1\]) and (\[sol2\]) can be obtained by applying 15.3.4, 15.3.5 and 15.3.6 of Ref.
[@AS70], and the results are $$\begin{aligned} \lim_{x\rightarrow\infty} F_1(x) &=& a_{1+}\exp({\rm i}kx)+b_{1+}\exp(-{\rm i}kx) \label{ab1+}\\ \lim_{x\rightarrow -\infty} F_1(x) &=& a_{1-}\exp({\rm i}kx)+b_{1-}\exp(-{\rm i}kx) \label{ab1-}\\ \lim_{x\rightarrow\infty} F_2(x) &=& a_{2+}\exp({\rm i}kx)+b_{2+}\exp(-{\rm i}kx) \label{ab2+}\\ \lim_{x\rightarrow -\infty} F_2(x) &=& a_{2-}\exp({\rm i}kx)+b_{2-}\exp(-{\rm i}kx)\ , \label{ab2-}\end{aligned}$$ where $$\begin{aligned} a_{1+}=D_1 2^{-s-2{\rm i}k/c}{\rm e}^{\pi(k/c-\lambda-{\rm i}s)/2} && b_{1+}=C_1 2^{-s+2{\rm i}k/c}{\rm e}^{\pi(-k/c-\lambda-{\rm i}s)/2} \label{ab1p}\\ a_{1-}=b_{1+}{\rm e}^{\pi(k/c+\lambda+{\rm i}s)} && b_{1-}=a_{1+}{\rm e}^{\pi(-k/c+\lambda+{\rm i}s)} \label{ab1m}\\ a_{2+}=D_2 2^{-s-2{\rm i}k/c}{\rm e}^{\pi(k/c+\lambda+{\rm i}(s+1))/2} \hspace{.4cm} && b_{2+}=C_2 2^{-s+2{\rm i}k/c}{\rm e}^{\pi(-k/c+\lambda+{\rm i}(s+1))/2} \nonumber\\ \label{ab2p}\\ a_{2-}=b_{2+}{\rm e}^{\pi(k/c-\lambda-{\rm i}(s+1))} && b_{2-}=a_{2+}{\rm e}^{\pi(-k/c-\lambda-{\rm i}(s+1))}\ , \label{ab2m}\end{aligned}$$ and $$\begin{aligned} C_1=\frac{\Gamma({\rm i}\lambda+1/2-s)\Gamma(-2{\rm i}k/c)}{ \Gamma({\rm i}\lambda+1/2-{\rm i}k/c)\Gamma(-s-{\rm i}k/c)} && D_1=\frac{\Gamma({\rm i}\lambda+1/2-s)\Gamma(2{\rm i}k/c)}{ \Gamma({\rm i}\lambda+1/2+{\rm i}k/c)\Gamma(-s+{\rm i}k/c)} \nonumber\\ \label{cd1}\\ C_2=\frac{\Gamma(-{\rm i}\lambda+3/2+s)\Gamma(-2{\rm i}k/c)}{ \Gamma(s+1-{\rm i}k/c)\Gamma(-{\rm i}\lambda+1/2-{\rm i}k/c)} \hspace{.4cm} && D_2=\frac{\Gamma(-{\rm i}\lambda+3/2+s)\Gamma(2{\rm i}k/c)}{ \Gamma(s+1+{\rm i}k/c)\Gamma(-{\rm i}\lambda+1/2+{\rm i}k/c)} \nonumber\\ \label{cd2}\end{aligned}$$ For a wave traveling to the right the transmission and reflection coefficients are expressed as [@ks88; @jpa01] $$\begin{aligned} T(k)&=&\frac{a_{1+}b_{2+}-b_{1+}a_{2+}}{a_{1-}b_{2+}-b_{1+}a_{2-}} \nonumber\\ &=& \frac{\Gamma(-s-{\rm i}k/c)\Gamma(s+1-{\rm i}k/c) \Gamma({\rm i}\lambda+1/2-{\rm i}k/c)\Gamma(-{\rm 
i}\lambda+1/2-{\rm i}k/c) }{\Gamma(-{\rm i}k/c)\Gamma(1-{\rm i}k/c)\Gamma^2(1/2-{\rm i}k/c)} \nonumber\\ \label{tk} \\ R(k)&=&\frac{b_{1-}b_{2+}-b_{1+}b_{2-}}{a_{1-}b_{2+}-b_{1+}a_{2-}} \nonumber\\ &=& T(k)\left(\frac{\cos(\pi s)\sinh(\pi\lambda)}{\cosh(\pi k/c)} +{\rm i}\frac{\sin(\pi s)\cosh(\pi\lambda)}{\sinh(\pi k/c)}\right)\ . \nonumber\\ \label{rk} \end{aligned}$$ The poles of $T(k)$ are located at $-n=-s-{\rm i}k/c$, $-n=s+1-{\rm i}k/c$, $-n=-{\rm i}\lambda+1/2-{\rm i}k/c$ and $-n={\rm i}\lambda+1/2-{\rm i}k/c$. The first choice corresponds to the energy eigenvalues $$E_n=-c^2(s-n)^2 \label{sciiensl}$$ in accordance with (\[sciincond\]), and converts $F_1(x)$ in (\[sol1\]) into (\[sciiwf\]) (up to the constant factor $(-1)^n n! \Gamma(\beta+1)[\Gamma(\beta+n+1)C_n]^{-1}$) after applying Eqs. 15.3.6 and 22.5.42 of Ref. [@AS70]. The second one stands for anti-bound or virtual states with $k$ located on the negative imaginary axis, while the last two poles correspond to non-normalizable complex-energy states, i.e. resonances with $E_n=k^2=-c^2(n-1/2\pm {\rm i}\lambda)^2$. The Scarf II potential as a radial problem {#radscii} ========================================== In this case the general wave function is constructed from the linear combination of the two independent solutions (\[sol1\]) and (\[sol2\]) with the boundary condition that it vanish at the origin. The position of the origin need not be chosen at $x=0$; rather, one can cut the one-dimensional potential (\[sciipot\]) at an arbitrary finite value. Let us thus define $x=r+r_0$, where $r\in[0,\infty)$, i.e. $x\in [r_0,\infty)$. Figure \[fig1\] displays three possible radial Scarf II potentials with the origin corresponding to various values of $x=r_0$. The general solution $$\psi(r)=F_1(x)+C F_2(x) \label{gensol}$$ should vanish at $x=r_0$, which defines the constant $C$ as $$C=-F_1(r_0)/F_2(r_0)\ .
\label{orig}$$ The asymptotic behavior of the solution has to be inspected only for $r\rightarrow \infty$, and the $S$-matrix for $l=0$ can be obtained from $$\lim_{r\rightarrow\infty} \psi(r) = \exp(-{\rm i}kr) -S_0(k)\exp({\rm i}kr)\ . \label{smatrix}$$ Making use of Eqs. (\[ab1+\]) and (\[ab2+\]) of the one-dimensional problem, the $S$-matrix can be constructed as $$S_0(k)=-\frac{a_{1+}+C a_{2+}}{b_{1+}+C b_{2+}}\ . \label{scarfiism1}$$ After some straightforward algebra one obtains $$\begin{aligned} S_0(k)=&&-2^{-4{\rm i}k/c} \exp(\pi k/c) \frac{\Gamma(2{\rm i}k/c)}{\Gamma(-2{\rm i}k/c)} \nonumber \\ &&\times \left[ \frac{\Gamma({\rm i}\lambda+1/2-s)}{ \Gamma({\rm i}\lambda+1/2+{\rm i}k/c)\Gamma(-s+{\rm i}k/c)} +{\rm i}C \frac{\Gamma(-{\rm i}\lambda+3/2+s)\exp(\pi(\lambda+{\rm i}s))}{ \Gamma(s+1+{\rm i}k/c)\Gamma(-{\rm i}\lambda+1/2+{\rm i}k/c)} \right] \nonumber \\ &&\times \left[ \frac{\Gamma({\rm i}\lambda+1/2-s)}{ \Gamma({\rm i}\lambda+1/2-{\rm i}k/c)\Gamma(-s-{\rm i}k/c)} +{\rm i}C \frac{\Gamma(-{\rm i}\lambda+3/2+s)\exp(\pi(\lambda+{\rm i}s))}{ \Gamma(s+1-{\rm i}k/c)\Gamma(-{\rm i}\lambda+1/2-{\rm i}k/c)} \right]^{-1} \nonumber\\ \label{scarfiism}\end{aligned}$$ where $$C=-\frac{(1+{\rm i}\sinh(cr_0))^{-s+{\rm i}\lambda-1/2}\ _2F_1(-s-{\rm i}k/c, -s+{\rm i}k/c; {\rm i}\lambda-s+1/2; (1+{\rm i}\sinh(cr_0))/2) }{A _2F_1(1/2-{\rm i}\lambda-{\rm i}k/c, 1/2-{\rm i}\lambda+{\rm i}k/c; s+3/2-{\rm i}\lambda; (1+{\rm i}\sinh(cr_0))/2) }\ . \label{cconst}$$ The $S$-matrix of Ref. [@npa96] is recovered in the special case of $c=1$, $\lambda=0$ and $r_0=0$. In that case the radial wave functions are obtained from the odd-$n$ solutions of the one-dimensional problem that vanish at the origin. The poles of the $S$-matrix displayed for the various $r_0$ used in Fig. \[fig1\] are shown in Fig. \[2-9\]. The solutions of the one-dimensional and the radial problems can be related to each other in various ways.
First, if $r_0$ is defined to be at a node of a particular wave function $\psi_n(x)$ of the one-dimensional problem, then Eq. (\[gensol\]) implies that $\psi_n(r_0)=0$ can occur only for $C=0$, i.e. the solution of the radial problem will be the corresponding solution of the one-dimensional problem, defined for $x\ge r_0$. Furthermore, the energy eigenvalues (and $k$) will also be the same. This scenario is illustrated by the $E_3$ excited state of the one-dimensional problem in Table 1: $r_0=-2.017825$ coincides with the first of the three nodes of $\psi_3(x)$ in (\[sciiwf\]), so this function will also act as the second excited state ($n=2$) of the radial problem with the same energy eigenvalue ($E_2=-0.64$), since it has two more nodes. The unnormalized bound-state wave functions of this potential are displayed in Fig. \[fig3\]. Obviously, the remaining solutions of the radial problem cannot be calculated in the same way. The situation is analogous to the case of the harmonic oscillator, where some solutions of the radial problem can be generated from those of the one-dimensional problem. Another relation follows in situations when $r_0$ is defined at a large enough negative value, where the bound-state wave functions of the one-dimensional problem are close to zero. In this case the boundary condition at $r_0$ implies that the second term of (\[gensol\]) should also be small in magnitude ($C$ will be small), so the solution will be dominated by $F_1(x)$, i.e. the bound-state solution of the one-dimensional problem. This also means that the energy eigenvalues of the radial problem will also be close to those of the one-dimensional problem. A simple test for some parameters is displayed in Table \[calcener\]. It is seen that the energy eigenvalues match reasonably well, and the agreement gets better with $r_0\rightarrow-\infty$ and for lower values of the $n$ quantum number, i.e. in situations when the magnitude of $\psi_n(r_0)$ is smaller. 
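The near-coincidence of the radial and one-dimensional energies for a deep cut can be reproduced with a simple shooting calculation. The sketch below uses arbitrary parameters ($s=3.5$, $\lambda=1$, $c=1$, $r_0=-8$, not those of the table), a Numerov integrator and bisection in the energy; the lowest eigenvalue then comes out very close to the one-dimensional $E_0=-c^2s^2=-12.25$:

```python
import numpy as np

def shoot(E, s=3.5, lam=1.0, c=1.0, r0=-8.0, xmax=12.0, n=2001):
    """Numerov integration of -psi'' + V psi = E psi outward from psi(r0)=0;
    psi(xmax) vanishes (approximately) at the eigenvalues of the cut problem."""
    V1, V2 = c * c * (s * (s + 1) - lam * lam), c * c * (2 * s + 1) * lam
    x = np.linspace(r0, xmax, n)
    h = x[1] - x[0]
    g = (-V1 + V2 * np.sinh(c * x)) / np.cosh(c * x) ** 2 - E   # psi'' = g psi
    psi = np.zeros(n)
    psi[1] = h                                                   # psi(r0)=0, slope 1
    for i in range(1, n - 1):
        psi[i + 1] = ((2 + 5 * h * h * g[i] / 6) * psi[i]
                      - (1 - h * h * g[i - 1] / 12) * psi[i - 1]) \
                     / (1 - h * h * g[i + 1] / 12)
    return psi[-1]

def eigenvalue(Elo, Ehi, **kw):
    """Bisect the shooting function between two energies bracketing one level."""
    flo = shoot(Elo, **kw)
    for _ in range(50):
        Em = 0.5 * (Elo + Ehi)
        fm = shoot(Em, **kw)
        if fm * flo > 0:
            Elo, flo = Em, fm
        else:
            Ehi = Em
    return 0.5 * (Elo + Ehi)

print(eigenvalue(-13.0, -11.5))   # close to the one-dimensional E_0 = -12.25
```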
Considering that the resonance solutions do not vanish asymptotically, the same argumentation cannot be applied to them. Consequently, the resonance energies of the one-dimensional and the radial Scarf II potential differ from each other significantly. Relation to the generalized Woods–Saxon potential and possible applications {#gwsanalogy} =========================================================================== It can be noted that the radial Scarf II potential can be brought to a form that is close to the notation of the generalized Woods–Saxon potential. Applying the variable transformation $$cx=c(r-R)\equiv \frac{r-R}{2a} \label{vartr}$$ in (\[sciipot\]), the equivalent form $$\begin{aligned} V(x)&=&-4V_1\frac{\exp((r-R)/a)}{[1+\exp((r-R)/a)]^2} \nonumber\\ &&+2V_2\left[\frac{\exp((r-R)/(2a))}{1+\exp((r-R)/a)} -2\frac{\exp((r-R)/(2a))}{[1+\exp((r-R)/a)]^2} \right] \label{sciipottr}\end{aligned}$$ is obtained. $r_0$ corresponds to $-R$, so the radial version of the potential can be obtained by defining the origin at the negative value $r_0=-R$. In fact, the same variable transformation relates the generalized Woods–Saxon potential with the Rosen–Morse II potential [@jpa09b] $$V(x)=-\frac{U_1}{\cosh^2(cx)}+U_2\tanh(cx)\ . \label{rmiipot}$$ After applying the (\[vartr\]) transformation one obtains $$V(r)= -4U_1 \frac{\exp((r-R)/a)}{[1+\exp((r-R)/a)]^2} -U_2\frac{2}{1+\exp((r-R)/a)} +U_2\ , \label{rmiipotws}$$ which is the Woods–Saxon potential with a shifted energy scale. Note that the first terms of (\[sciipottr\]) and (\[rmiipotws\]) are the same, and also that the diffuseness parameter $a$ is related to $c$ via (\[vartr\]). The first “surface” term of (\[rmiipotws\]) (and thus of (\[rmiipot\])) can be obtained as the first derivative of the second “volume” term. There is no similar relation for the Scarf II potential; however, the second term of (\[sciipot\]) can be obtained from the first-order derivative of $(\cosh(cx))^{-1}$, i.e.
the square root of the first term. Obviously, a similar relation holds for (\[sciipottr\]) too. It should be noted that the Rosen–Morse II potential is defined on the full real $x$ axis, so in order to obtain the Woods–Saxon potential from it, one should consider $r\in [0,\infty)$ in (\[vartr\]). This means that the two solutions have to be matched at $r=0$ in a way similar to that considered previously for the Scarf II potential. In fact, the procedure applied there is the same as that described in the notable work of Bencze [@bencze], where the $S$-matrix of the Woods–Saxon potential was constructed in an analytical form. See also Ref. [@npa16] as a more accessible source of the formulas. Note that in the one-dimensional Rosen–Morse II potential the normalizable states are expressed in terms of only one of the two independent solutions (similarly to the case of the one-dimensional Scarf II potential), and they exist only for $U_1>0$ [@jpa09b]. In contrast with this, in the radial problem, this “surface” term usually plays the role of a barrier, i.e. $U_1<0$, and the attractive component of the potential is represented by the “volume” term with $U_2>0$. The relation between the Rosen–Morse II and the generalized Woods–Saxon potential has been pointed out in Ref. [@berkd], where analytical expressions were given for the $l=0$ bound-state wave functions and the corresponding energy eigenvalues. However, those wave functions do not vanish at the origin (their structure is similar to the wave functions of the one-dimensional Rosen–Morse II potential), so that approach can only be considered an approximation. Given the analogy with the generalized Woods–Saxon potential, possible applications of the radial Scarf II potential can be envisaged in nuclear physics, for example. The barrier in Fig. \[fig1\] can simulate the effects of the Coulomb barrier that occurs when charged particles (e.g. protons or $\alpha$-particles) interact with a nucleus.
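The equivalence of the hyperbolic form (\[rmiipot\]) and the Woods–Saxon form (\[rmiipotws\]) under the transformation (\[vartr\]) is easy to verify numerically; a minimal sketch with illustrative parameter values (the specific $U_1$, $U_2$, $R$, $a$ below are not taken from the paper):

```python
import math

# Check that the Rosen-Morse II potential, evaluated with c x = (r - R)/(2a),
# coincides pointwise with the (shifted) Woods-Saxon form.
U1, U2, R, a = 5.0, 3.0, 4.0, 0.65   # illustrative values only
c = 1.0 / (2.0 * a)                  # from c x = c(r - R) = (r - R)/(2a)

def rosen_morse(r):
    x = r - R
    return -U1 / math.cosh(c * x)**2 + U2 * math.tanh(c * x)

def woods_saxon(r):
    u = (r - R) / a
    return (-4.0 * U1 * math.exp(u) / (1.0 + math.exp(u))**2
            - 2.0 * U2 / (1.0 + math.exp(u)) + U2)

for r in [0.0, 1.0, 3.5, 4.0, 7.0]:
    assert abs(rosen_morse(r) - woods_saxon(r)) < 1e-12
```

The identities behind this are $1/\cosh^2(u/2)=4e^u/(1+e^u)^2$ and $\tanh(u/2)=1-2/(1+e^u)$.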
The situation is qualitatively similar to the case of the generalized Woods–Saxon potential, which has a similar barrier, and the difference occurs within the nucleus, where the latter potential is constant, while the radial Scarf II potential has a clear minimum, and depending on $r_0$ it can increase close to the origin. Potentials of this kind might be relevant to nuclei exhibiting depleted central nucleon density. Such “bubble” nuclei have been observed in recent experiments [@bubble]. It is also possible to shift the barrier [*inside*]{} the nucleus by formally reflecting the potential curve in Fig. \[fig1\] about $x=0$ and defining the origin near the potential maximum. Staying with the original formalism, this corresponds to considering $V_2<0$, i.e. taking $\alpha\leftrightarrow\beta$ in (\[v1v2\]) or $\lambda\rightarrow -\lambda$ in (\[v1v2sl\]). Potentials with such a shape occur in hypernuclei, where the interaction of $\Lambda$ particles with $\alpha$ particles or nucleons requires the presence of a soft repulsive core with variable height [@daska]. Finally, the formalism can be extended to the case of optical potentials with complex values of $V_1$ and $V_2$: the calculation of the wave functions, the $S$-matrix and the energy eigenvalues, including those of the resonances, proceeds in the same way. Summary ======= Based on the results of the one-dimensional Scarf II potential, the radial version of this potential was studied. For this, the origin was defined at an arbitrary value on the real $x$ axis, and the $s$-wave solutions were constructed from the two independent solutions of the one-dimensional Schrödinger equation, after prescribing the appropriate boundary conditions. The asymptotic form of these solutions was used to construct the $S_0(k)$ $S$-matrix. The poles of $S_0(k)$ were located, and were identified with the bound, anti-bound and resonance solutions.
It was shown that by selecting the origin far enough from the potential minimum, the bound-state energy eigenvalues and wave functions of the radial potential tended to those of the one-dimensional potential. Furthermore, when the origin was selected at a node of some bound-state wave function of the one-dimensional potential, this wave function appeared as a bound-state wave function of the radial potential with the same energy eigenvalue. With a slightly modified parametrization, the radial Scarf II potential could be compared with the generalized Woods–Saxon potential, and it was shown that they share a term (the “surface” term of the latter potential). In fact, it was demonstrated that the radial Scarf II potential can be generated from the one-dimensional Scarf II potential in the same way as the generalized Woods–Saxon potential is generated from the one-dimensional Rosen–Morse II potential. The connection between the latter two potentials has been known before [@berkd]; however, the bound-state wave functions generated from this connection did not satisfy the appropriate boundary conditions. Based on its similarity with the generalized Woods–Saxon potential, the radial Scarf II potential could be applied in nuclear physics, for example. One possibility is considering problems that are characterized by a barrier at the surface of the nucleus, but in which the flat potential inside the nucleus is replaced with a potential well with a clear minimum. Another option is placing the barrier inside the nucleus near the origin, simulating a repulsive interaction there. The formalism can be extended to the case of complex values of $V_1$ and $V_2$, i.e. to optical potentials. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the Hungarian Scientific Research Fund – OTKA, grant No. K112962. [00]{} L. E. Gendenshtein, [*Zh. Eksp. Teor. Fiz. Pis. Red.*]{} [**38**]{} (1983) 299 (Eng. transl. [*JETP Lett.*]{} [**38**]{} (1983) 356). G. A.
Natanzon, [*Teor. Mat. Fiz.*]{} [**38**]{} (1979) 146. G. Natanson, private communication (2011). G. Lévai, F. Cannata and A. Ventura, [*Phys. Lett. A*]{} [**300**]{} (2002) 271. A. Khare and U. P. Sukhatme, [*J. Phys. A: Math. Gen.*]{} [**21**]{} (1988) L501. G. Lévai, F. Cannata and A. Ventura, [*J. Phys. A: Math. Gen.*]{} [**34**]{} (2001) 839. Z. Ahmed, [*Phys. Lett. A*]{} [**282**]{} (2001) 343. G. Lévai and M. Znojil, [*Mod. Phys. Lett. A*]{} [**16**]{} (2001) 1973. B. Bagchi and C. Quesne, [*Phys. Lett. A*]{} [**273**]{} (2000) 285. G. Lévai and M. Znojil, [*J. Phys. A: Math. Gen.*]{} [**35**]{} (2002) 8793. Z. Ahmed, [*Phys. Lett. A*]{} [**324**]{} (2004) 152. Z. Ahmed, [*J. Phys. A: Math. Theor.*]{} [**42**]{} (2009) 472005. Z. Ahmed, [*Phys. Lett. A*]{} [**377**]{} (2013) 957. Z. Ahmed, D. Ghosh, J. A. Nathan and G. Parkar, [*Phys. Lett. A*]{} [**379**]{} (2015) 2424. M. Abramowitz and I. A. Stegun, [*Handbook of Mathematical Functions*]{}, Dover, New York, 1970. D. Baye, G. Lévai and J.-M. Sparenberg, [*Nucl. Phys. A*]{} [**599**]{} (1996) 435. G. Lévai and E. Magyari, [*J. Phys. A: Math. Theor.*]{} [**42**]{} (2009) 195302. Gy. Bencze, [*Commentationes Physico-Mathematicae*]{} [**31**]{} (1966) 1. P. Salamon, Á. Baran and T. Vertse, [*Nucl. Phys. A*]{} [**952**]{} (2016) 1. C. Berkdemir, A. Berkdemir and R. Sever, [*Phys. Rev. C*]{} [**72**]{} (2005) 027001. A. Mutschler, A. Lemasson, O. Sorlin, D. Bazin, C. Borcea, R. Borcea, Z. Dombrádi, J.-P. Ebran, A. Gade, H. Iwasaki, E. Khan, A. Lepailleur, F. Recchia, T. Roger, F. Rotaru, D. Sohler, M. Stanoiu, S. R. Stroberg, J. A. Tostevin, M. Vandebrouck, D. Weisshaar and K. Wimmer, [*Nature Physics*]{} (2016) doi:10.1038/nphys3916. C. Daskaloyannis, M. Grypeos and H. Nassena, [*Phys. Rev. C*]{} [**26**]{} (1982) 702.
[lcccc]{} & 1D case & $r_0=-0.4965$ & $r_0=-2.017825$ & $r_0=-6.0$\ \ $E_0$ & $-14.44$ & $-5.69802$ & $-14.41825$ & $-14.44000$\ $E_1$ & $-7.84$ & – & $-7.34803$ & $-7.84000$\ $E_2$ & $-3.24$ & – & $-0.64000$ & $-3.23998$\ $E_3$ & $-0.64$ & – & – & $-0.61366$\ \[tab1\]
--- abstract: 'The $q$-gradient is an extension of the classical gradient vector based on the concept of Jackson’s derivative. Here we introduce a preliminary version of the $q$-gradient method for unconstrained global optimization. The main idea behind our approach is the use of the negative of the $q$-gradient of the objective function as the search direction. In this sense, the method here proposed is a generalization of the well-known steepest descent method. The use of Jackson’s derivative has proved to be an effective mechanism for escaping from local minima. The $q$-gradient method is complemented with strategies to generate the parameter $q$ and to compute the step length so that the search process gradually shifts from global search in the beginning to almost purely local search in the end. For testing this new approach, we considered six commonly used test functions and compared our results with three Genetic Algorithms (GAs) considered effective in optimizing multidimensional unimodal and multimodal functions. For the multimodal test functions, the $q$-gradient method outperformed the GAs, reaching the minimum with a better accuracy and with fewer function evaluations.' author: - 'Aline C. Soterroni' - 'Roberto L. Galski' - 'Fernando M. Ramos' title: 'The $q$-gradient method for global optimization' --- Introduction {#sec:intro} ============ Over the last decades the $q$-calculus has been connecting mathematics and physics in applications that span from quantum theory and statistical mechanics, to number theory and combinatorics (see [@ernst1] and references therein). Its history dates back to the beginnings of the last century when, based on pioneering works of Euler and Heine, the English reverend Frank Hilton Jackson developed the $q$-calculus in a systematic way [@ernst2]. His work gave rise to generalizations of series, functions and special numbers within the context of the $q$-calculus [@chaundy].
More importantly, he reintroduced the concept of the $q$-derivative [@jackson1] (also known as Jackson’s derivative) and introduced the $q$-integral [@jackson2]. The $q$-derivative of a function $f(x)$ of one variable is defined as $$D_q f(x) = \frac{f(qx)-f(x)}{qx-x},$$ where $q$ is a real number different from $1$ and $x$ is different from $0$. In the limit of $q \rightarrow 1$ (or $x \rightarrow 0$), the $q$-derivative reduces to the classical derivative. Let $f(x)=x^n$, for example. In this case, the classical derivative of $f$ is $nx^{n-1}$ and the $q$-derivative is $[n]x^{n-1}$, where $[n]$ is the $q$-analogue of $n$ given by $$[n]= \frac{q^n -1}{q-1}.$$ As $q \rightarrow 1$, $[n]$ tends to $n$. This definition is used to calculate the $q$-binomial and establish a $q$-analogue of Taylor’s formula that encompasses many results such as Euler’s identities for $q$-exponential functions, Gauss’s $q$-binomial formula, and Heine’s formula for a $q$-hypergeometric function, among other mathematical results [@kac]. Considering arbitrary functions $f(x)$ and $g(x)$, the $q$-derivative operator satisfies the following properties [@kac]: 1. The $q$-derivative is a linear operator for any constants $a$ and $b$ $$D_q(af(x) + bg(x)) = aD_qf(x) + bD_qg(x).$$ 2. The $q$-derivative of the product of $f(x)$ and $g(x)$ is given by $$D_q( f(x)g(x) ) = f(qx) D_qg(x) + g(x)D_qf(x)$$ that, by symmetry, is equivalent to $$D_q( f(x)g(x) ) = f(x) D_qg(x) + g(qx)D_qf(x).$$ 3. The $q$-derivative of the quotient of $f(x)$ and $g(x)$ is calculated as $$D_q \left( \frac{f(x)}{g(x)} \right)= \frac{g(x) D_qf(x) -f(x)D_qg(x)}{g(x)g(qx)}$$ or equivalently $$D_q \left( \frac{f(x)}{g(x)} \right) = \frac{g(qx) D_qf(x) -f(qx)D_qg(x)}{g(x)g(qx)}.$$ The chain rule for $q$-derivatives does not exist, except for a function of the form $f(u(x))$, where $u(x) = \alpha x^{\beta}$, with $\alpha,\beta$ being constants. More details on the properties of $q$-derivatives can be found in [@kac].
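The definitions above translate directly into code; a short sketch that checks the textbook identity $D_q x^n = [n]\,x^{n-1}$:

```python
def q_derivative(f, x, q):
    # Jackson's q-derivative: D_q f(x) = (f(qx) - f(x)) / (qx - x),
    # defined for q != 1 and x != 0.
    return (f(q * x) - f(x)) / (q * x - x)

def q_number(n, q):
    # q-analogue of n: [n] = (q^n - 1)/(q - 1); tends to n as q -> 1.
    return (q**n - 1) / (q - 1)

# Check D_q x^n = [n] x^{n-1} for f(x) = x^3 at x = 2, q = 0.5:
lhs = q_derivative(lambda t: t**3, 2.0, 0.5)
rhs = q_number(3, 0.5) * 2.0**2
print(lhs, rhs)  # 7.0 7.0
```

Taking $q$ close to $1$ recovers the classical derivative, e.g. `q_derivative(lambda t: t**3, 2.0, 1.0 + 1e-9)` is close to $3x^2 = 12$.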
Let a general nonlinear unconstrained optimization problem be defined as $$\label{eq:pmin} \min F(\mathbf{x}), \quad \mathbf{x} = (x_1,\ldots,x_i,\ldots,x_n)$$ where $\mathbf{x} \in \Re^n$ is the vector of the independent variables and $F:\Re^n \rightarrow \Re$ is the objective function. The steepest descent method (also known as the gradient descent method) uses information on the gradient of the objective function in seeking the optimum. The search direction is given by the negative of the gradient of $F$. This search strategy is an obvious choice, since along this direction the objective function decreases most rapidly. Requiring only information about first derivatives, the steepest descent method is attractive because of its limited computational cost and storage requirements [@gianni]. However, for multimodal functions, unless one knows in advance where to start from, the search procedure frequently gets stuck in one of the local minima. Consequently, the steepest descent method is not recommended for real-world optimization problems, which are usually multimodal. Nevertheless, because of its inherent simplicity, it represents a good starting point for the development of more advanced optimization methods. Here we propose a generalization of the steepest descent method in which the gradient of the objective function is replaced by its $q$-analogue. Accordingly, the search direction is taken as the negative of the $q$-gradient of $F$. For $q = 1$, the $q$-gradient method, as we call it here, reduces to the classical steepest descent method. In order to evaluate the performance of the $q$-gradient method we consider three unimodal and three multimodal test functions commonly used as benchmarks. We compare our results with those obtained with the Genetic Algorithms (GAs) G3-PCX developed by Deb et al.
[@deb], and the SPC-vSBX and SPC-PNX developed by Ballester and Carter [@ballester], which previous studies have shown to be effective in minimizing multidimensional unimodal and multimodal functions. The rest of the paper is organized as follows. In Section \[sec:qgrad\] the $q$-gradient vector is defined. In Section \[sec:qgradmethod\] the strategies to obtain the parameter $q$ and the step length are described. Section \[sec:description\] shows the computational experiments and Section \[sec:results\] discusses the results. Finally, in Section \[sec:conclusions\] some conclusions and future work are presented. The $q$-Gradient {#sec:qgrad} ================ Given a differentiable function of $n$ variables $F(\mathbf{x})$, the gradient of $F$ is the vector of the $n$ first-order partial derivatives of $F$. Similarly, the $q$-gradient is the vector of the $n$ first-order partial $q$-derivatives of $F$. Thus, letting the parameter $q$ be a vector $\mathbf{q} = (q_1,\ldots,q_i,\ldots,q_n)$ with $q_i \neq 1 \ \forall i$, the first-order partial $q$-derivative with respect to the variable $x_i$ is given by $$\small D_{q_i,x_i} F(\mathbf{x}) = \displaystyle \frac{F(x_1,\ldots,q_ix_i,\ldots,x_n)- F(x_1,\ldots, x_i,\ldots,x_n)} {q_ix_i - x_i}$$ with $$\left. D_{q_i,x_i} F(\mathbf{x}) \right|_{x_i=0} = \frac{\partial F(\mathbf{x})}{\partial x_i}$$ and $$\left. D_{q_i,x_i} F(\mathbf{x}) \right|_{q_i=1} = \frac{\partial F(\mathbf{x})}{\partial x_i}.$$ This framework can be extended to define the $q$-gradient of a function of $n$ variables as $$\label{eq:qgrad} \nabla_{\mathbf{q}} F(\mathbf{x}) = \left[ D_{q_1,x_1} F(\mathbf{x}) \ \ldots \ D_{q_i,x_i} F(\mathbf{x}) \ \ldots \ D_{q_n,x_n} F(\mathbf{x})\right]$$ with the classical gradient being recovered in the limit of $q_i \rightarrow 1$, for all $i=1,\ldots,n$. Fig.
\[fig:geometrico\] illustrates the geometric interpretation of the classical gradient and the $q$-gradient of a function of one variable $f(x) = 2 -( e^{-x^2} + 2 e^{-(x-3)^2})$. In this case, the gradient is simply the slope of the tangent line at $x$. Similarly, the $q$-gradient is the slope of the secant line passing through the points $(x,f(x))$ and $(qx,f(qx))$. If the slope of the secant line is positive (negative), the $q$-gradient points to the right (left). ![Geometric interpretation of the classical derivative (dotted line) and the $q$-derivative of $f(x)$ at $x = 1.0$ and different values of the parameter $q$.[]{data-label="fig:geometrico"}](Fig1){width="3.5in"} Since the slope of the tangent line (dotted line) at $x=1$ is positive, the steepest descent method at this point will necessarily move to the left and, thus, will be trapped by the local minimum at $x=0$. The slope of the secant line passing through the points $(x,f(x))$ and $(qx,f(qx))$ can be positive or negative depending on the value of the parameter $q$. For instance, if $q=2$ the $q$-derivative is negative at $x=1$ (see the secant line passing through $(x,f(x))$ and $(q_4x,f(q_4x))$ in Fig. \[fig:geometrico\]), which potentially allows a minimization strategy based on the value of the $q$-gradient to take a leap to the right, towards the global minimum of $f$. Note that there is a value $1.5 < q_3 < 2$ for which $x=1$ is a stationary point of the $q$-gradient ($\nabla_qF(x)=0$) that does not correspond to any minimum or maximum of $f$. The stationary points of the $q$-gradient are avoided in the method by generating a new parameter $q$. Finally, for $0<q_1<0.5$ or $1<q_2<1.5$ the slope of the secant line is positive and the $q$-gradient method will move to the left, as the steepest descent method does. This simple example shows that the use of the $q$-gradient offers a new mechanism to escape from local minima.
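The geometric example above can be reproduced numerically: at $x=1$ the classical derivative of $f$ is positive (descent would move left, towards the local minimum), while the $q$-derivative with $q=2$ is negative (descent would leap right, towards the global minimum). A minimal check using the function and points quoted in the text:

```python
import math

def f(x):
    # Example function from the text: local minimum near x = 0,
    # global minimum near x = 3.
    return 2.0 - (math.exp(-x**2) + 2.0 * math.exp(-(x - 3.0)**2))

def q_derivative(g, x, q):
    # Slope of the secant line through (x, g(x)) and (qx, g(qx)).
    return (g(q * x) - g(x)) / (q * x - x)

x = 1.0
classical = q_derivative(f, x, 1.0 + 1e-9)  # ~ f'(1): positive
leaping = q_derivative(f, x, 2.0)           # secant through x = 1 and qx = 2: negative
print(classical, leaping)
```

The opposite signs of the two slopes are exactly the escape mechanism described above.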
Moreover, the transition from global to local search might be controlled by the parameter $q$, provided a suitable strategy for generating $q$-values is incorporated into the minimization algorithm. $q$-Gradient Method Description {#sec:qgradmethod} =============================== A general optimization strategy is to consider an iterative procedure that, starting from $\mathbf{x}^0$, generates a sequence $\{\mathbf{x}^k\}$ given by [@vanderplaats] $$\label{eq:iterprocess} \mathbf{x}^{k+1} = \mathbf{x}^{k} + \alpha^{k} \mathbf{d}^{k}$$ where $\mathbf{d}^{k}$ is the search direction and $\alpha^{k}$ is the step length or the distance moved along $\mathbf{d}^{k}$ in the iteration $k$. Optimization methods can be characterized according to the direction and step length used in Eq. (\[eq:iterprocess\]). The steepest descent method sets $\mathbf{d}^{k} = -\nabla F(\mathbf{x}^{k})$ and the step length $\alpha^{k}$ is usually determined by a line-search technique that minimizes the objective function along the direction $\mathbf{d}^{k}$. In the $q$-gradient method, as proposed here, the search direction is the negative of the $q$-gradient of the objective function $- \nabla_{q} F(\mathbf{x})$. Thus the optimization procedure defined by Eq. (\[eq:iterprocess\]) becomes $$\label{eq:search} \mathbf{x}^{k+1} = \mathbf{x}^{k} - \alpha^{k} \nabla_{q} F(\mathbf{x}^{k}).$$ Key to the performance of the $q$-gradient method, the strategies used to specify the parameter $q$ and the step length $\alpha$ are described below. Parameter $\mathbf{q}$ {#sub:parameter} ---------------------- Considering a function of $n$ variables $F(\mathbf{x})$, a set of $n$ different parameters $q_i \in \Re-\{1\}$ ($i=1,\ldots,n$) is needed to compute the $q$-gradient vector of $F$.
The overall strategy adopted here is to draw the values of $q_i$ (or some variable related to them) from some suitable probability density function (pdf), and with a standard deviation that decreases as the iterative search proceeds. In this sense, the role of the standard deviation here is reminiscent of the one played by the temperature in a simulated annealing (SA) algorithm, that is, to make the algorithm go from a very random (at the beginning) to a very deterministic search (at the end). In the current implementation we opted to first draw the values of $q_i^kx_i^k$ from a Gaussian pdf given by $$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ - \frac{(x-\mu)^2}{2\sigma^2} \right],$$ with $\mu = x_i^k$ and $\sigma = \sigma^k$; then, we computed the values of $q_i^k$. Starting from $\sigma^0$, the standard deviation of the pdf is decreased by the following “cooling” schedule, $\sigma^{k+1} = \beta \cdot \sigma^{k}$, where $0<\beta <1$ is the reduction factor. As $\sigma^{k}$ approaches zero, the values of $q_i^k$ tend to unity, the algorithm reduces to the steepest descent method, and the search process becomes essentially local. As in a SA algorithm, the performance of the minimization algorithm depends crucially on the choice of parameters $\sigma^0$ and $\beta$. A too rapid decrease of $\sigma^{k}$, for example, may cause the algorithm to be trapped in a local minimum. Step Length {#suc:step} ----------- The calculation of the step length $\alpha$ is a tradeoff. On the one hand, $\alpha$ should give a considerable reduction of the objective function. On the other hand, its calculation should not take too many evaluations of $F$ [@nocedal]. Steepest descent algorithms generally use line-search techniques to determine the step length $\alpha^k$ along the steepest descent direction $\mathbf{d}^k=-\nabla F(\mathbf{x}^k)$ at the iteration $k$. A first version of our algorithm [@soterroni] applied the golden section for step length determination.
However, traditional line-search algorithms, like the golden section, ensure that the condition $F(\mathbf{x}^{k+1}) < F(\mathbf{x}^k)$ is always satisfied, which is obviously a poor strategy when dealing with multimodal minimization problems. In addition, depending on the value of $q$, the negative of the $q$-gradient may not point in a local descent direction. One way to circumvent these problems is to use a diminishing step length $\alpha^k$, i.e., the initial step length $\alpha^0$ is reduced by $\alpha^{k+1} = \beta \cdot \alpha^{k}$, where, for the sake of simplicity, $\beta$ is the same reduction factor used to compute $\sigma^{k}$. As the step length decreases (and the values of $q_i^k$, in parallel, tend to unity), a smooth transition to an increasingly local search process occurs.\ The main idea behind the $q$-gradient method is to use the negative of the $q$-gradient of $F$, instead of the negative of the classical gradient of $F$, as the search direction. Strategies for generating the parameter $\mathbf{q}^k$ and the step length $\alpha^k$, at each iteration, complement this very simple algorithm. Note that there are three free parameters to be specified, namely, $\sigma^0$, $\alpha^0$ and $\beta$. The initial standard deviation $\sigma^0$ determines how stochastic the search is. For multimodal functions, it must be high enough to allow the method to properly sample the search space. The reduction factor $\beta$ controls the speed of the transition from stochastic to deterministic search. A $\beta$ close to $1$ reduces the risk of being trapped in a local minimum. The last free parameter, the initial step length $\alpha^0$, depends heavily on the topology of the search space and, thus, requires some empirical exploration. In the end, as with the choice of the cooling schedule in a SA algorithm [@locatelli], an appropriate specification of the three free parameters is strictly dependent on the objective function.
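The full procedure can be sketched compactly. Since $q_i^k x_i^k \sim N(x_i^k,\sigma^k)$ is equivalent to drawing a displacement $h = q_i^k x_i^k - x_i^k \sim N(0,\sigma^k)$, the sketch below works with $h$ directly (which also sidesteps the degenerate case $x_i=0$); tracking the best-so-far point and the specific parameter values in the usage note are illustrative choices, not prescriptions from the paper:

```python
import random

def q_gradient_method(F, x0, sigma0, alpha0, beta, iterations=2000):
    """Sketch of the q-gradient method: random secant-based gradient with
    a Gaussian spread sigma^k and a step length alpha^k, both reduced
    geometrically by the same factor beta at every iteration."""
    x, sigma, alpha = list(x0), sigma0, alpha0
    best, fbest = list(x0), F(x0)
    for _ in range(iterations):
        Fx = F(x)
        g = []
        for i in range(len(x)):
            # q_i x_i ~ N(x_i, sigma)  <=>  h = q_i x_i - x_i ~ N(0, sigma)
            h = random.gauss(0.0, sigma)
            while h == 0.0:          # avoid q_i = 1 (zero displacement)
                h = random.gauss(0.0, sigma)
            y = list(x)
            y[i] += h
            g.append((F(y) - Fx) / h)          # partial q-derivative
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
        sigma *= beta                          # "cooling" of the spread
        alpha *= beta                          # diminishing step length
        f = F(x)
        if f < fbest:                          # keep the best point visited
            best, fbest = list(x), f
    return best, fbest
```

For example, `q_gradient_method(lambda v: sum(t*t for t in v), [2.0, 2.0], 0.5, 0.1, 0.995)` drives a simple sphere function toward its minimum; as `sigma` shrinks the random secants approach true partial derivatives and the iteration degenerates into steepest descent.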
Although a bad choice may lead to some deterioration in its performance, the $q$-gradient method has been shown to be sufficiently robust to still be capable of reaching the global minimum. Computational Experiments {#sec:description} ========================= The performance of the $q$-gradient method was evaluated over six $20$-variable test functions ($n=20$) commonly employed in the literature. We use the same experimental setup and stopping criteria as described in [@deb] and [@ballester] in order to allow a direct comparison with their results. The stopping criteria are: maximum of $10^6$ function evaluations or $F(\mathbf{x})< 10^{-20}$. As in [@deb] and [@ballester], we set the three free parameters for the $q$-gradient method after preliminary exploratory runs. The results presented here are for those which yielded the best performance. The benchmark consists of the following analytical functions: 1. Ellipsoidal function ($F_{elp}$) $$\label{eq:ellipsoidal} F_{elp}= \sum_{i=1}^{n} i \mathbf{x}_i^2.$$ Although the Ellipsoidal function is convex and unimodal, it is an example of a poorly scaled function. 2. Schwefel’s function ($F_{sch}$) $$\label{eq:schwefel} F_{sch}= \sum_{i=1}^{n} \left( \sum_{j=1}^{i} \mathbf{x}_j \right)^2.$$ The Schwefel’s function is an extension of the Ellipsoidal function and it is also a unimodal and poorly scaled function. 3. Generalized Rosenbrock’s function ($F_{ros}$) $$\label{eq:rosenbrock} F_{ros}= \sum_{i=1}^{n-1} [ 100 \cdot (\mathbf{x}_i^2- \mathbf{x}_{i+1})^2 +(1-\mathbf{x}_i)^2].$$ Although the Rosenbrock’s function is a well-known unimodal function for $n=2$, numerical experiments have shown that for $ 4 \leq n \leq 30$ the function has two minima, the global one at $\mathbf{x}=\mathbf{1}$ and a local minimum that changes with the dimensionality $n$ [@shang]. The Rosenbrock’s function is considered a test case for premature convergence, since the global minimum lies inside a long, narrow, parabola-shaped flat valley. 4. Ackley’s function ($F_{ackl}$) $$F_{ackl}=20 + e -20 \exp{\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n}\mathbf{x}_i^2}\right)} - \exp{\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi \mathbf{x}_i )\right)}.$$ The Ackley’s function is highly multimodal and the basins of the local minima increase in size as one moves away from the global minimum [@ballester]. 5. Rastrigin’s function ($F_{rtg}$) $$\label{eq:rastrigin} F_{rtg}= 10 n + \sum_{i=1}^{n} ( \mathbf{x}_i^2-10\cos(2\pi\mathbf{x}_i) ).$$ The Rastrigin’s function has a parabolic landscape away from the global minimum, but as we move towards the global minimum, the size of the basins increases. The function is highly multimodal and its characteristics make it difficult for many optimization algorithms to reach the global minimum [@ballester]. 6. Rotated Rastrigin’s function ($F_{rrtg}$) $$\begin{aligned} F_{rrtg}&{}={}& 10 n + \sum_{i=1}^{n} ( \mathbf{y}_i^2 -10\cos(2\pi\mathbf{y}_i) ), \quad \mathbf{y} = A\cdot \mathbf{x} \nonumber \\ &&{}\: A_{i,i} = 4/5 \nonumber \\ &&{}\: A_{i,i+1} = 3/5 \ (i \ \mbox{odd}) \nonumber \\ &&{}\: A_{i,i-1} = -3/5 \ (i \ \mbox{even}) \nonumber \\ &&{}\: A_{i,j} = 0 \ (\mbox{otherwise}) \nonumber\end{aligned}$$ The rotated Rastrigin’s is a highly multimodal function whose local minima are not arranged along the coordinate axes [@ballester]. For all these functions the global minimum is $F(\mathbf{x}^{*}) = 0$ at $\mathbf{x}^{*} = \mathbf{0}$, except for the Generalized Rosenbrock’s function, where $\mathbf{x}^{*} = \mathbf{1}$. The initial point $\mathbf{x}^0$ for each function is generated by a uniform distribution within $[-10,-5]$, as used in [@deb] and [@ballester].
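Two of the benchmark functions translate directly from the definitions above; a sketch in which the sparse matrix $A$ of the rotated variant is applied pairwise, as its definition implies (for an odd trailing index there is simply no coupling term):

```python
import math

def rastrigin(x):
    # F_rtg(x) = 10 n + sum_i (x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at x = 0.
    return 10 * len(x) + sum(t * t - 10 * math.cos(2 * math.pi * t) for t in x)

def rotated_rastrigin(x):
    # F_rrtg(x) = F_rtg(A x) with A_{i,i} = 4/5, A_{i,i+1} = 3/5 (i odd),
    # A_{i,i-1} = -3/5 (i even), using 1-based indices as in the definition.
    # Each pair of coordinates is rotated by the block [[4/5, 3/5], [-3/5, 4/5]].
    n = len(x)
    y = [0.0] * n
    for i in range(1, n + 1):
        yi = 0.8 * x[i - 1]
        if i % 2 == 1 and i < n:
            yi += 0.6 * x[i]        # A_{i,i+1} = 3/5, i odd
        if i % 2 == 0:
            yi -= 0.6 * x[i - 2]    # A_{i,i-1} = -3/5, i even
        y[i - 1] = yi
    return rastrigin(y)
```

For instance, `rastrigin([0.0] * 20)` is the global minimum value $0$, while `rotated_rastrigin([1.0, 0.0])` evaluates the rotated landscape at a point off the rotated axes.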
Results {#sec:results} ======= Extensive comparisons between the GAs G3-PCX (results obtained from [@deb] for Ellipsoidal, Schwefel, Rosenbrock and Rastrigin functions; and from [@ballester] for Ackley and Rotated Rastrigin), SPC-vSBX and SPC-PNX (results obtained from [@ballester]) and the $q$-gradient method are presented in Tables \[tab:unimodal\] and \[tab:multimodal\]. As in [@deb] and [@ballester], the “Best”, “Median” and “Worst” columns refer to the number of function evaluations required to reach the accuracy $10^{-20}$. When this condition is not achieved, the best value found so far for the test function after $10^6$ evaluations is given in column “$F(\mathbf{x}_{best})$”. The column “Success” refers to how many runs reached the target accuracy, for unimodal functions, or ended up within the global minimum basin, for multimodal ones. The best performances are highlighted in bold in each table. The corresponding values of the best parameters $\sigma^0$, $\alpha^0$ and $\beta$ used in each test function are given in Table \[tab:parameters\]. 
[llll]{} Functions & $\mathbf{\sigma^0} \quad$ & $\mathbf{\alpha^0} \quad$ & $\mathbf{\beta} \quad$\ Ellipsoidal & $0.4$ & $38$ & $0.86$\ Schwefel & $0.1$ & $1$ & $0.997$\ Rosenbrock & $0.1$ & $0.1$ & $0.9995$\ Ackley & $20$ & $12$ & $0.90$\ Rastrigin & $21$ & $0.3$ & $0.9995$\ Rotated Rastrigin & $30$ & $0.5$ & $0.999$\ [lllllll]{} Function & Method & Best & Median & Worst & $F(\mathbf{x}_{best})$ & Success\ & **G3-PCX** & $\mathbf{5,826}$ & $\mathbf{6,800}$ & $\mathbf{7,728}$ & $\mathbf{10^{-20}}$ & $\mathbf{10/10}$\ & SPC-vSBX & $49,084$ & $50,952$ & $57,479$ & $10^{-20}$ & $10/10$\ & SPC-PNX & $36,360$ & $39,360$ & $40,905$ & $10^{-20}$ & $10/10$\ & **$q$-Gradient** & $\mathbf{5,905}$ & $\mathbf{7,053}$ & $\mathbf{7,381}$ & $\mathbf{10^{-20}}$ & $\mathbf{50/50}$\ & **G3-PCX** & $\mathbf{13,988}$ & $\mathbf{15,602}$ & $\mathbf{17,188}$ & $\mathbf{10^{-20}}$ & $\mathbf{10/10}$\ & SPC-vSBX & $260,442$ & $294,231$ & $334,743$ & $10^{-20}$ & $10/10$\ & SPC-PNX & $236,342$ & $283,321$ & $299,301$ & $10^{-20}$ & $10/10$\ & $q$-Gradient & $289,174$ & $296,103$ & $299,178$ & $10^{-20}$ & $50/50$\ & **G3-PCX** & $\mathbf{16,508}$ & $\mathbf{21,452}$ & $\mathbf{25,520}$ & $\mathbf{10^{-20}}$ & $\mathbf{36/50}$\ & SPC-vSBX & $10^6$ & - & - & $10^{-4}$ & $48/50$\ & SPC-PNX & $10^6$ & - & - & $10^{-10}$ & $38/50$\ & $q$-Gradient & $10^6$ & - & - & $10^{-10}$ & $50/50$\ [lllllll]{} Function & Method & Best & Median & Worst & $F(\mathbf{x}_{best})$ & Success\ & G3-PCX & $10^6$ & - & - & $3,959$ & $0$\ & SPC-vSBX & $57,463$ & $63,899$ & $65,902$ & $10^{-10}$ & 10/10\ & SPC-PNX & $45,736$ & $48,095$ & $49,392$ & $10^{-10}$ & 10/10\ & **$q$-Gradient** & $\mathbf{11,850}$ & $\mathbf{12,465}$ & $\mathbf{13,039}$ & $\mathbf{10^{-15}}$ & $\mathbf{50/50}$\ & G3-PCX & $10^6$ & - & - & $15,936$ & $0$\ & SPC-vSBX & $260,685$ & $306,819$ & $418,482$ & $10^{-20}$ & 6/10\ & SPC-PNX & $10^6$ & - & - & $4.975$ & 0\ & **$q$-Gradient** & $\mathbf{676,050}$ & $\mathbf{692,450}$ & 
$\mathbf{705,037}$ & $\mathbf{10^{-20}}$ & $\mathbf{48/50}$\ & G3-PCX & $10^6$ & - & - & $309.429$ & $0$\ Rotated & SPC-vSBX & $10^6$ & - & - & $8.955$ & $0$\ Rastrigin & SPC-PNX & $10^6$ & - & - & $3.980$ & $0$\ & **$q$-Gradient** & $\mathbf{541,857}$ & $\mathbf{545,957}$ & $\mathbf{549,114}$ & $\mathbf{10^{-20}}$ & $\mathbf{20/50}$\ In Table \[tab:unimodal\], for the Ellipsoidal function, the $q$-gradient method achieved the required accuracy $10^{-20}$ for all 50 runs, with an overall performance similar to the one displayed by the G3-PCX, the best algorithm among the GAs. As for the Schwefel’s function, the $q$-gradient method again attained the required accuracy for all runs but was outperformed by the G3-PCX in terms of the number of function evaluations. Finally, for the Rosenbrock’s function, the $q$-gradient was beaten by the G3-PCX (the only one to achieve the required accuracy) but performed better than the two other GAs. The overall evaluation of the $q$-gradient method performance in these numerical experiments with unimodal (or quasi-unimodal) test functions indicates that it reaches the required accuracy (or the global minimum basin) in $100\%$ of the runs, but it is not faster than the G3-PCX. This picture improves considerably when it comes to tackling the multimodal Ackley’s and Rastrigin’s functions. In Table \[tab:multimodal\], due to limited computing precision the required accuracy for the Ackley’s function was set to $10^{-10}$ for the GAs and $10^{-15}$ in our simulations[^1]. The $q$-gradient method was here clearly better than the GAs, reaching the required accuracy in more runs or with fewer function evaluations. For the Rastrigin’s function, the G3-PCX and the SPC-PNX were unable to attain the global minimum basin. The other two algorithms reached the required accuracy $10^{-20}$, but the $q$-gradient method was the only one to do so in $96\%$ of the runs (48 out of 50).
Finally, in the case of the rotated Rastrigin’s function, the $q$-gradient was the only algorithm to reach the minimum, attaining the required accuracy in $20$ out of $50$ independent runs. Summarizing the results with multimodal functions, we may say that the $q$-gradient method outperformed the GAs in all three test cases considered, reaching the minimum with fewer function evaluations or in more independent runs. Conclusions and Future Work {#sec:conclusions} =========================== The main idea behind the $q$-gradient method is the use of the negative of the $q$-gradient of the objective function — a generalization of the classical gradient based on Jackson’s derivative — as the search direction. The use of Jackson’s derivative provides an effective mechanism for escaping from local minima. The method has strategies for generating the parameter $q$ and the step length that make the search process shift gradually from global exploration in the beginning to almost purely local search in the end. For testing this new approach, we considered six commonly used 20-variable test functions. These functions display features of real-world optimization problems (multimodality, for example) and are notoriously difficult for optimization algorithms to handle. We compared the $q$-gradient method with the GAs developed by Deb et al. [@deb] and by Ballester and Carter [@ballester], with promising results. Overall, the $q$-gradient method clearly beat the competition in the hardest test cases, those dealing with the multimodal functions. The (relatively) poor results of the $q$-gradient method on the Rosenbrock’s function come as no surprise, since it is a unimodal test function especially difficult for the steepest descent method.
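To make the search direction concrete, the following Python fragment sketches Jackson's $q$-derivative and one $q$-gradient step. This is an illustration only, not the authors' implementation: the strategies for generating $q$ and the step length described above are replaced by fixed values, and the components of $\mathbf{x}$ are assumed nonzero.

```python
import numpy as np

def q_derivative(f, x, q):
    """Jackson's q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
    As q -> 1 it reduces to the classical derivative (x != 0 assumed)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_gradient(f, x, q):
    """q-analogue of the gradient: component i is the q-derivative of f
    with respect to x_i, the other coordinates held fixed."""
    g = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        def f_i(t, i=i):
            y = np.array(x, dtype=float)
            y[i] = t
            return f(y)
        g[i] = q_derivative(f_i, x[i], q)
    return g

def q_gradient_step(f, x, q, alpha):
    """One iteration: move along the negative, normalized q-gradient.
    For q far from 1 this need not be a descent direction, which is
    what lets the method escape from local minima."""
    g = q_gradient(f, x, q)
    return x - alpha * g / np.linalg.norm(g)

# Sphere function: one step from (3, 4) moves toward the minimum at 0.
sphere = lambda x: float(np.sum(x ** 2))
x_new = q_gradient_step(sphere, np.array([3.0, 4.0]), q=1.5, alpha=0.5)
```

In the full method, $q$ and the step length are drawn afresh at every iteration and their dispersions are shrunk over the run (the parameters $\sigma^0$, $\alpha^0$ and $\beta$ of the first table plausibly play this role), so that $q \to 1$ and the search approaches classical steepest descent.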
This result highlights the need for the development of a $q$-generalization of the well-known conjugate-gradient method, a research line currently being explored.\ The authors gratefully acknowledge the support provided by the National Council for Scientific and Technological Development (CNPq), Brazil. [1]{} Ernst, T.: The history of $q$-calculus and a new method. U.U.D.M. Report 2000:16, Department of Mathematics, Uppsala University, Sweden (2000) Ernst, T.: A method for $q$-calculus. J. Nonlinear Math. Phys. 10, 487–525 (2003) Chaundy, T.W.: Frank Hilton Jackson (obituary). J. London Math. Soc. 37, 126–128 (1962) Jackson, F.H.: On $q$-functions and a certain difference operator. Trans. Roy. Soc. Edinburgh 46, 253–281 (1908) Jackson, F.H.: On $q$-definite integrals. Quart. J. Pure Appl. Math. 41, 193–203 (1910) Kac, V., Cheung, P.: Quantum Calculus. Universitext, Springer New York, New York, NY (2002) Di Pillo, G., Palagi, L.: Unconstrained Nonlinear Programming. In: Pardalos, J.M., Resende, M.G.C. (eds.) Handbook of Applied Optimization, pp. 268–284. Oxford University Press, New York (2002) Deb, K., Anand, A., Joshi, D.: A computationally efficient evolutionary algorithm for real-parameter optimization. Evolutionary Computation 10(4), 371–395 (2002) Ballester, P.J., Carter, J.N.: An effective real-parameter genetic algorithm with parent centric normal crossover for multimodal optimisation. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 901–913. Seattle, WA, USA (2004) Vanderplaats, G.N.: Multidiscipline design optimization. Vanderplaats Research and Development Inc., Monterey, CA (2007) Nocedal, J., Wright, S.J.: Numerical Optimization (2nd ed.). Springer Science+Business Media, New York, NY (2006) Soterroni, A. C., Galski, R. L., Ramos, F. M.: The $q$-gradient vector for unconstrained continuous optimization problems. In: Operations Research Proceedings 2010, pp. 365–370.
Springer-Verlag Berlin Heidelberg (2011) Locatelli, M.: Simulated annealing algorithms for continuous global optimization. In: Pardalos, J.M. and Romeijn, H. E. (eds.) Handbook of Global Optimization II, pp. 204–207. Kluwer Academic Publishers, Dordrecht (2002) Shang, Y., Qiu, Y.: A note on the extended Rosenbrock function. Evolutionary Computation 14, 119–126 (2006) [^1]: Numerical experiments have shown that Ackley’s function evaluated at $\mathbf{x}=0$ with double precision is equal to $-0.4440892098500626$E$-015$ and not zero.
--- address: | HEFIN, University of Nijmegen/NIKHEF,\ Toernooiveld 1, 6525 ED Nijmegen, The Netherlands author: - 'W. KITTEL' title: | INTERCONNECTION EFFECTS AND W$^+$W$^-$ DECAYS\ (a critical (p)(re)view) [^1] --- HEN-420 Introduction ============ When Bo Andersson reported on the incorporation of Bose-Einstein correlations into the Lund string during an earlier workshop,[@1] he started: “this is the most difficult work I have ever participated in” and he did not even refer to the W$^+$W$^-$ overlap! The statement sets the scale, but should be squared when applied to the latter. That’s why it is easier to be [*critical*]{} on this topic than to [*review*]{} it and why I shall reduce my task at this Rencontre to giving a personal (though still critical) [*view*]{}, instead. Interconnection effects, at first sight a nuisance when trying to measure the W mass, may on the other hand open new handles for the study of basic issues such as the structure of the vacuum and the space-time development of a ${\rm q}\bar{\rm q}$ system at high energy. Of course, the phenomenon of color reconnection is by no means restricted to W$^+$W$^-$ decay. Other examples are J/$\psi$ production in B decay, event shapes in Z decay, or rapidity gaps at HERA. Color reconnection ================== The models ---------- If produced at the same space-time point, pairs of quarks and anti-quarks $({\rm q}_1\bar{\rm q}_4)$ and $({\rm q}_3\bar{\rm q}_2)$ originating from the decay of [*different*]{} W’s can form strings, if they happen to be in a color singlet.[@2] From color counting, this is fulfilled in $1/9$ of the cases, but this recoupling probability can be enhanced by gluon exchange.
However, the pairs $({\rm q}_1\bar{\rm q}_2)$ and $({\rm q}_3\bar{\rm q}_4)$ are produced at a distance $\propto 1/\Gamma_{\rm W}\approx 0.1$ fm, small compared to the hadronic scale, but large enough to suppress exchange and/or interference of hard $(E_{\rm g}\gtrsim\Gamma_{\rm W})$ gluons.[@3; @4] Soft-gluon interference is, of course, possible. It depends on the vacuum structure, and a number of models exist.[@3; @4; @5; @6; @7; @8] According to the underlying software package, they can be grouped as follows. 1\. [*PYTHIA based models:*]{}\ a) SKI [@3] uses Lund strings and allows at most one reconnection. The color field is treated as a Gaussian-profile flux tube (as in a type I superconductor) with a radius of $\sim$0.5 fm, and the recoupling probability $\rho$ depends on the overlap of the two flux tubes in space-time. The recoupling probability density is a free parameter quite arbitrarily chosen to be 0.9 fm$^{-4}$ (but easily varied). At 183 GeV, recoupling is predicted to occur in 38% of the events.[@9] b\) SKII [@3] also uses Lund strings with at most one reconnection, but the color field is treated as an exponential-profile vortex line (as in a type II superconductor). When two vortex lines cross, i.e. have a space-time point in common, for the first time, they recouple with unit probability $(\rho=1)$. At 183 GeV, this gives a recoupling in 22% of the events.[@9] c\) SKII’ [@3]: like SKII, but recoupling occurs upon the first crossing only if it reduces the total string length, giving a recoupling in 20% of the 183 GeV/$c$ events.[@9] d\) ŠTN [@5] is an important extension of SKI and SKII to implement the space-time evolution of the shower, as well as multiple reconnection, including self-interaction of strings. This approach is more realistic than the SK versions, but still shares one problem: the color reconnection is performed after the generation of the complete parton shower and, therefore, cannot change its development. 2\.
[*Color-dipole based models*]{} [@4; @6]:\ Here, the Lund string and its gluon kinks are replaced by a chain of dipoles. Within, or between, two dipole chains, reconnection is possible when the color indices (ranging from 1 to 9) of two (non-adjacent) dipoles are the same. Reconnection is indeed performed when the string length measure $\lambda=\sum^{n-1}_{i=1} \ln (m^2_{i,i+1}/m^2_0)$ is reduced ($m_{i,i+1}$ is the invariant mass of the string segment stretched by partners $i$ and $i+1$, and $m_0$ a hadronic mass scale around 1 GeV). These models, too, exist in a number of versions. In [@4], the number of reconnections per event was at most one, and there was no reconnection within a W. In version [@6], two dipole systems ${\rm q}_1\bar{\rm q}_2$ and ${\rm q}_3\bar{\rm q}_4$ first evolve separately, radiating gluons with $E_{\rm g}>\Gamma_{\rm W}$ independently, but with color reconnections within each dipole system. Then, when $E_{\rm g}<\Gamma_{\rm W}$, reconnections between the two systems are switched on. In practice, because of the $k_{\rm T}$-ordering of CDM, the cascade is run twice: first for $E_{\rm g}> \Gamma_{\rm W}$ without reconnections between the two systems, and a second time allowing only $E_{\rm g}<\Gamma_{\rm W}$ with interconnections. As an alternative, the second cascade can be omitted, but interconnections between the systems are allowed before fragmentation. 3\. [*Cluster models*]{}: Quarks and gluons originating from the parton showers combine into clusters. These are less extended and less massive than strings, and decay isotropically into a small number of hadrons. a\) HERWIG based [@7]: After showering, the gluons are split non-perturbatively into quark-antiquark pairs, and each may form a color-singlet cluster with a color-connected partner. At the start of the cluster-formation phase, color connections are established between clusters that reduce the space-time extension of the clusters, and reconnections are allowed in $1/9$ of the cases.
Reconnection among the products of a single shower is natural in this model. b\) VNI based [@8]: Three scenarios are considered for cluster formation, one of which includes non-singlet clustering, where the net color of the cluster is carried off by a secondary parton. [ *Two critical comments on all models: they should contain reconnection within a single W, and if they do, they should be very carefully retuned on the Z.*]{} Interesting in this connection is an OPAL study [@abbi] of gluon production in Z decay, ${\rm e}^+{\rm e}^-\to {\rm q}\bar{\rm q}\,{\rm g}_{\rm incl}$, where reconnection effects are expected to contribute [@4]. Two versions of the dipole model tested predict noticeably fewer particles at small rapidities and energies than are observed in the data (or in the conventional QCD programs), as well as a downward shift of about one unit in the ${\rm g}_{\rm incl}$ charged-particle multiplicity. The data -------- The recent data are beautifully summarized by the previous speaker,[@10] so that I can restrict myself to a few comments. Fig. 1 reproduces a comparison of OPAL data [@9] and model predictions for the charged-particle multiplicity (a) and (b) and thrust distributions (c). The full lines correspond to model versions without reconnection, the other lines to models with reconnection. Two conclusions from this figure are: 1\. The VNI based model [@8] is way off: it does not fit the data at all, but the simulations are also in strong disagreement with the results published in [@8], which, in their turn, were equally far off (at least in thrust $T$), but in the opposite direction. Furthermore, the MC code is reported not to conserve energy.[@9] I leave it to the reader to decide to do something about this or to forget the model. 2\. All the other models (with or without reconnection) look so similar in $n_{\rm ch}$ and $1-T$ that these variables are obviously not discriminative. So, the search for color reconnection boils down to the search for discriminative variables.
As reconnections reduce the string length or the cluster size, and these determine the average multiplicity, $\langle n_{\rm ch}\rangle$ was suspected to be a good candidate. Fig. 1 does not, however, give a lot of hope, but one can look at $\langle n_{\rm ch}\rangle$ in the overlap regions alone. In Fig. 2 we reproduce a study of a recent working group.[@11] The model predictions for the multiplicity shift $$\Delta n_{\rm ch} = n^{\rm WW}_{\rm ch} - 2\, n^{\rm W}_{\rm ch}$$ are given for all momenta (leftmost points), as well as for a number of rapidity $y$ and momentum $P$ cuts reducing the sample to that of the overlap region. The present LEP average for the leftmost point (all $P$, all $y$) is $\Delta n_{\rm ch}=0.18\pm 0.39$ [@10] (or $0.54\pm1.08$% on Fig. 2). That means no effect outside errors, but also agreement with color reconnection as predicted by PYTHIA and HERWIG. As the reduced available string length or cluster size will be felt first by heavy particles, it has been suggested to look at kaons + protons with momenta restricted to $0.2-1.2$ GeV/$c$. DELPHI [@DELPHI] finds a shift of $(+3\pm15)$%, while ($-8$ to $-3$)% is predicted. [*However, we are dealing with a complex overlap of two complex systems, where correlations and not averages are at play! Besides that, where the multiplicity is reduced by reducing the string length, it is quite likely to be increased by Bose-Einstein correlations! The least I recommend, if one wants to restrict oneself to averages, is to study the shift in the integrated [*two-particle*]{} density, i.e., the second-order factorial moment $$\Delta F_2 = F^{\rm WW}_2 - 2\, F^{\rm W}_2 - 2\, \langle n^{\rm W}_{\rm ch}\rangle^2$$ or, better, to look at the shift $\Delta\rho(1,2)$ in the two-particle density itself*]{}.[@12]
Since even that is impossible with present statistics, we have to go back and look at the Z$^0$ in more detail! Experimental results on the $Z^0$ --------------------------------- From BE analysis of the Z$^0$ [@13] we know, first of all, that BE correlations indeed exist in its hadronic decay. So, they can, in principle, give problems in WW overlap. However, more importantly, these very BE correlations can be used as a pion-interferometry laboratory to measure the space-time development of hadronic Z decay, and, ultimately, of the WW overlap. It is here that actually much more is known already than is generally used in WW studies: 1\. [*Elongation of the pion source*]{} [@14; @15]: Applying a two- or three-dimensional (instead of the usual one-dimensional) parametrization of the correlation function [@16] $$R_2(Q_{\rm L},Q_{\rm out},Q_{\rm side})= 1+\lambda\exp(-r^2_{\rm L}Q^2_{\rm L}-r^2_{\rm out} Q^2_{\rm out}-r^2_{\rm side}Q^2_{\rm side})$$ in the longitudinal, out and side components of the squared two-particle four-momentum difference $Q=(-(p_1-p_2)^2)^{1/2}$ and the corresponding size parameters $r_{\rm L}, r_{\rm out}, r_{\rm side}$, DELPHI [@14] and L3 [@15] find a clear elongation along the thrust axis of the pion source in the longitudinal cms.[@17] The ratios of the transverse radii ($r_{\rm out}$, $r_{\rm side}$) and $r_{\rm T}=(r^2_{\rm out} + r^2_{\rm side})^{1/2}$ to the longitudinal radius are given in Table 1.

                               L3 [@15]                         JETSET               DELPHI (2-jet) [@14]
  --------------------------- -------------------------------- -------------------- ----------------------
  $r_{\rm T}/r_{\rm L}$       $0.73\pm 0.02^{+0.03}_{-0.10}$   $0.92\pm0.02$        $0.64\pm0.02\pm0.04$
  CL(%)                       0.8                              $0.4\cdot 10^{-3}$   
  $r_{\rm out}/r_{\rm L}$     $0.71\pm 0.02^{+0.05}_{-0.08}$   $0.82\pm0.02$        $0.73\pm0.01\pm0.05$
  $r_{\rm side}/r_{\rm L}$    $0.80\pm 0.02^{+0.03}_{-0.18}$   $1.06\pm0.02$        $0.58\pm0.01\pm0.02$
  CL(%)                       3.1                              0.016                

  : Transverse over longitudinal size parameters measured and predicted by JETSET

The elongation is of course stronger in the 2-jet sample used by DELPHI, but still clear in the full data used by L3.
This is in contradiction with the assumption of a spherically symmetric correlation function made in most of the models (see below). 2\. [*Position-momentum correlation*]{}: The correlation length of $0.7<r_{\rm L}<0.8$ fm [@14; @15] (called the [*length of homogeneity*]{}) corresponds to the length in space from which pions are emitted that have momenta similar enough to be able to interfere. The spatial extension of the Z emission function $S(t,z)$, on the other hand, is expected to be of the order of 100 times that of $r_{\rm L}$ (see Fig. 3a [@18]). This invokes a strong momentum ordering in space. Experimentally, this emission has been measured only in hadron-hadron [@19] and heavy-ion collisions,[@20] so far (see Fig. 3b,c), but its measurement in Z decay will tell us the actual shape of such a decay in space-time, and therefore how the overlap of WW has to be visualized! 3\. [*Non-Gaussian correlation function*]{}: For simplicity, parametrization Eq. (3), even if not spherically symmetric anymore, is still Gaussian. Strong deviation from such a behavior is known from hadron-hadron collisions,[@21] but deviations also exist at the Z.[@13] Generalizing the Gaussian to a so-called Edgeworth expansion [@22] $$R_2=1+\lambda\prod_i \Bigl[1+\frac{\kappa_3}{3!}\,H_3(\sqrt{2}\,r_i Q_i)\Bigr] \exp(-r^2_i Q^2_i)\ ,$$ where $\kappa_3$ is the third-order cumulant moment and $H_3$ is the third-order Hermite polynomial, shows [@15] that the correlation is indeed stronger than Gaussian at small $Q$ (see Fig. 4). While maintaining the elongation, the CL value of the 3-D fit is now increased from 3.1% (see Table 1) to 30%. 4\. [*The transverse mass dependence*]{}: From heavy-ion collisions,[@23] it is known that the radii in Eq. (3) decrease with increasing average transverse mass $m_{\rm T}$ of the particle pair. Preliminary results [@14; @15] indicate that such a behavior is also present in Z decay. The $m_{\rm T}$ dependence is reproduced by JETSET/LUBOEI for $r_{\rm out}$, but not for the two other components (see Fig. 5). 5\.
[*Genuine higher-order correlations*]{} exist in hadron-hadron collisions [@21; @24] and also in Z decay.[@25; @26] The DELPHI results are given in Fig. 6. For identical particles, they are not reproduced by JETSET/LUBOEI.[@25] 6\. [*Density (or multiplicity) dependence*]{}: A linear increase of the size of the pion-emission region with increasing particle density, combined with a decrease of the correlation-strength parameter $\lambda$, is well known from heavy-ion and higher-energy ISR and collider results (e.g. [@27] and refs. therein). At least the decrease of $\lambda$ can be understood from the overlap of an increasing number of independent mechanisms (e.g. strings or clusters). OPAL [@28] has shown that a similar dependence is also present in Z decay (Fig. 7). It can at least in part be explained by the presence of two- and three-jet events. Figure 7. Dependence of $\lambda$ and $r$ on the charged-particle multiplicity $n$ for e$^+$e$^-$ collisions at the Z mass.[@28] Three types of Monte-Carlo implementation ----------------------------------------- 1\. [*Reshuffling*]{}: The MC code LUBOEI [@29] in JETSET treats BE correlations as a final-state interaction (!) and actually changes particle momenta according to a spherically symmetric Gaussian (or, alternatively, exponential) correlator. The advantages are that it is a fast and unit-weight (i.e. efficient) generator. The bad news is that it is imposed a posteriori (without any physical basis), is even unphysical (since it changes the momenta), is not self-consistent (since it introduces an artificial length scale,[@30; @31] see Fig. 8), is spherically symmetric, does not treat higher-order correlations properly, etc. The worse news is that it is used by everybody to correct for detector effects and that there is no perfectly tested alternative at the moment. 2\.
[*Global reweighting*]{}: Another, theoretically better justified approach is to attach to each pre-generated event a BE weight depending on its momentum configuration, but leaving this momentum configuration untouched. Based on the use of Wigner functions [@32] rather than amplitudes, a weight factor can be derived of the form [@33] $$W(p_1,\dots,p_n)= \sum_{\{P_n\}} \prod^n_{i=1} K_2 (Q_{iP_n(i)})\ ,$$ where $n$ is the number of identical particles, $K_2(=R_2-1)$ is the two-particle correlator and $P_n(i)$ is the particle which occupies position $i$ in the permutation $P_n$ of the $n$ particles. Applications of the global weighting [@34; @35; @36; @37; @38] are essentially all variations on this theme, with varying model assumptions on the exact form of $K_2$. In general, $K_2(Q)$ is still assumed to be spherical in $Q$, even though a generalisation would be simple to implement. Higher-order correlations are, in principle, included, but either assume [@36] a quantum optical model, already shown to be wrong,[@24; @39] or a factorization in terms of Eq. (5) not allowing for phases between the terms. More seriously, as in [@29], the weight is imposed a posteriori on an MC event pre-fabricated according to a given model, not as a part of the model itself. Problems arise from the fact that the number of permutations is $n!$, so that simplifications have to be introduced.[@38] Wild fluctuations of event weights can occur, so that cuts on event weight are necessary. The weight may even change the parton distributions, while BE correlations only work on the pion level. Retuning is of course necessary, but this can, in practice, be achieved by just retuning the multiplicity distribution.[@38] 3\. [*Symmetrizing*]{}: Bose-Einstein correlations have been introduced into string models directly.[@40; @41] In these models, an ordering in space-time exists for the hadron momenta within a string.
Bosons close in phase space are nearby in space-time, and the length scale measured by Bose-Einstein correlations is not the full length of the string, but the distance in boson-production points for which the momentum distributions still overlap. The (non-normalized) probability ${\rm d}\Gamma_n$ to produce an $n$-particle state $\{ p_j \}$, $j=1,\dots n$, of distinguishable particles is $${\rm d}\Gamma_n=\Bigl[\prod^n_{j=1} N\, {\rm d}p_j\, \delta(p^2_j-m^2_j)\Bigr]\, \delta\Bigl(\sum_j p_j-P\Bigr)\, \exp(-bA_n)\ ,$$ where the exponential factor can be interpreted as the square of a matrix element $M_n = \exp (i\xi A_n)$, with ${\rm Re}(\xi)=\kappa$ and ${\rm Im}(\xi)=b/2$, and the remaining terms describe phase space, with $P$ being the total energy-momentum of the state. $N$ is related to the mean multiplicity and $b$ is a decay constant related to the correlation length in rapidity. $A_n$ corresponds to the total space-time area covered by the color field, or to an equivalent area in energy-momentum space divided by the square of the string tension $\kappa=1$ GeV/fm.[@41] Figure 9. Space-time diagram for two ways to produce two identical bosons in the color-string picture [@40]. The production of two identical bosons (1,2) is governed by the symmetric matrix element $$M= \frac{1}{\sqrt{2}}\,(M_{12}+M_{21})= \frac{1}{\sqrt{2}}\,\bigl[\exp (i\xi A_{12})+ \exp (i\xi A_{21})\bigr]\ .$$ There is an area difference and, consequently, a phase difference between $M_{12}$ and $M_{21}$ of $\Delta A=|A_{12}-A_{21}|$, where the indices 1, 2 refer to particles 1, 2, respectively (see Fig. 9). Using this matrix element, one obtains $$R_{\rm BE}\simeq 1+\langle\cos(\kappa\,\Delta A)\rangle/\langle\cosh(b\,\Delta A/2)\rangle\ ,$$ where the average runs over all intermediate systems I. In the limit $Q^2=0$, $\Delta A=0$ and $R_{\rm BE}= 2$ follow, in agreement with the results from the conventional interpretation for completely incoherent sources. However, for $Q^2\not= 0$ there follows an additional dependence on the momentum $p_{\rm I}$ of the system I produced between the two bosons. The model can account well for most features of the e$^+$e$^-$ data, including the non-spherical shape of the BE effect.
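The interference structure of this string-area weight is easy to see numerically: with $M_n = \exp(i\xi A_n)$ and ${\rm Re}(\xi)=\kappa$, ${\rm Im}(\xi)=b/2$, the cross term of the symmetrized amplitude gives, per configuration, $R_{\rm BE}-1 = \cos(\kappa\,\Delta A)/\cosh(b\,\Delta A/2)$. The Python sketch below is illustrative only; the values of $\kappa$ and $b$ and the scan over $\Delta A$ are arbitrary round numbers, not taken from a string simulation, and the average over intermediate systems is dropped.

```python
import numpy as np

def r_be(delta_A, kappa=1.0, b=0.3):
    """Two-particle BE ratio per configuration in the string-area
    picture: interference cos(kappa*dA) damped by cosh(b*dA/2)."""
    return 1.0 + np.cos(kappa * delta_A) / np.cosh(b * delta_A / 2.0)

# Scan the area difference (arbitrary units).
dA = np.linspace(0.0, 20.0, 201)
ratio = r_be(dA)
# r_be(0) = 2 is the fully incoherent limit; at large dA the cosh
# factor suppresses the interference and the ratio approaches 1.
```

This makes explicit why the correlation length set by $b$ (the decay constant in rapidity), rather than the full string length, controls the measured source size.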
More recently, the symmetrization has been generalized to more than 2 identical particles.[@45] This approach deserves strong support. A more detailed account will be given in the next talk.[@18] Conclusions =========== With respect to color reconnection, my view is that VNI is out, that no effect has been observed in WW decay with the variables used so far, but that more discriminative methods, such as those applied in correlation and fluctuation analysis, have to be used. With respect to BE correlations, I conclude that they may pose a problem, but can also be used to study the very space-time development of the WW overlap. Since this first needs a detailed study of the space-time development of a single high-energy ${\rm q}_1\bar{\rm q}_2$ system, I suggest (in parallel to continued direct WW analysis) a four-step program for an analysis of the final data to come: 1\. Look at the Z in much more detail. In fact, a lot more information is available or becoming available than is used by most of the model builders. E.g., the elongated, non-Gaussian shape of the correlation function excludes the present versions of all models, except those of [@18; @40; @41; @45]. The shape of the emission function for a single ${\rm q}_1\bar{\rm q}_2$ system in space-time determines the actual WW overlap. This shape is known for hh and heavy-ion collisions and should be urgently measured at the Z. Higher-order correlations, a density dependence and a transverse-mass dependence are observed and can be expected to discriminate between models. 2\. Tune the models passing these tests on the Z, with and without b-quark contribution. 3\. Check them on a single W. 4\. Only then apply them to WW decay. One important last point: color reconnection and Bose-Einstein effects can (partially) cancel, as e.g. in multiplicity. So, in fully hadronic WW decay, their effects definitely have to be studied simultaneously, in the data, as well as in the models! References {#references .unnumbered} ========== [99]{} B.
Andersson in [*Proc. 7th Int. Workshop on Multiparticle Production*]{}, Nijmegen 1996, eds. R.C. Hwa et al. (World Scientific, Singapore 1997), p.86. G. Gustafson, U. Pettersson and P. Zerwas, [*Phys. Lett.*]{} B [**209**]{} (1988) 90. T. Sjöstrand and V.A. Khoze, [*Z. Phys.*]{} C [**62**]{} (1994) 281; [*Phys. Rev. Lett.*]{} [**72**]{} (1994) 28; V.A. Khoze and T. Sjöstrand, [*Eur. Phys. J.*]{} C [**6**]{} (1999) 271. G. Gustafson and J. Häkkinen, [*Z. Phys.*]{} C [**64**]{} (1994) 659; C. Friberg, G. Gustafson and J. Häkkinen, [*Nucl. Phys.*]{} B [**490**]{} (1997) 289. Š. Todorova-Nová, [*Colour Reconnections in String Model*]{}, DELPHI 96-158 PHYS 651. L. Lönnblad, [*Z. Phys.*]{} C [**70**]{} (1996) 107. B.R. Webber, [*J. Phys.*]{} G [**24**]{} (1998) 287. J. Ellis and K. Geiger, [*Phys. Rev.*]{} D [**54**]{} (1996) 1967; [*Phys. Lett.*]{} B [**404**]{} (1997) 230. G. Abbiendi et al. (OPAL), [*Colour reconnection studies in e$^+$e$^-\to W^+W^-$ at $\sqrt s=183$ GeV*]{}, CERN-EP/98-196. G. Abbiendi et al. (OPAL), [*Experimental properties of gluon and quark jets from a point source*]{}, CERN-EP/99-028. F. Martin, Bose-Einstein Correlations and Color Reconnection in W-pair decays at LEP, these proceedings, and references therein. A. Ballestrero et al., [*J. Phys.*]{} G [**24**]{} (1998) 365. P. Abreu et al. (DELPHI), DELPHI 99-21 CONF 220. S.V. Chekanov, E.A. De Wolf and W. Kittel, [*Eur. Phys. J.*]{} C [**6**]{} (1999) 403. P.D. Acton et al. (OPAL), [*Phys. Lett.*]{} B [**267**]{} (1991) 143; G. Alexander et al. (OPAL), [*Z. Phys.*]{} C [**72**]{} (1996) 389; D. Decamp et al. (ALEPH), [*Z. Phys.*]{} C [**54**]{} (1992) 75; P. Abreu et al. (DELPHI), [*Phys. Lett.*]{} B [**286**]{} (1992) 201 and [*Z. Phys.*]{} C [**63**]{} (1994) 17. B. Lörstad, O.G. Smirnova (DELPHI) in [*Proc. 7th Int. Workshop on Multiparticle Production*]{}, eds. R.C. Hwa et al. (World Scientific, Singapore, 1997) p.42; O.G. Smirnova (DELPHI) in [*Proc. XXVIII Int. Symp.
on Multiparticle Dynamics*]{}, eds. N. Antoniou et al. (World Scientific, Singapore, 1999) to be publ.; B. Lörstad, R. Mureşan, O. Smirnova (DELPHI), [*Two-dimensional Analysis of Bose-Einstein Correlations in e$^+$e$^-$ Annihilation at the $Z^0$ peak*]{}, DELPHI 99-52 CONF 245 and R. Mureşan, priv. comm. J. v. Dalen et al. in [*Proc. 8th Int. Workshop on Multiparticle Production*]{}, eds. T. Csörgő et al. (World Scientific, Singapore, 1999) to be publ.; M. Acciarri et al. (L3), [*Measurement of an elongation of the pion source in Z decays*]{}, CERN-EP/99-050, subm. to Phys. Lett. B. G. Bertsch, M. Gong and M. Tohyama, [*Phys. Rev.*]{} C [**37**]{} (1988) 1896; S. Pratt, T. Csörgő and J. Zimanyi, [*Phys. Rev.*]{} C [**42**]{} (1990) 2646. T. Csörgő and S. Pratt in [*Proc. Workshop on Relativistic Heavy-Ion Physics*]{}, eds. T. Csörgő et al. (KFKI-1991-28/A, Budapest, 1991) p.75. Š. Todorova-Nová, these proceedings. N.M. Agababyan et al. (NA22), [*Phys. Lett.*]{} B [**422**]{} (1998) 359. A. Ster in [*Proc. 8th Int. Workshop on Multiparticle Production*]{}, eds. T. Csörgő et al. (World Scientific, Singapore, 1999) to be publ. N.M. Agababyan et al. (NA22), [*Z. Phys.*]{} C [**59**]{} (1993) 405; N. Neumeister et al. (UA1), [*Z. Phys.*]{} C [**60**]{} (1993) 633; H.C. Eggers, P. Lipa and B. Buschbeck, [*Phys. Rev. Lett.*]{} [**79**]{} (1997) 197. S. Hegyi and T. Csörgő in [*Proc. Budapest Workshop on Relativistic Heavy Ion Collisions*]{}, eds. T. Csörgő et al. (KFKI-1993-11/A, Budapest, 1993) p.47. H. Beker et al. (NA44), [*Phys. Rev. Lett.*]{} [**74**]{} (1995) 3340; T. Alber et al. (NA35), [*Z. Phys.*]{} C [**66**]{} (1995) 77. N.M. Agababyan et al. (NA22), [*Z. Phys.*]{} C [**68**]{} (1995) 229. P. Abreu et al. (DELPHI), [*Phys. Lett.*]{} B [**355**]{} (1995) 415. K. Ackerstaff et al. (OPAL), [*Eur. Phys. J.*]{} C [**5**]{} (1998) 239. C. Albajar et al. (UA1), [*Phys. Lett.*]{} B [**226**]{} (1989) 410; T. Alexopoulos et al. (E735), [*Phys.
Rev.*]{} D [**48**]{} (1993) 1931. G. Alexander et al. (OPAL), [*Z. Phys.*]{} C [**72**]{} (1996) 389. L. Lönnblad, T. Sjöstrand, [*Phys. Lett.*]{} B [**351**]{} (1995) 293; [*Eur. Phys. J.*]{} C [**2**]{} (1998) 165. K. Fiałkowski, R. Wit, [*Z. Phys.*]{} C [**74**]{} (1997) 145. R. Mureşan, O. Smirnova, B. Lörstad, [*Eur. Phys. J.*]{} C [**6**]{} (1999) 629. S. Pratt, [*Phys. Rev. Lett.*]{} [**53**]{} (1984) 1219. A. Białas and A. Krzywicki, [*Phys. Lett.*]{} B [**354**]{} (1995) 134. R. Haywood, Rutherford Lab report RAL 94-074 (1995) V. Kartvelishvili, R. Kvatadze, R. M[ø]{}ller, [*Phys. Lett.*]{} B [**408**]{} (1997) 331. S. Jadach and K. Zalewski, [*Acta Phys. Pol.*]{} B [**28**]{} (1997) 1363. Q.H. Zhang et al., [*Phys. Lett.*]{} B [**407**]{} (1997) 33. K. Fiałkowski, R. Wit, [*Acta Phys. Pol.*]{} B [**28**]{} (1997) 2039; K. Fiałkowski, R. Wit, J. Wosiek, [*Phys. Rev.*]{} D [**58**]{} (1998) 094013. N. Neumeister et al. (UA1), [*Phys. Lett.*]{} B [**275**]{} (1992) 186. B. Andersson and W. Hofmann, [*Phys. Lett.*]{} B [**169**]{} (1986) 364. M.G. Bowler, [*Z. Phys.*]{} C [**29**]{} (1985) 617, [*Phys. Lett.*]{} B [**180**]{} (1986) 289; ibid. [**185**]{} (1987) 205; X. Artru and M. Bowler, [*Z. Phys.*]{} C [**37**]{} (1988) 293. B. Andersson and M. Ringnér, [*Nucl. Phys.*]{} B [**513**]{} (1998) 627; Š. Todorova-Nová and J. Rameš, Simulation of Bose-Einstein effect using space-time aspects of Lund string fragmentation model, Strasbourg preprint IReS97-29 and hep-ph/9710280. [^1]: Invited talk at the XXXIVth Rencontre de Moriond, QCD and High Energy Hadronic Interactions, Les Arcs (France), March 20-27, 1999 (full version)
--- abstract: 'We determine a necessary and sufficient condition for a polynomial over an algebraically closed field $k$ to induce a surjective map on matrix algebras $M_n(k)$ for $n \ge 2$. The criterion is given in terms of algebraic conditions on the polynomial and the proof uses simple linear algebra. Following that, we formulate and prove a corresponding result for entire functions as well.' author: - Shubhodip Mondal date: 'March 17, 2015' title: Surjectivity of maps induced on matrices by polynomials and entire functions --- Introduction ============ Suppose that $A$ is a complex square matrix. The question of the existence of a matrix $B$ such that $B^2 = A$ is a very well-known problem in linear algebra. It turns out that it is not possible to find such a $B$ for every $A$. In this note we extend the above question. Let $f$ be any polynomial over the complex numbers. Is it possible to find a matrix $B$ for every $A$ such that $f(B) = A$? We extend the question even further and ask the same for an entire function $f$ instead of a polynomial. After we prove our results, we can deduce some known results as corollaries, e.g., the surjectivity of the exponential map onto the invertible matrices [@ro] and a question (Picard’s theorem for matrices) originally asked by Pólya and answered by Szegö, which appears as an exercise in [@ps]. We begin with an arbitrary algebraically closed field $k$. We denote the set of polynomials in one variable over $k$ by $k[X]$ and the set of $n \times n$ matrices with entries from $k$ by $\text{M}_{n} (k)$. A polynomial $f = \sum a_i X^i \in k[X]$ induces a map $\text{M}_n(f): \text{M}_n(k) \to \text{M}_n(k)$, $A \mapsto \sum a_i A^i \in \text{M}_n(k)$. By abuse of notation, we denote $\text{M}_n (f) (A)$ by $f(A)$ when $n$ is understood. Let $f' \in k[X]$ denote the derivative polynomial of $f$ and let $Z(f')$ denote the set of zeros of the polynomial $f'$. If $t \in k$ is such that $f^{-1}(t) \subseteq Z(f')$, we say that $t$ is a critical value of $f$.
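Over $k = \mathbb{C}$, the condition can be tested numerically for a given polynomial: any candidate $t$ with nonempty fibre $f^{-1}(t) \subseteq Z(f')$ must equal $f(z)$ for some zero $z$ of $f'$, so it suffices to scan the finitely many critical points. The following Python sketch is illustrative only; it uses floating-point root finding, so containment of the fibre in $Z(f')$ is checked up to a tolerance.

```python
import numpy as np

def has_critical_value(coeffs, tol=1e-6):
    """Decide whether f (coefficients, highest degree first, deg f >= 1)
    has a critical value over C, i.e. some t with f^{-1}(t) inside Z(f').
    Any such t equals f(z) for some root z of f', so we scan those."""
    f = np.poly1d(coeffs)
    crit_pts = f.deriv().roots          # Z(f'); empty for degree-1 f
    for z in crit_pts:
        t = f(z)
        fiber = (f - t).roots           # f^{-1}(t) as roots of f(X) - t
        # t is a critical value iff every point of the fibre is critical.
        if all(min(abs(w - c) for c in crit_pts) < tol for w in fiber):
            return True
    return False

# X^2 has the critical value t = 0 (f^{-1}(0) = {0} = Z(f')),
# while X^3 - 3X has none: each fibre through a critical point
# also contains a non-critical point.
```

The two commented examples mirror the discussion in the text: the square map on matrices fails to be onto, while polynomials without critical values behave like degree-1 maps in this respect.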
[**Theorem 1.**]{} Let $n \ge 2$ be fixed. $\text{M}_n(f): \text{M}_n (k) \to \text{M}_n(k)$ is non-surjective iff there exists a $t\in k$ such that $f^{-1}(t) \subseteq Z(f')$, i.e., iff $f$ has a critical value. [**Note:**]{} The algebraic condition on the polynomial $f$ is independent of $n$. So either $\text{M}_n(f)$ is a surjection for all $n \ge 2$ or a non-surjection for all $n \ge 2$. [**Examples:**]{} 1. Evidently, every polynomial $f \in k[X]$ of degree $1$ induces a surjection on $M_n(k)$ for any $n \ge 2$. 2. Let $f \in k[X]$ be a quadratic polynomial, i.e., $f = aX^2+bX+c$. If char $k \ne 2$, then $Z(f')$ is the singleton $\{-b/2a\}$ and the fibre $f^{-1}(f(-b/2a)) = \{-b/2a\}$, so $f(-b/2a)$ is a critical value. If char $k = 2$ and $b=0$, then $Z(f') = k$. Applying our result in both of these cases, we see that $M_n(f)$ cannot be a surjection. If char $k = 2$ and $b \ne 0$, then $Z(f')$ is empty, hence $M_n(f)$ is a surjection. 3. Let char $k = p$. We show that for any $d>2$, there exists a polynomial $f$ of degree $d$ for which $\text{M}_n(f)$ is a surjection. If $p \mid d$, then consider $f(z) = z^d + z$. Since $Z(f')$ is empty, $f$ induces a surjection. If $p$ does not divide $d$ but $p \mid d-1$, then let $f(z) = z^d + z^{d-1}$. Here $Z(f') = \{0\}$, while $-1 \in f^{-1}(f(0)) = f^{-1}(0)$ and $-1 \notin Z(f')$; hence $f$ has no critical value and induces a surjection. The only remaining case is $\gcd(p, d(d-1)) = 1$. Let $f(z) = z^d - dz$. Since $p$ does not divide $d-1$, $f'(z)$ has $d-1$ distinct roots $\{\zeta_1, \ldots,\zeta_{d-1} \}$. Suppose there were a $t \in k$ such that $f^{-1}(t) \subseteq Z(f')$. Then for some $1 \le r \le d-1$, $\zeta_r$ is a root of the polynomial $h(z) = f(z) - t$. If the multiplicity of $\zeta_r$ in $h(z)$ were at least 3, the multiplicity of $\zeta_r$ in $h'(z) = f'(z)$ would be at least 2. But this is a contradiction, since $f'(z)$ has $d-1$ distinct roots. So the multiplicity of $\zeta_r$ in $h(z)$ is at most 2. Since $\deg(h(z)) = d > 2$, there exists another root of $h(z)$.
So there exists some $s$ with $1 \le s \le d-1$ and $s \ne r$ such that $\zeta_s$ is a root of $h(z)$ (every root of $h$ lies in $f^{-1}(t) \subseteq Z(f')$). But then $t = f(\zeta_r) = f(\zeta_s)$, which implies $(1-d) \zeta_r = (1-d) \zeta_s$, contradicting the fact that $\zeta_r \ne \zeta_s$. Hence there cannot be any $t \in k$ such that $f^{-1}(t) \subseteq Z(f')$, and therefore $f$ induces a surjection. We recall some definitions before going into the proof of Theorem 1. Recall that a Jordan matrix $J$ is a block diagonal matrix $$J= \begin{pmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_p \end{pmatrix}$$ where the Jordan blocks $J_i$ are square matrices of the form $$\begin{pmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots& \ddots & \vdots \\ 0 & 0 & 0 & \lambda_i & 1 \\ 0 & 0 & 0 & 0 & \lambda_i \end{pmatrix}$$ The following facts are quite well-known and their proofs can be found in [@hk], for example.\ 1. If $k$ is algebraically closed, any matrix $A \in \text{M}_n(k)$ is similar to a Jordan matrix $J$. The matrix $J$ is said to be the Jordan normal form of $A$. 2. The Jordan normal form of a matrix is unique up to a permutation of the diagonal blocks. 3. The $\lambda_i$'s are the eigenvalues of $A$, and the number of Jordan blocks corresponding to the eigenvalue $\lambda_i$ in the Jordan normal form of $A$ is the dimension of the kernel of $(A - \lambda_i \cdot I )$. This is a direct consequence of the rank-nullity theorem. Now we prove two lemmas which will be used to prove the theorem.\ [**Lemma 1.**]{} Let $U$ be a Jordan block with $\lambda$ as its eigenvalue. Let $p(X) = \sum_{m=0}^{n} a_m X^m \in k[X]$. Then, $p(U)_{ij} = 0$ for $i > j$ and $$p(U)_{ij} = \sum_{m=0}^{n} a_m \binom{m}{j-i} \lambda ^{m - (j-i)}$$ otherwise. (Here we follow the convention that $\binom{r}{s} = 0$ for $s > r$.)\ [**Proof.**]{} Observe that $U = \lambda I + N$ where $N_{ij} = 1$ if $j- i = 1$, and $N_{ij} = 0$ otherwise.
By linearity, it suffices to prove the lemma in the case of $p(X) = X^k$, which is done below. $$\begin{aligned} U^k _{ij} &= (\lambda I + N)^k _{ij}\\ &= \sum_{r=0}^{k} \binom{k}{r} (\lambda I)^r N^{k-r}_{ij}\\ &= \sum_{r=0}^{k} \binom{k} {r} \lambda ^r N^{k-r}_{ij}\end{aligned}$$ Now, $N^{k-r} _{ij}$ is nonzero only when $j-i = k-r$, i.e., $r = k - (j-i)$.\ Hence the sum equals $$\binom{k} {k - (j-i)} \lambda ^{k- (j-i)} = \binom{k}{j-i} \lambda ^{k - (j-i)}$$ as asserted.\ [**Note:**]{} Irrespective of the characteristic of $k$, if $j-i = 1$, we have $p(U)_{ij}=p'(\lambda)$. If char $k = 0$, then $p(U)_{ij} = \frac{p^{(j-i)}(\lambda)}{(j-i)!}$ for all $1 \le i \le j \le n$. [**Lemma 2.**]{} Let $n \ge 2$. If $U \in \text{M}_n(k)$ is a Jordan block with $\lambda$ as its eigenvalue and $p(X) \in k[X]$, then the Jordan normal form of $p(U)$ has at least two Jordan blocks if and only if $p'(\lambda) = 0$.\ [**Proof.**]{} To prove this lemma, we use the third fact noted earlier. By Lemma 1, $p(\lambda)$ is the only eigenvalue of $p(U)$, and the number of blocks equals the dimension of $\text{Ker} ( p(U) - p(\lambda)\cdot I )$. So if $p' (\lambda) = 0$, then the first two columns of $p(U) - p(\lambda)\cdot I$ are zero by Lemma 1. Hence the rank is at most $n-2$ and, by the rank-nullity theorem, $\dim \text{Ker} ( p(U) - p(\lambda)\cdot I ) \ge 2$.\ Conversely, if $p '(\lambda) \ne 0$, the matrix $p(U) - p(\lambda)\cdot I$ has $n-1$ linearly independent columns. Indeed, by Lemma 1, the first column $c_1$ of $p(U) - p(\lambda)\cdot I$ is zero, and the $k$-th column (for $2 \le k \le n$) has the entry $p'(\lambda)$ in position $k-1$ and zeros below it; in characteristic $0$ it reads $$c_k = \left( \frac{p^{(k-1)}(\lambda)} {(k-1)!} , \ldots, p'(\lambda), 0 , \ldots, 0 \right)^t.$$ Since $p'(\lambda) \ne 0$, the $n-1$ vectors $c_k$, $2 \le k \le n$, are linearly independent. So $\dim \text{Ker} \left(p(U) - p(\lambda)\cdot I \right) = 1$, and $p(U)$ has only one Jordan block. [**Proof of Theorem 1.**]{} Let $t$ be a critical value of $f$.
We take $Y \in M_n(k)$ to be a Jordan block with $t$ as eigenvalue and show that $Y$ is not in the image of $M_n(f)$. Suppose that $f(X') = Y$ for some $X' \in M_n(k)$. Let $X$ be the Jordan normal form of $X'$. Then $f(X)$ is similar to $Y$ (using $f(PAP^{-1}) = P f(A) P^{-1}$). Since $Y$ is a Jordan block, $X$ also has to be a Jordan block; otherwise the Jordan normal form of $f(X)$ would not be a single Jordan block and hence could not be similar to $Y$. Let $u$ be the eigenvalue of $X$. Then $f(u)$ is the only eigenvalue of $f(X)$ (by Lemma 1). Since $f(X)$ is similar to $Y$, their eigenvalues have to be the same. So $f(u) = t$, which implies $u \in f^{-1} (t) \subseteq Z(f')$. Hence $f'(u) = 0$. But then, by Lemma 2, the Jordan normal form of $f(X)$ has more than one Jordan block. Therefore it cannot be similar to the single Jordan block $Y$, a contradiction. Before proving the converse, we prove two more lemmas which will be useful later. [**Lemma 3.**]{} Let $r \ge 1$ and let $Y \in \text{M}_r(k)$ be a Jordan block with eigenvalue $\lambda$ such that $\lambda$ is not a critical value of $f$. Then there exists a Jordan block $X \in M_r(k)$ such that the Jordan normal form of $f(X)$ is $Y$.\ [**Proof.**]{} For $r=1$, the result is clear, since $k$ is algebraically closed. So we assume that $r \ge 2$ in what follows. Since $f^{-1} (\lambda) \not\subseteq Z(f')$, there exists $u \in k$ such that $f(u) = \lambda$ and $f'(u) \ne 0$. Let $X \in \text{M}_r (k)$ be the Jordan block with eigenvalue $u$. By Lemma 2, the Jordan normal form of $f(X)$ is a Jordan block of order $r$ with eigenvalue $\lambda$, so it has to be equal to $Y$.\ [**Lemma 4.**]{} Let $B \in M_n(k)$ be such that the eigenvalues of $B$ are not critical values of $f$. Then $B$ lies in the image of $M_n(f)$.\ [**Proof.**]{} Let $Y$ be the Jordan normal form of $B$. Then $Y = \text{diag} (Y_1, \ldots, Y_p)$ is a Jordan matrix, where the $Y_i$'s are Jordan blocks.
Using the hypothesis on the eigenvalues of $B$ and Lemma 3, for each $i$ there exists a Jordan block $X_i$ such that the Jordan normal form of $f(X_i)$ is $Y_i$. So the Jordan normal form of $\text{diag} (f(X_1), \ldots, f(X_p) )$ is $Y$. Now consider the matrix $X =\text{diag}(X_1, \ldots,X_p)$. Since $$f(X) = f(\text{diag}(X_1, \ldots,X_p)) = \text{diag} (f(X_1), \ldots, f(X_p) ),$$ the Jordan normal form of $f(X)$ is equal to $Y$. In particular $f(X)$ is similar to $Y$ and consequently to $B$, which implies that $B$ is in the image of $M_n(f)$ (using $f(PAP^{-1}) = P f(A) P^{-1}$). Therefore, if $f$ has no critical values, Lemma 4 implies that $M_n(f)$ is a surjection. This proves the converse and finishes the proof of Theorem 1. We have established the following in the first part of the proof of Theorem 1, which we record explicitly as a lemma for later use: [**Lemma 5.**]{} If $t$ is a critical value of $f$ and $Y \in M_n(k)$ is a Jordan block with eigenvalue $t$, then $Y$ does not belong to the image of $M_n(f)$. [**Matrices of entire functions:**]{} A function $f: \mathbb C \to \mathbb C$ is said to be entire if it is holomorphic on the whole of $\mathbb {C}$. Given an entire function, we write it as a power series around zero: $f(z) = \sum_{n =0}^{\infty} a_n z^n$. For an entire function $f$, $f(A) := \sum_{n=0}^{\infty} a_n A^n$ is a well-defined matrix; the reader may refer to [@mf] for a proof. So sending $A$ to $f(A)$ gives a map $\text{M}_n(f) : \text{M}_n(\mathbb C) \to \text{M}_n (\mathbb C)$. Let $f'$ be the derivative of $f$ and let $Z(f')$ denote the set of zeros of $f'$. If $t \in \mathbb C$ is such that $f^{-1}(t) \subseteq Z(f')$, we say that $t$ is a critical value of $f$. Note that if $t \not\in f(\mathbb C)$, then $t$ is (vacuously) a critical value of $f$. We define $D_n(f)$ to be the set of all matrices in $M_n(\mathbb C)$ whose eigenvalues lie in $f( \mathbb{C})$. Lemma 1 easily extends to the case of entire functions.
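As a numerical sanity check of Lemma 1 (which, as just noted, carries over to entire functions), one can compare $p(U)$ computed directly against the closed-form entries; the polynomial, eigenvalue, and block size below are arbitrary illustrative choices:

```python
import numpy as np
from math import comb

lam, n = 2.0, 4
U = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # Jordan block
a = [3.0, -1.0, 0.0, 5.0]                            # p(X) = 3 - X + 5X^3
pU = sum(c * np.linalg.matrix_power(U, m) for m, c in enumerate(a))

# Lemma 1: p(U)_{ij} = sum_m a_m C(m, j-i) lam^(m-(j-i)) for j >= i, 0 below
predicted = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        predicted[i, j] = sum(c * comb(m, j - i) * lam ** (m - (j - i))
                              for m, c in enumerate(a) if m >= j - i)
assert np.allclose(pU, predicted)
assert abs(pU[0, 1] - (-1 + 15 * lam ** 2)) < 1e-9   # the entry p'(lam)
```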
In particular, if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A \in \text{M}_n (\mathbb C)$, then $f(\lambda_1) ,\ldots,f(\lambda_n)$ are the eigenvalues of $f(A)$. So $\text{Im} ( M_n(f)) \subseteq D_n(f)$. Since $f(P A P^{-1}) = P f (A) P^{-1}$ and $D_n(f)$ is closed under conjugation, the discussion for the case of polynomials applies mutatis mutandis to entire functions and yields the corresponding versions of all our lemmas. We also obtain the following theorem: [**Theorem 2.**]{} Let $n\ge2$ be fixed. Then $\text{Im} ( M_n(f)) \ne D_n(f)$ iff there exists a $t \in f( \mathbb C)$ such that $f^{-1}(t) \subseteq Z(f')$. [**Note:**]{} By Picard’s little theorem [@re], if $f$ is a non-constant entire function then at most one complex number does not belong to the image of $f$. So $D_n(f)$ is either $\text{M}_n(\mathbb{C})$ or the set of all matrices none of whose eigenvalues equals $p_f$, for some fixed complex number $p_f$. [**Examples.**]{} 1. Let $f(z) = \exp(z)$. Then $f(\mathbb C) = \mathbb C ^{*}$. Hence $D_n(f) = \text{GL}_n(\mathbb C)$, the set of $n\times n$ invertible matrices over $\mathbb C$. Since $Z( f') = Z(f) = \emptyset$, the matrix exponential is a surjective map from $\text{M}_n(\mathbb C)$ to $\text{GL}_n (\mathbb C)$. 2. $\sin z$ and $\cos z$ are surjective entire functions, but the maps they induce on $\text{M}_n (\mathbb C)$ are not surjective for $n \ge 2$. Indeed, we have $\sin ^2 z + \cos ^2 z = 1$, so $\sin ^{-1} ( \left \{ \pm 1 \right \}) \subseteq Z ( \cos z)$ and $\cos^{-1}( \left \{ \pm 1 \right \}) \subseteq Z(\sin z)$. Hence by Lemma 5 (which remains true for entire functions, as noted in the previous discussion), Jordan blocks with eigenvalues $1$ and $-1$ are not in the image of $\text{M}_n(\sin z)$ or $\text{M}_n(\cos z)$. 3. [**Picard’s theorem for matrices [@ps]:**]{} Let $C_f$ denote the set of all critical values of an entire function $f$.
As a corollary to the Second Fundamental Theorem of Nevanlinna, one obtains that $|C_f| \le 2$. The reader may refer to [@nv] for an exposition on Nevanlinna theory, which also contains the stated corollary. Now, by using Lemma 4 and Lemma 5 (both of which extend to the case of entire functions, as already noted), we obtain that an entire function $f$ has at most $2$ “exceptional values” in the following special sense: $A \in M_n(\mathbb C)$ lies in the image of $M_n(f)$ if none of the eigenvalues of $A$ coincides with an exceptional value of $f$, whereas there are certain matrices whose eigenvalues consist of exceptional values that do not belong to the image of $M_n(f)$. [**Acknowledgements:**]{} I would like to thank Prof. S. Inamdar and Prof. B. Sury at the Indian Statistical Institute for going through the proof and for valuable suggestions. [40]{} A. Frommer; V. Simoncini: *Matrix functions. Model order reduction: theory, research aspects and applications*, 275–303, Math. Ind., 13, Springer, Berlin, 2008. G. Pólya; G. Szegö: *Problems and Theorems in Analysis*, Volume II, 35, Springer-Verlag 1976. K. Hoffman; R.A. Kunze: *Linear Algebra*, 244–249, Prentice-Hall 1961. K. S. Charak: *Value Distribution Theory of Meromorphic Functions*, 13–14, Mathematics Newsletter, Ramanujan Mathematical Society, Vol. 18, March 2009. R. Remmert: *Classical Topics in Complex Function Theory*, 233–235, Graduate Texts in Mathematics 172, Springer-Verlag 1998. W. Rossmann: *Lie groups: an introduction through linear groups*, 20–21, Oxford Graduate Texts in Mathematics 5, Oxford University Press 2002.
--- author: - | Paul Kuin, , and Mat Page\ Mullard Space Science Laboratory - University College London\ E-mail:\ E-mail:\ E-mail: title: The Swift UVOT grism calibration and example spectra --- Introduction ============ The [*Swift*]{} Ultraviolet and Optical Telescope (UVOT) includes two grisms in its filter wheel. One was optimised for the UV, one for the optical, though the optical grism extends further into the UV than is accessible from the ground. Since the [*Swift*]{} launch in November 2004, grism observations of many targets have been made. The easy scheduling of [*Swift*]{} has made UV spectroscopy of many novae and supernovae possible, often within a day of their discovery. Spectroscopy of other variables such as Be-WD binary systems has also been carried out but is more challenging, as these are generally fainter [@6]. Spectroscopy of comets has also been successful in identifying the production rates of various molecules and dust from the spectra of their coma [@7]. The UVOT grisms and their calibration {#cal} ===================================== In Table \[table1\] the relevant parameters of the UVOT grisms are summarized. Both grisms are sensitive below the typical atmospheric cutoff at around 3200Å. Although there are only two grisms in the UVOT filter wheel, each grism has two default observing modes, called ’nominal’ and ’clocked’. In the clocked mode part of the aperture is blocked, so that for part of the image the first orders are free from zeroth-order contamination due to field stars. The photon-counting detectors provide stable, accurate measurements of the count rate over a range of about 5 magnitudes.
                                                [**visible grism**]{}     [**UV grism**]{}
  --------------------------------------------- ------------------------ ----------------------
  first order wavelength range                  2850–6600 Å              1700–5000 Å
  first order wavelength accuracy (1$\sigma$)   22 Å                     9 (18$^a$) Å
  spectral resolution                           100 at 4000 Å            75 at 2600 Å
  no order overlap (first order)                2850–5200 Å              1700–2740 Å
  effective magnitude range                     12–17 mag                10–16 mag
  dispersion (first order)                      5.9 Å/pixel at 4200 Å    3.1 Å/pixel at 2600 Å
  zeroth order $b$-magnitude zeropoint          17.7 mag                 19.0 mag
  effective area error nominal mode             11 %                     15 %
  effective area error clocked mode             15 %                     9 %

The grism provides slitless spectroscopy, and the position of the spectrum on the image is referenced by an anchor point. The position of the anchor on the image is derived from the sky position of the source and the pointing knowledge. One can use either the zeroth orders in the grism image, or an image in a lenticular filter taken along with the grism exposure, to determine the pointing and the anchor position. The lenticular-filter method is preferred, as without it the wavelength errors are currently about twice as large. The wavelength accuracy given in Table \[table1\] is based on using the lenticular filter. The anchor position accuracy is the main source of the wavelength error, with possibly a small additional error at the end of the wavelength range as a result of the non-linear dispersion sensitivity to the anchor. ![[ The effective areas. The effective areas for the clocked modes are only valid for spectra near the centre of the detector. ]{}[]{data-label="effarea"}](effareas.eps){width="76.00000%"} The UVOT microchannel-plate intensified photon-counting detector system is read out every 11 ms, and the image is built up from these frames. If in one such 11 ms time frame multiple photons are incident at the same detector location, only one is registered. Therefore there are lost counts, and this is known as coincidence loss.
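The inversion of this frame-by-frame pile-up is governed by Poisson statistics, as described next. A minimal sketch of such a first-order correction (illustrative only — the calibrated UVOT correction includes further empirical factors) is:

```python
import math

def coincidence_corrected_rate(observed_rate, frame_time=0.011):
    """First-order coincidence-loss correction for a photon-counting
    detector read out every `frame_time` seconds (11 ms for UVOT).
    At most one photon per frame is registered at a given location, so
    observed = (1 - exp(-incident * dt)) / dt, which is inverted below.
    Illustrative sketch only, not the calibrated UVOT correction."""
    p = observed_rate * frame_time               # mean observed counts/frame
    if p >= 1.0:
        raise ValueError("saturated: a count is registered every frame")
    return -math.log(1.0 - p) / frame_time

# A feature observed at 50 counts/s corresponds to ~73 counts/s incident:
assert abs(coincidence_corrected_rate(50.0) - 72.6) < 0.1
```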
The process is governed by Poisson statistics, and a correction can be made. We have calibrated this correction, which affects the brightest sources and/or features in the grism spectra. The correction is made by multiplying the observed count rate by a factor which is accurate to within 20%. The grism throughput varies with the wavelength of the photons. This sensitivity variation is expressed in terms of the grism effective area. The effective area was determined for each grism mode after correcting for coincidence loss. The nominal grism modes show a nearly constant effective area over the whole detector. In the clocked grism modes, because the clocking covers part of the aperture, the effective area varies with the position of the spectrum on the detector. Fig. \[effarea\] shows the resulting effective areas for spectra at the centre of the detector. The calibration of the grisms is described in detail in [@3]. In the next section a sample of the spectra can be found. It should be mentioned that an easy-to-use spectral extraction is now possible with the [UVOTPY]{} software [@2], which was written in the freely available Python language. Documentation for the grism[^1] and the [UVOTPY]{} software[^2] is available on-line. A display of grism spectra ========================== ![[ The evolution of the spectrum of SN2009ip observed with the UV grism. The spectra have not been offset; the brightness variations are real. ]{}[]{data-label="SN2009ip"}](final.eps){width="80.00000%"} SN2009ip -------- The Type IIP supernovae have substantial UV emission which evolves over time as the ejecta cool. A good example is SN2009ip, see Fig. \[SN2009ip\], [@4] which had a failed eruption in 2009 and was then suspected of being a SN impostor, but eventually became a well-observed full-blown supernova in 2012. The required exposure times became longer as the SN became fainter. The last spectra were obtained by summing multiple exposures.
Early exposures had the spectrum positioned at the centre of the detector. By using an offset position, later spectra avoided second-order contamination up to much longer wavelengths, which can clearly be seen in the spectra. Some gaps were created where zeroth orders of field stars contaminated the spectra. ![[Nova RS Oph 2006. The spectrum is a combination of 3 UV grism spectra and 3 visible grism spectra, with omission of the overexposed parts in the visible grism spectrum below 5400Å. The feature shortward of 5400Å  is due to the second order of the 3135Å line in the UV grism, which was cut off above 5400Å. ]{}[]{data-label="nova"}](rs_oph.eps){width="76.00000%"} Many supernova spectra have been observed with the UV grism. Their spectra are being carefully processed and will be made available through the SOUSA archive [@1]. Nova RS Oph 2006 ---------------- Nova RS Oph was observed with both grisms during the 2006 outburst. Combining the data from both grisms, a spectrum from 1750 to 6600Å  has been obtained, as shown in Fig. \[nova\]. Note that the Mg II 2800Å and (marginally) the 3135Å  lines were too bright; the count rate there is too large for a coincidence-loss correction in both grisms. Such a bright feature can also be seen to affect the nearby continuum by depleting the observed count rate there. This is related to the spatial extent of the coincidence loss. ![[ The spectrum of the Active Galactic Nucleus NGC 5548. ]{}[]{data-label="AGN"}](AGN5548.eps){width="76.00000%"} AGN --- The spectrum of the active galactic nucleus NGC 5548 (April 2005) is shown in Fig. \[AGN\]. The spectrum shown is the sum of a UV and a visible grism spectrum. Notable are the emission lines of CIII\], Mg II, and \[OIII\]. Detailed modeling of the NGC 5548 spectrum, in combination with other multi-spectral data, can be found in [@5]. Wolf-Rayet Stars ---------------- Spectra of a number of galactic WR stars have been obtained in a fill-in program.
The brighter WR 52 and WR 86 were observed earlier by IUE, but we have obtained a new spectrum of the fainter WR 121, see Fig. \[wr\]. These spectra span approximately the brightness range accessible with the UVOT grisms. ![Wolf-Rayet stars WR52, WR86, and WR121.[]{data-label="wr"}](wr1.eps){width="76.00000%"} ![Comparison of the grism spectra to HST standards. The HST spectra are in black; the UVOT spectra are coloured.[]{data-label="standards_comp"}](spectra_160_with_reference1.eps){width="100.00000%"} Comparison to HST spectral standards ------------------------------------ For the calibration of the coincidence loss and the effective area, grism spectra have been compared to HST standards. In Fig. \[standards\_comp\] the HST and Swift UVOT spectra are plotted together for white dwarfs and solar-type stars. The solar-type stars do not have much UV flux, while the UV dominates in the white dwarf spectra. This means that for the solar-type stars the second-order contamination can be neglected for wavelengths below 4500Å, while in the white dwarfs the second order can affect the spectrum from 2750Å and above. [99]{} M. Morii et al., *Extraordinary luminous soft X-ray transient MAXI J0158-744 as an ignition of a nova on a very massive O-Ne white dwarf, Ap. J.* [**779**]{} (2013) 118. D. Bodewits et al., *Swift-UVOT grism spectroscopy of comets: a first application to C/2007 N3 (Lulin), A. J.* [**141**]{} (2011) 12. N.P.M. Kuin et al., *The Swift-UVOT ultraviolet and visible grism calibration, MNRAS* [**449(3)**]{} (2015) 2514; *Astro-ph*/[**1501.02433**]{}. N.P.M. Kuin, *UVOTPY: Swift UVOT grism data reduction, Astrophysics Source Code Library* (2014) [**record ascl:1410.004**]{}. P.J. Brown et al., *SOUSA: the Swift Optical/Ultraviolet Supernova Archive, Astrophysics & Space Science* [**354**]{} (2014) 89. R. Margutti et al., *A Panchromatic View of the Restless SN 2009ip Reveals the Explosive Ejection of a Massive Star Envelope, Ap. J.* [**780**]{} (2014) 21. M. Mehdipour et al., *Anatomy of the AGN in NGC 5548 I. A global model for the broadband spectral energy distribution, A & A* [**585A**]{} (2015) 22; *Astro-ph*/[**1501.01188**]{}. [^1]: http://www.mssl.ucl.ac.uk/www\_astro/uvot [^2]: http://github.com/PaulKuin/uvotpy
--- abstract: 'We study the central diffractive production of the (three neutral) Higgs bosons, with a rapidity gap on either side, in an MSSM scenario with CP-violation. We consider the $b\bar{b}$ and $\tau\bar{\tau}$ decay modes for the light $H_1$ boson and the four $b$-jet final state for the heavy $H_2$ and $H_3$ bosons, and discuss the corresponding backgrounds. A direct indication of the existence of CP-violation can come from the observation of either an azimuthal asymmetry in the angular distribution of the tagged forward protons (for the exclusive $pp\to p+H+p$ process) or of a $\sin2\varphi$ contribution in the azimuthal correlation between the transverse energy flows in the proton fragmentation regions for the process with diffractive dissociation of both incoming protons ($pp\to X+H+Y$). We emphasise the advantage of reactions with rapidity gaps (that is, production by pomeron-pomeron fusion) to probe the CP parity and to determine the quantum numbers of the produced central object.' --- IPPP/03/84\ DCPT/03/168\ 12 January 2004\ [**Hunting a light CP-violating Higgs via diffraction at the LHC**]{} <span style="font-variant:small-caps;">V.A. Khoze$^{a,b}$, A.D. Martin$^a$ and M.G. Ryskin$^{a,b}$</span>\ $^a$ Department of Physics and Institute for Particle Physics Phenomenology,\ University of Durham, DH1 3LE, UK\ $^b$ Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg, 188300, Russia\ Introduction ============ It is known that third generation squark loops can introduce sizeable CP violation in the Higgs potential of the Minimal Supersymmetric Standard Model (MSSM), if the soft-supersymmetry-breaking mass parameters of the third generation are complex; see, for example, [@AP]. As a result, the neutral Higgs bosons will mix to produce three physical mass eigenstates with mixed CP parity, which we denote $H_1,H_2$ and $H_3$ in order of increasing mass. A benchmark scenario of maximal CP violation, called CPX, was introduced in Ref. [@CEPW].
In this scenario $$|A_t| = |A_b| = 2\,M_{\rm SUSY}, \qquad |\mu| = 4\,M_{\rm SUSY}, \qquad M_{\tilde{Q}_3,\,\tilde{t}_3,\,\tilde{b}_3} = M_{\rm SUSY}, \qquad |M_3| = 1~{\rm TeV}, \[eq:jan5a\]$$ where $A_f$ are the soft-supersymmetry-breaking trilinear parameters of the third generation squarks and $\mu$ is the supersymmetric higgsino mass parameter. The phenomenological consequences of this model may be quite spectacular. In particular, the $H_1ZZ$ coupling of the lightest Higgs boson can be significantly suppressed; see, for example,  [@CEPW] and references therein. In this case, it was shown that the LEP2 data do not exclude the existence of a light Higgs boson with mass $M_H<60$ GeV (40 GeV) in the minimal SUSY model with $\tan\beta\sim3$–4 (2–3) and CP-violating phase $$\phi_{\rm CPX} \equiv {\rm arg}(A_t) = {\rm arg}(A_b) = {\rm arg}(A_\tau) = {\rm arg}(m_{\tilde g}) = 90^\circ~(60^\circ). \[eq:A1\]$$ Since the $H_1$ couplings to the $W$ and $Z$ gauge bosons become rather small, it would be hard to detect the light Higgs via the processes $e^+e^- \to Z^\star\to ZH_i$ or $e^+e^- \to Z^\star\to H_iH_j$. It is therefore interesting to consider the possibility of observing a light Higgs boson at the LHC or Tevatron collider. However, in general, it will be hard to observe a light Higgs at hadron colliders via the $\bb$ decay mode because, in particular, the transverse momenta of the outgoing $b$ and $\bar b$ jets are not large. As a consequence the signal is swamped by the QCD $\bb$ background[^1]. Therefore it was proposed [@cox] to search for a CP-violating light Higgs boson in the [*exclusive*]{} process $pp\to p + H + p$ at hadron colliders, where the $+$ signs denote the presence of large rapidity gaps. Over the past few years such exclusive diffractive processes have been considered as a promising way to search for manifestations of New Physics in high energy proton-proton collisions; see, for instance, [@KMRcan; @INC; @cox; @KKMRCentr; @DKMOR; @CR].
These processes have unique experimental and theoretical advantages in hunting for Higgs bosons as compared to the traditional non-diffractive approaches. In particular, in the exclusive diffractive reactions the $\bb$ background is suppressed [@Liverpool; @KMRItal; @KMRmm; @DKMOR], and it may be feasible to isolate the signal. In the present paper we discuss central [*exclusive*]{} diffractive production (CEDP) in more detail. We compare the signal and the background for observing a light neutral Higgs boson via the $H_1\to\bb$ and $H_1\to\tau\tau$ decay modes. Then we evaluate the asymmetry arising from the interference of the P-even and P-odd production amplitudes. Note that this asymmetry is the most direct manifestation of CP-violation in the Higgs sector. Finally we consider the exclusive diffractive production of the heavier neutral Higgs bosons, $H_2$ and $H_3$, followed by the decays $H_2$ or $H_3\to H_1H_1\to 4\,b$-jets.\ For numerical estimates, we use the formalism of [@INC] to describe central production in exclusive diffractive processes, and the parameters (that is, the masses, widths and couplings of the Higgs bosons) given by the code “CPsuperH” [@Lee], where we choose $\phi_{\rm CPX}=90^\circ$, $\tan\beta=4$, $M_{\rm SUSY}=0.5$ TeV (that is $|A_f| = 1$ TeV, $|\mu| = 2$ TeV, $|M_{\tilde g}|=1$ TeV) and the charged Higgs boson mass $M_{H^\pm}=135.72$ GeV, so that the mass of the lightest Higgs boson, $H_1$, is $M_{H_1}=40$ GeV.[^2] The exclusive process is shown schematically in Fig. \[fig:1\]. The cross section may be written [@INC] as the product of the effective gluon–gluon luminosity ${\cal L}$ and the square of the matrix element of the subprocess $gg\to H$. Note that the hard subprocess is mediated by quark/squark triangles. For a CP-violating Higgs, there are two different vertices of the Higgs–quark interaction: the scalar Yukawa vertex and the vertex containing the $\gamma_5$ Dirac matrix.
Therefore the $gg\to H$ matrix element contains two terms:[^3] $${\cal M} \;=\; g_S\,(e_1^\perp \cdot e_2^\perp) \;-\; g_P\,\varepsilon^{\mu\nu\alpha\beta}\, e_{1\mu}\, e_{2\nu}\, p_{1\alpha}\, p_{2\beta}/(p_1\cdot p_2), \[eq:1\]$$ where $e^\perp$ are the gluon polarisation vectors and $\varepsilon^{\mu\nu\alpha\beta}$ is the antisymmetric tensor. In (\[eq:1\]) we have used a simplified form of the matrix element which already accounts for gauge invariance, assuming that the gluon virtualities are small in comparison with the Higgs mass. In forward exclusive central production, the incoming gluon polarisations are correlated in such a way that the effective luminosity satisfies the P-even, $J_z=0$ selection rule [@INC; @KMRmm]. Therefore only the first term contributes to the strictly forward cross section. However, at non-zero transverse momenta of the recoil protons, $p_{1,2}^\perp\neq0$, there is an admixture of the P-odd $J_z=0$ amplitude of order $p_1^\perp p_2^\perp / Q_\perp^2$, on account of the $g_P$ term becoming active. Thus we consider non-zero recoil proton transverse momenta, and demonstrate that the interference between the CP-even ($g_S$) and CP-odd ($g_P$) terms leads to a left-right asymmetry in the azimuthal distribution of the outgoing protons. First, we consider the background. Unfortunately, even in the exclusive process, we show below that the QCD $\bb$ background is too large. However, we shall see that it may be possible to observe such a CP-violating light Higgs boson in the $H\to \tau\tau$ decay mode, where the QED background can be suppressed by selecting events with relatively large outgoing proton transverse momenta, say, $p_{1,2}^\perp>300$ MeV. Exclusive diffractive $H_1$ production followed by $\bb$ decay ================================================================ First, we consider the exclusive double-diffractive process $$pp\to p+(H_1\to\bb)+p. \[eq:A2\]$$ The signal-to-background ratio is given by the ratio of the cross sections for the hard subprocesses, since the effective gluon–gluon luminosity ${\cal L}$ cancels out.
The cross section for the $gg\to H$ subprocess[^4] [@INC], $$\hat\sigma(gg\to H) \;\propto\; \frac{\Gamma(H\to gg)}{M_H^3}, \[eq:2\]$$ is approximately independent of $M_H$, as the width[^5], $\Gamma(H\to gg)$, behaves as $\Gamma\sim \alpha_S^2 G_F M_H^3$, where $G_F$ is the Fermi constant. On the other hand, at leading order, the QCD background is given by the $gg\to \bb$ subprocess, $$\frac{d\hat\sigma(gg\to\bb)}{dE_T^2} \;\sim\; \frac{\alpha_S^2}{E_T^4}\,\frac{m_b^2}{E_T^2}, \[eq:3\]$$ where $E_T$ is the transverse energy of the $b$ and $\bar b$ jets. At leading order (LO), the cross section is suppressed by the $J_z=0$ selection rule (which gives rise to the $m_b^2/E_T^2$ factor) in comparison with the inclusive process. This extra suppression factor was crucial to suppress the background. It was shown in [@DKMOR] that it is possible to achieve a signal-to-background ratio of about 3 for the detection of a Standard Model Higgs with mass $M_H\sim 120$ GeV, by selecting $\bb$ exclusive events where the polar angle $\theta$ between the outgoing jets lies in the interval $60^\circ<\theta<120^\circ$, if the missing-mass resolution is $\Delta m_{\rm missing} = 1$ GeV. The situation is much worse for a light Higgs, since the signal-to-background ratio behaves as $$\frac{S}{B} \;\sim\; \frac{G_F^2\, M_\bb^5}{m_b^2\,\Delta M_\bb}, \[eq:4\]$$ where we have used $\Delta\ln M_\bb^2 = 2\Delta M_\bb/M_\bb$. The $M^5$ behaviour comes just from dimensional counting. As the experimental resolution $\Delta M_\bb$ is larger than the width of the Higgs, $\Gamma_H$, the Higgs cross section (in the numerator) is driven by $G_F^2$, while the QCD background is proportional to $m_b^2$ and the size of the $\Delta M_\bb$ interval. To restore the dimensions we have to divide $m_b^2\Delta M_\bb$ by $M_\bb^5$. Thus, in going from $M_H\sim 120$ GeV to $M_H\sim40$ GeV, the expected leading-order QCD $\bb$ background (relative to the signal) increases by a factor of 240 in comparison with that for $M_\bb=120$ GeV. Strictly speaking, there are other sources of background [@DKMOR].
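(As an aside, the mass dependence just quoted can be checked in one line; the masses are those of the text:)

```python
# S/B ~ M^5 at fixed Delta M: relative growth of the LO QCD b-bbar
# background when the Higgs mass drops from 120 GeV to 40 GeV
growth = (120.0 / 40.0) ** 5
assert round(growth) == 243      # the "factor of 240" quoted above
```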
There is the possibility of the gluon jet being misidentified as either a $b$ or a $\bar b$ jet, or a contribution from the NLO $gg\to \bb g$ subprocess, where the extra gluon is not separated from either the $b$ or the $\bar b$ jet. These contributions have no $m_b^2/M_\bb^2$ suppression, and hence increase only as $M_H^{-3}$, and not as $M_H^{-5}$, with decreasing $M_H$. For $M_H\sim 120$ GeV, the LO $\bb$ QCD production was only about 30% of the total background. However, for $M_{H_1}\sim40$ GeV, the LO $\bb$ contribution dominates. Finally, with the cuts of Ref. [@DKMOR], we predict that the cross section of the $H_1$ signal is[^6] $$\sigma^{\rm CEDP}(pp\to p+(H_1\to\bb)+p)\simeq 14~{\rm fb},$$ as compared to the QCD background cross section, with the same cuts[^7], of $$\sigma^{\rm CEDP}(pp\to p+(\bb)+p)\simeq 1.4\frac{\Delta M} {1~{\rm GeV}}~{\rm pb}.$$ That is, the signal-to-background ratio is only $S/B\sim 1/100$, and so even for an integrated luminosity ${\cal L} = 300~{\rm fb}^{-1}$, with $\Delta M = 1$ GeV, the significance of the signal is only $3.7\sigma$. Here we have taken a $K$ factor of $K = 1.5$ for the QCD $\bb$ background, and again used the cuts and efficiencies quoted in Ref. [@DKMOR]. Therefore, to identify a light Higgs, it is desirable to study a decay mode other than $H_1\to\bb$. The next largest mode is $H_1\to\tau\tau$, with a branching fraction of about 0.07. The dependence of the results on the mass of the $H_1$ Higgs boson is illustrated in Table 1. Clearly the cross section decreases with increasing mass. On the other hand, the signal-to-background ratio increases. Therefore, for the case $M_{H_1} = 50$ GeV, we see a slightly improved statistical significance of $4.4\sigma$ for the $\bb$ decay mode.
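The $3.7\sigma$ figure can be reproduced with simple counting statistics. The combined selection efficiency below is our own assumption, chosen so that the quoted significance comes out; the actual cut efficiencies are those of Ref. [@DKMOR]:

```python
from math import sqrt

lum_int = 300.0     # fb^-1, integrated luminosity
sigma_sig = 14.0    # fb, CEDP H1 -> b bbar signal after cuts
sigma_bkg = 1400.0  # fb, QCD b bbar background for Delta M = 1 GeV
eff = 0.33          # assumed combined tagging/selection efficiency (illustrative)

n_sig = sigma_sig * lum_int * eff
n_bkg = sigma_bkg * lum_int * eff
significance = n_sig / sqrt(n_bkg)   # ~3.7 sigma
```

With $S/B\sim 1/100$ the significance grows only like $\sqrt{{\cal L}\,\varepsilon}$, which is why the $\bb$ mode alone is marginal for a light $H_1$.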
  $M(H_1)$ GeV                                  cuts      30      40      50
  --------------------------------------------- ------- ------- ------- -------
  $\sigma(H_1){\rm Br}(\bb)$                    $a$       45      14      6
  $\sigma^{\rm QCD}(\bb)$                       $a$     16000    1400    200
  $A_{\bb}$                                              0.14    0.07    0.04
  $\sigma(H_1){\rm Br}(\tau\tau)$               $a,b$    1.9     0.6     0.3
  $\sigma^{\rm QED}(\tau\tau)$                  $a,b$    0.2     0.1     0.04
  $A_{\tau\tau}$                                $b$      0.2     0.1     0.05
  $M(H_2)$ GeV                                          103.4   104.7   106.2
  $\sigma\cdot{\rm Br}(H_2 \to 2H_1 \to 4b)$    $c$      0.5     0.5     0.5
  $\sigma\cdot{\rm Br}(H_2 \to 2b)$             $a$      0.1     0.1     0.2
  $M(H_3)$ GeV                                          141.9   143.6   146.0
  $\sigma\cdot{\rm Br}(H_3 \to 2H_1 \to 4b)$    $c$      0.14    0.2     0.18
  $\sigma\cdot{\rm Br}(H_3 \to 2b)$             $a$      0.04    0.07    0.1

  : The cross sections (in fb) of the central [*exclusive*]{} diffractive production of $H_i$ neutral Higgs bosons, together with those of the QCD($\bb$) and QED($\tau\tau$) backgrounds. The acceptance cuts applied are (a) the polar angle cut $60^\circ<\theta(b~{\rm or}~\tau)<120^\circ$ in the Higgs rest frame, (b) $p_i^\perp>300$ MeV for the forward outgoing protons and (c) the polar angle cut $45^\circ < \theta (b) < 135^\circ$. The azimuthal asymmetries $A_i$ are defined in eq.(12).

The $\tau\tau$ decay mode
=========================

At the LHC energy, the expected cross section for exclusive diffractive $H_1$ production, followed by $\tau\tau$ decay, is $$\sigma(pp\to p+(H_1\to\tau\tau)+p) \;\simeq\; 1.1~{\rm fb},$$ \[eq:5\] where the $60^\circ<\theta<120^\circ$ polar angle cut has already been included. Despite the low Higgs mass, we note that the exclusive cross section is rather small. As we already saw in (\[eq:2\]), the cross section of the hard subprocess $\hat\sigma(gg\to H)$ is approximately independent of $M_H$. Of course, we expect some enhancement from the larger effective gluon–gluon luminosity ${\cal L}$ for smaller $M_H$. Indeed, it may be approximated by [@INC; @KKMRext] $${\cal L} \;\propto\; \frac{1}{(M_H + 16~{\rm GeV})^{3.3}},$$ \[eq:5a\] which gives an enhancement of about 18.8 (for $M_H=40$ GeV in comparison with that for $M_H=120$ GeV).
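The enhancement quoted above follows directly from (\[eq:5a\]); evaluating the parametrisation gives about 18.7, consistent with the quoted 18.8:

```python
# Effective gg^PP luminosity scaling, L ~ 1/(M_H + 16 GeV)^3.3 (eq. [eq:5a]).
def lumi_ratio(m_low, m_high, offset=16.0, power=3.3):
    """Luminosity enhancement at m_low relative to m_high."""
    return ((m_high + offset) / (m_low + offset)) ** power

enhancement = lumi_ratio(40.0, 120.0)   # ~18.7
```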
On the other hand, in the appropriate region of SUSY parameter space, the CP-even $H\to gg$ vertex, $g_S$, is almost 2 times smaller [@cox; @Lee] than that of a Standard Model Higgs, giving a suppression of 4. Also the ratio $B(H\to \tau\tau)/B(H\to\bb)$ gives a further suppression of about 12. Although the $\tau\tau$ signal has the advantage that there is practically no QCD background[^8], exclusive $\tau^+\tau^-$ events may be produced by $\gamma\gamma$ fusion, see Fig. \[fig:2\]. The cross section for this latter QED process is appreciable. It is enhanced by two large logarithms, $\ln^2(t_{\rm min}R_p^2)$, arising from the integrations over the transverse momenta of the outgoing protons (that is, of the exchanged photons). The lower limit of the logarithmic integrals is given by $$t_{\rm min} \;\simeq\; -(x\,m_p)^2 \;\simeq\; -\left(\frac{M_{\tau\tau}\,m_p}{\sqrt s}\right)^2,$$ \[eq:6\] while the upper limit is specified by the slope $R_p^2$ of the proton form factor. To suppress the QED background, one may select events with relatively large transverse momenta of the outgoing protons. For example, if $p_{1,2}^\perp > 300$ MeV, then the cross section for the QED background, for $M_{\tau\tau}=40$ GeV, is about[^9] $$\sigma_{\rm QED}(pp\to p + \tau\tau + p) \;\simeq\; 0.1~{\rm fb},$$ \[eq:7\] while the signal (\[eq:5\]) contribution is diminished by the cuts, $p^\perp_{1,2}>300$ MeV, down to 0.6 fb. Thus, assuming an experimental missing mass resolution of $\Delta M\sim 1$ GeV, we obtain a healthy signal-to-background ratio of $S/B \sim 6$ for $M_{H_1} \sim 40$ GeV. Note that in all the estimates given above, we include the appropriate soft survival factors $S^2$, that is, the probabilities that the rapidity gaps are not populated by the secondaries produced in the soft rescattering. The survival factors were calculated using the formalism of Ref. [@KMRsoft]. Moreover, here we account for the fact that only events with proton transverse momenta $p_{1,2}^\perp>300$ MeV were selected.
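The numbers behind the $S/B\sim 6$ estimate, together with the smallness of $|t_{\rm min}|$ that drives the QED double logarithm, are easy to check; we take $\sqrt s = 14$ TeV for the LHC (an assumption of this sketch):

```python
m_p = 0.938        # GeV, proton mass
root_s = 14000.0   # GeV, assumed LHC centre-of-mass energy
m_h1 = 40.0        # GeV

x = m_h1 / root_s                  # momentum fraction carried by the photon
abs_t_min = (x * m_p) ** 2         # ~7e-6 GeV^2: tiny, hence large ln^2 factors

sigma_sig_cut = 0.6   # fb, tau tau signal after the p_perp > 300 MeV cuts
sigma_qed_cut = 0.1   # fb, QED background after the same cuts (Delta M ~ 1 GeV)
s_over_b = sigma_sig_cut / sigma_qed_cut   # ~6
```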
In particular, for the QED process, we have $S^2\simeq 0.7$, rather than the value $S^2\simeq 0.9$, which would occur in the absence of the cuts on the proton momenta[^10].

Azimuthal asymmetry of the outgoing protons
============================================

A specific prediction, in the case of a CP-violating Higgs boson, is the asymmetry in the azimuthal $\varphi$ distribution of the outgoing protons, caused by the interference of the CP-odd and CP-even vertices, that is, between the two terms in (\[eq:1\]). The polarisations of the incoming active gluons are aligned along their respective transverse momenta, $\vec{Q}_\perp - \vec{p}_1^{\,\perp}$ and $\vec{Q}_\perp + \vec{p}_2^{\,\perp}$. Hence the contribution caused by the second term, $g_P$, is proportional to the vector product $$\vec{n}_0 \cdot (\vec{p}_1^\perp \times \vec{p}_2^\perp) \sim \sin\varphi,$$ where $\vec{n}_0$ is a unit vector in the beam direction, $\vec{p}_1$. The sign of the angle $\varphi$ is fixed by the four-dimensional structure of the second term in (\[eq:1\]); see [@KKMRCentr] for a detailed discussion. Of course, due to the P-even, $J_z=0$ selection rule, this (P-odd) contribution is suppressed in the amplitude by $p_1^\perp p_2^\perp/Q_\perp^2$, in comparison with that of the P-even $g_S$ term. Note that there is a partial compensation of the suppression due to the ratio $g_P/g_S \sim 2$. Also, the soft survival factors $S^2$ are higher for the pseudoscalar and interference terms than for the scalar term. An observation of the azimuthal asymmetry may therefore be a direct indication of the existence of CP-violation (or P-violation in the case of CEDP) in the Higgs sector[^11]. Neglecting the absorptive effects (of soft rescattering), we find, for example, an asymmetry $$A \;=\; \frac{\displaystyle\int_0^{\pi}\frac{d\sigma}{d\varphi}\,d\varphi\;-\;\int_{\pi}^{2\pi}\frac{d\sigma}{d\varphi}\,d\varphi}{\displaystyle\int_0^{2\pi}\frac{d\sigma}{d\varphi}\,d\varphi} \;=\; \frac{2\,{\rm Re}(g_S\, g_P^*)\, r_{S/P}\,(2/\pi)}{|g_S|^2 + |r_{S/P}\, g_P|^2/2}.$$ Here the (numerically small) parameter $r_{S/P}$ reflects the suppression of the P-odd contribution due to the selection rule discussed above.
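To get a feel for the size of the effect, the asymmetry formula can be evaluated for real couplings with $g_P/g_S\sim 2$; the value of $r_{S/P}$ below is our own illustrative choice, tuned to give an asymmetry near the ten-percent level:

```python
from math import pi

g_s, g_p = 1.0, 2.0   # real couplings with |g_P/g_S| ~ 2
r = 0.035             # r_{S/P}: assumed illustrative value of the small parameter

numerator = 2.0 * (g_s * g_p) * r * (2.0 / pi)
denominator = g_s ** 2 + (r * g_p) ** 2 / 2.0
asymmetry = numerator / denominator   # ~0.09
```

For small $r_{S/P}$ the asymmetry is linear in $r_{S/P}$, so even a modest P-odd admixture produces a measurable left-right effect.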
At the LHC energy, in the absence of rescattering effects, $A\simeq 0.09$ for $M_{H_1}=40$ GeV. However, we find that soft rescattering tends to wash out the azimuthal distribution, and to weaken the asymmetry. Besides this, the real part of the rescattering amplitude multiplied by the imaginary part of the pseudoscalar vertex $g_P$ (with respect to $g_S$) gives a negative contribution. So, finally, we predict $A\simeq 0.07$. For the lower Tevatron energy, the admixture of the P-odd amplitude is larger, while the probability of soft rescattering is smaller. Therefore, at $\sqrt s=2$ TeV, we find that the asymmetry is twice as large, $A\sim 0.17$. On the other hand, the effective $gg^{PP}$ luminosity ${\cal L}$, and the corresponding cross section of $H_1$ (CEDP) production, is 10 times smaller (for $M_{H_1}=40$ GeV). The asymmetries expected at the LHC, with and without the cut $p_{1,2}^\perp>300$ MeV on the outgoing protons, are shown for different $H_1$ masses in Table 1. The asymmetry decreases with increasing Higgs mass: first, due to the decrease of the $|g_P|/|g_S|$ ratio in this mass range and, second, due to the extra suppression of the P-odd amplitude arising from the factor $p_1^\perp p_2^\perp/Q_\perp^2$, in which the typical value of $Q_\perp$ in the gluon loop increases with mass.

Heavy $H_2$ and $H_3$ Higgs production with $H_1H_1$ decay
===========================================================

Another possibility to study the Higgs sector in the CPX scenario is to observe central exclusive diffractive production (CEDP) of the heavy neutral $H_2$ and $H_3$ Higgs bosons, using the $H_2,H_3\to H_1 + H_1$ decay modes. For the case we considered above ($\tan\beta=4$, $\phi_{\rm CPX}=90^\circ$, $M_{H_1}=40$ GeV), the masses of the heavy bosons are $M_{H_2}=104.7$ GeV and $M_{H_3}=143.6$ GeV. At the LHC energy, the CEDP cross sections of the $H_2$ and $H_3$ bosons are not too small – $\sigma^{\rm CEDP}=1.5$ and $0.9\ {\rm fb}$ respectively.
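Folding the CEDP cross sections into the $H_i\to H_1H_1\to 4b$ chain is simple arithmetic (the branching fractions are those quoted in the text for this CPX parameter point):

```python
sigma_h2, sigma_h3 = 1.5, 0.9          # fb, CEDP cross sections at the LHC
br_h2_to_h1h1, br_h3_to_h1h1 = 0.84, 0.54
br_h1_to_bb = 0.92

rate_h2_4b = sigma_h2 * br_h2_to_h1h1 * br_h1_to_bb ** 2   # ~1.1 fb
rate_h3_4b = sigma_h3 * br_h3_to_h1h1 * br_h1_to_bb ** 2   # ~0.4 fb
```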
When the branching fractions, Br$(H_2\to H_1H_1)=0.84$, Br$(H_3\to H_1H_1)=0.54$ and Br$(H_1\to\bb)=0.92$, are included, we find $$\sigma(pp\to p+(H\to\bb\ \bb)+p)=1.1\ {\rm and}\ 0.4\ {\rm fb}$$ for $H_2$ and $H_3$ respectively. Thus there is a chance to observe, and to identify, the central exclusive diffractive production of all three neutral Higgs bosons, $H_1, H_2$ and $H_3$, at the LHC. The QCD background for exclusive diffractive production of four $b$-jets is significantly less than the signal. Other decay channels are also worth mentioning. For a very light boson, say $M_{H_1} = 30$ GeV, it is also possible to produce four $b$-jets via the cascade $H_3\to H_2H_1\to 4b$-jets. However, the expected cross section is about 0.02 fb, which looks too low to be useful. A larger cross section is expected for the direct $H_2\to\bb$ decay, where the branching fraction Br$(H_2\to\bb)=0.14$ for $M_{H_1}=40$ GeV leads to the cross section $\sigma(p+(H_2\to\bb)+p)$ = 0.2 fb. Note that in this case, we only need to tag two, and not four, $b$-jets, so the detection efficiency is about a factor of 1/0.6 larger. The situation is even better for $M_{H_1} = 50$ GeV, where Br$(H_2\to\bb)=0.25$ and $\sigma(p+(H_2\to\bb)+p)$ = 0.4 fb. If it is possible to compare the $4b$- and $2b$-jet signals, then this will allow a probe of the nature of the $H_2$ boson. Finally, for the heaviest boson, $H_3$, the decay mode $H_3\to H_1+Z$ is not small, with a branching fraction of Br$(H_3\to H_1+Z)=0.27$ for $M_{H_1}=40$ GeV.

Central Higgs production with double diffractive dissociation
=============================================================

To enhance the Higgs signal we study a less exclusive reaction than $pp\to p + H + p$, and allow both of the incoming protons to dissociate. In Ref. [@INC] it was called double diffractive [*inclusive*]{} production, and was written $$pp\;\to\; X + H + Y.$$ \[eq:CIDP\] Now there is no form factor suppression, as the initial protons are destroyed.
Also the cross section is larger due to the increased $p_i^\perp$ phase space. Moreover, the cross section is also enhanced because we no longer have the P-even selection rule, and so the pseudoscalar $gg\to H$ coupling, $g_P$, becomes active. The cross section for inclusive production, via the central double dissociation (CDD) process, is found by using (i) the effective $gg^{PP}$ luminosity of Ref. [@INC], (ii) the probability, $S^2$, that the gaps survive soft rescattering, calculated using model II of [@KKMR], and (iii) the opacity of the proton given in [@KMRsoft]. Typical results, for the LHC energy, are shown in Table 2. For the Tevatron energy, the cross section appears too small; even for a light boson of mass $M_{H_1}=30$ GeV we have $\sigma\cdot{\rm Br}(H_1\to \tau\tau)<1.5$ fb, while the QED background is about 15 fb.

  $M(H_1)$ GeV                                     30          40          50
  --------------------------------------------- ----------- ----------- -----------
  $\sigma(H_1){\rm Br}(\tau\tau)$                19 (4)      6 (2)       2.6 (0.8)
  $\sigma^{\rm QED}(\tau\tau)$                   66 (2.2)    30 (1.5)    15 (0.9)
  $M(H_2)$ GeV                                   103.4       104.7       106.2
  $\sigma\cdot{\rm Br}(H_2 \to 2H_1 \to 4b)$     4 (2)       4 (2)       3.5 (2)
  $M(H_3)$ GeV                                   141.9       143.6       146.0
  $\sigma\cdot{\rm Br}(H_3 \to 2H_1 \to 4b)$     1.5 (0.8)   2.2 (1.2)   2 (1.1)

  : The cross sections (in fb) for the central production of $H_i$ neutral Higgs bosons by [*inclusive*]{} double diffractive dissociation, together with that of the QED($\tau\tau$) background. A polar angle acceptance cut of $60^\circ<\theta(b~{\rm or}~\tau)<120^\circ$ ($45^\circ<\theta(b)<145^\circ$) in the Higgs rest frame is applied for the case of the $H_1$ ($H_2,H_3$) bosons. The numbers in brackets correspond to the imposition of the additional cut of $E^\perp_i>7$ GeV for the proton dissociated systems.

Of course, the missing mass method cannot be used to measure the mass of the Higgs boson for central production with double dissociation (CDD).
Therefore the mass resolution will not be as good as for CEDP; we evaluate the background for $\Delta M$ = 10 GeV. Moreover, in the absence of the $J_z=0$ selection rule, the LO QCD $\bb$-background is not suppressed. Hence we study only the $\tau\tau$ decay mode for the light boson, $H_1$, and the four $b$-jet final state for the heavy $H_2$ and $H_3$ bosons. The background to the $H_1\to\tau\tau$ signal arises from the $\gamma\gamma\to\tau\tau$ QED process. It is evaluated in the equivalent photon approximation. The photon flux, $$N_\gamma \;=\; \frac{\alpha}{\pi}\int\frac{dx}{x}\int\frac{dq^2}{q^2}\,F_2(x,q^2),$$ \[eq:35a\] was calculated using LO MRST2001 partons [@MRST01], with the integral over the photon transverse momentum running from $q=m_\rho$ up to $q=M_{\tau\tau}/2$. The lower limit is approximately where the $\gamma^*p$ cross section becomes flat and loses its $\sigma(\gamma^*p) \sim 1/q^2$ behaviour. The upper limit reflects the dependence of the $\gamma\gamma\to\tau\tau$ matrix element on the virtuality of the photon. From Table 2 we see that the $H_1$ signal for inclusive diffractive production, (\[eq:CIDP\]), exceeds the exclusive signal by more than a factor of ten. On the other hand, the signal-to-background ratio is worse; $S/B_{\rm QED}$ is about 1/5. Moreover, there could be a huge background due to the misidentification of a gluon dijet as a $\tau\tau$-system. To make this QCD background satisfy $B_{\rm QCD}<S$ would require the probability of misidentifying a gluon as a $\tau$ lepton to be $P_{g/\tau}<1/1500$. For the four $b$-jet signals of the heavy $H_2$ and $H_3$ bosons, the QCD background can be suppressed by requiring each of the four $b$-jets to have polar angle in the interval $(45^\circ,135^\circ)$, in the frame where the four $b$-jet system has zero rapidity. However, in the absence of a good mass resolution, that is, with only[^12] $\Delta M=10$ GeV, we expect the four $b$-jet background to be 3-5 times the signal. Nevertheless these signals are still feasible, with cross sections of the order of a few fb.
For example, with an integrated luminosity of ${\cal L} = 300~{\rm fb}^{-1}$ and an efficiency of $4b$-tagging of $(0.6)^2$ [@DKMOR], we predict about 400 $H_2$ events and 200 $H_3$ events. Taking the background-to-signal ratio to be $B/S =4$, we then have a statistical significance of about $10\sigma$ for $H_2$ and $6\sigma$ for $H_3$. The inclusive CDD kinematics allow a study of CP-violation, and the separation of the contributions coming from the scalar and pseudoscalar $gg\to H$ couplings, $g_S$ and $g_P$ of (\[eq:1\]), respectively. Indeed, the polarizations of the incoming active gluons are aligned along their transverse momenta, $\vec{Q}_\perp-\vec{p}^{\,\perp}_1$ and $\vec{Q}_\perp+\vec{p}^{\,\perp}_2$. Hence the $gg\to H$ fusion vertices take the forms $$V_S \;=\; (\vec{Q}_\perp-\vec{p}^{\,\perp}_1)\cdot(\vec{Q}_\perp+\vec{p}^{\,\perp}_2)\; g_S, \qquad V_P \;=\; \left[(\vec{Q}_\perp-\vec{p}^{\,\perp}_1)\times(\vec{Q}_\perp+\vec{p}^{\,\perp}_2)\right]\cdot\vec{n}_0\; g_P,$$ where $g_S$ and $g_P$ are defined in (\[eq:1\]). For the exclusive (CEDP) process the momenta $p^\perp_{1,2}$ were limited by the proton form factor, and typically $Q^2\gg p^2_{1,2}$. Thus $$V_S \;\simeq\; g_S\, Q^2_\perp, \qquad V_P \;\simeq\; g_P\,(\vec{p}^{\,\perp}_1\times\vec{p}^{\,\perp}_2)\cdot\vec{n}_0.$$ On the contrary, for double diffractive dissociation production (CDD), $Q^2 < p^2_{1,2}$. In this case $$V_S \;\simeq\; g_S\, p^\perp_1 p^\perp_2\,\cos\varphi, \qquad V_P \;\simeq\; g_P\, p^\perp_1 p^\perp_2\,\sin\varphi.$$ Moreover, we can select events with large outgoing transverse momenta of the dissociating systems, say $p^\perp_{1,2}> 7$ GeV, in order to make reasonable measurements of the directions of the vectors $\vec{p}_1^{\,\perp}=\vec{E}^\perp_1$ and $\vec{p}_2^{\,\perp}=\vec{E}^\perp_2$. Here $E^\perp_{1,2}$ are the transverse energy flows of the dissociating systems of the incoming protons. At LO, this transverse energy is carried mainly by the jet with minimal rapidity in the overall centre-of-mass frame. The azimuthal angular distribution has the form[^13] $$\frac{d\sigma}{d\varphi} \;=\; \sigma_0\,(1+ a\,\sin 2\varphi + b\,\cos 2\varphi),$$ where the coefficients are given by $$a \;=\; \frac{2\,{\rm Re}(g_S\, g_P^*)}{|g_S|^2+|g_P|^2}, \qquad b \;=\; \frac{|g_S|^2-|g_P|^2}{|g_S|^2+|g_P|^2}.$$ Note that the coefficient $a$ arises from scalar-pseudoscalar interference, and reflects the presence of a T-odd effect.
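For real couplings the two coefficients satisfy $a^2+b^2=1$, so either one fixes $|g_P/g_S|$; a quick consistency check (our own, using the $H_2$ entries of Table 3):

```python
from math import sqrt

# For real g_S, g_P: a = 2r/(1+r^2), b = (1-r^2)/(1+r^2) with r = g_P/g_S,
# hence a^2 + b^2 = 1 and b alone determines r.
b = 0.90                            # H_2 entry of Table 3
r_squared = (1.0 - b) / (1.0 + b)   # (g_P/g_S)^2
a = 2.0 * sqrt(r_squared) / (1.0 + r_squared)   # ~0.44, matching Table 3
```

The entries are thus mutually consistent, and the T-odd interference coefficient $a$ is the key observable.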
Its observation would signal an explicit CP-violating mixing in the Higgs sector. On the other hand, in the absence of CP-violation, the sign of the coefficient $b$ reveals the CP-parity of the new boson[^14]. The predictions for the coefficients are given in Table 3 for different values of the Higgs mass, namely $M_{H_1}$ = 30, 40 and 50 GeV. The coefficients are of appreciable size and, given sufficient luminosity, may be measured at the LHC. Imposing the cuts $E^\perp_i > 7$ GeV reduces the cross sections by about a factor of two, but does not alter the signal-to-background ratio, $S/B_{\rm QCD}$. However, the cuts do give increased suppression of the QED $\tau\tau$ background and now, for the light $H_1$ boson, the ratio $S/B_{\rm QED}$ exceeds one. We emphasize here that, since we have relatively large $E^\perp$, the angular dependences are quite insensitive to the soft rescattering corrections.

  $M(H_1)$ GeV        30                  40                  50
  -------------- --------- --------- --------- --------- --------- ---------
                  $a$       $b$       $a$       $b$       $a$       $b$
  $H_1$          $-0.53$   $-0.73$   $-0.56$   $-0.55$   $-0.53$   $-0.33$
  $H_2$          0.44      0.90      0.41      0.91      0.37      0.92
  $H_3$          $-0.38$   0.92      $-0.40$   0.91      $-0.42$   0.90

  : The coefficients $a$ and $b$ in the azimuthal distribution $d\sigma/d\varphi = \sigma_0 (1+ a\sin 2\varphi + b \cos 2\varphi)$, where $\varphi$ is the azimuthal angle between the $E^\perp$ flows of the two proton dissociated systems. If there were no CP-violation, then the coefficients would be $a=0$ and $|b|=1$.

Conclusions
===========

We have evaluated the cross sections, and the corresponding backgrounds, for the central double-diffractive production of the (three neutral) CP-violating Higgs bosons at the LHC. This scenario is of interest since even a very light boson, of mass about 30 GeV, is not experimentally ruled out for some range of the MSSM parameters.
We have studied the production of the three states, $H_1, H_2, H_3$, both with exclusive kinematics, $pp\to p + H + p$, which we denoted CEDP, and in double-diffractive reactions where both of the incoming protons may be destroyed, $pp\to X + H + Y$, which we denoted CDD. Recall that a '+' sign denotes the presence of a large rapidity gap. Proton taggers are required in the former processes, but not in the latter. Typical results are summarised in Tables 1 and 2, respectively. The cross sections are not large, but should be accessible at the LHC. The uncertainties in the calculation of the exclusive cross sections were discussed in Refs. [@KKMRCentr; @KKMRext]. For the light $H_1$ boson, where the contribution from the low $Q_\perp$ region is more important, the uncertainty is much larger. Recall that for the semi-inclusive CDD processes the effective gluon-gluon ($gg^{PP}$) luminosity is calculated using the LO formula. Thus we cannot exclude rather large NLO corrections. On the other hand, for CDD, the values of the cross sections are practically insensitive to the contributions from the infrared domain. Moreover, with the [*skewed*]{} CDD kinematics, the NLO BFKL corrections are expected to be much smaller than in the forward (CEDP) case. So we may expect the uncertainty of the predictions to be about a factor of 3 to 4, or even better. It would be very informative to measure the azimuthal angular dependence of the outgoing proton systems, for both the CEDP and CDD processes. Such measurements would reveal explicitly any CP-violating effect, via the interference of the scalar and pseudoscalar $gg\to H$ vertices.
Finally, we recall the advantages of diffractive, as compared to non-diffractive, production of Higgs bosons:

i\) a much better Higgs mass resolution is obtained by the missing mass method for exclusive events;

ii\) a clean environment, which may be important to identify four $b$-jets with transverse momenta $p_T\sim M_{H_1}/2\sim 20$ GeV (for the non-diffractive process, at the LHC energy, the QCD background may be too large);

iii\) a possibility to measure the CP-properties of the Higgs boson and to detect CP-violation (note that the asymmetries $A_{\bb}$ and $A_{\tau\tau}$ are explicit manifestations of CP violation at the quark level).

Next, assuming that P and C parities are conserved:

iv\) the existence of the P-even, $J_z=0$ selection rule for LO central exclusive diffractive production, which means that we observe an object of natural parity (most probably $0^+$); the analysis of the azimuthal angular distribution of the outgoing protons may give additional information about the spin of the centrally produced object [@KKMRCentr];

v\) in addition, we know that an object produced by the diffractive process (that is, by Pomeron-Pomeron fusion) has positive C-parity, and is an isoscalar and a colour singlet[^15].

The properties listed above should help to distinguish the $H_2$ and $H_3$ four-jet decay channels from the production of a SUSY particle, followed by a ‘cascade’-like decay.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Jeff Forshaw, Risto Orava, Albert de Roeck, Sasha Nikitenko, Apostolos Pilaftsis and, especially, Brian Cox and Jae Sik Lee for useful discussions. ADM thanks the Leverhulme Trust for an Emeritus Fellowship and MGR thanks the IPPP at the University of Durham for hospitality. This work was supported by the UK Particle Physics and Astronomy Research Council, by grant RFBR 04-02-16073 and by the Federal Program of the Russian Ministry of Industry, Science and Technology SS-1124.2003.2.

[XX]{} A. Pilaftsis, Phys. Rev.
[**D58**]{} (1998) 096010; Phys. Lett. [**B435**]{} (1998) 88;\ A. Pilaftsis and C.E.M. Wagner, Nucl. Phys. [**B553**]{} (1999) 3. M. Carena, J. Ellis, A. Pilaftsis and C.E.M. Wagner, Phys. Lett. [**B495**]{} (2000) 155; Nucl. Phys. [**B586**]{} (2000) 92. S.Y. Choi, K. Hagiwara and J.S. Lee, Phys. Lett. [**B529**]{} (2002) 212. M. Carena, J. Ellis, S. Mrenna, A. Pilaftsis and C.E.M. Wagner, Nucl. Phys. [**B659**]{} (2003) 145. B.E. Cox, J.R. Forshaw, J.S. Lee, J. Monk and A. Pilaftsis, Phys. Rev. [**D68**]{} (2003) 075004; J.R. Forshaw, [arXiv:hep-ph/0305162]{}. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C14**]{} (2000) 525. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C23**]{} (2002) 311. A.B. Kaidalov, V.A. Khoze, A.D. Martin and M.G. Ryskin, [arXiv:hep-ph/0307064]{}. A. De Roeck, V.A. Khoze, A.D. Martin, R. Orava and M.G. Ryskin, Eur. Phys. J. [**C25**]{} (2002) 391. C. Royon, [arXiv:hep-ph/0308283]{} and references therein. V.A. Khoze, A.D. Martin and M.G. Ryskin, [hep-ph/0006005]{}, in [*Proc. of 8th Int. Workshop on Deep Inelastic Scattering and QCD (DIS2000)*]{}, Liverpool, eds. J. Gracey and T. Greenshaw (World Scientific, 2001), p.592. V.A. Khoze, A.D. Martin and M.G. Ryskin, Nucl. Phys. Proc. Suppl. [**99B**]{} (2001) 188. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C19**]{} (2001) 477, erratum [**C20**]{} (2001) 599. J.S. Lee, A. Pilaftsis, M. Carena, S.Y. Choi, M. Drees, J.R. Ellis and C.E.M. Wagner [arXiv:hep-ph/0307377]{}. A. Dedes and S. Moretti, Phys. Rev. Lett. [**84**]{} (2000) 22; Nucl. Phys. [**B576**]{} (2000) 29;\ S.Y. Choi and J.S. Lee, Phys.Rev. D61 (2000) 115002. A.D. Martin and M.G. Ryskin, Phys. Rev. [**D64**]{} (2001) 094017. A.B. Kaidalov, V.A. Khoze, A.D. Martin and M.G. Ryskin, [arXiv:hep-ph/0311023]{}, Eur. Phys. J. [**C**]{} in press. D. Cavalli et al., ATLAS Internal note, PHYS-NO-051, 1994;\ R. Kinnunen and A. Nikitenko, CMS Note 1997/002. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. 
Phys. J. [**C18**]{} (2000) 167. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C24**]{} (2002) 459. S.Y. Choi and J.S. Lee, Phys. Rev. [**D62**]{} (2000) 036005. B. Grzadkowski, J.F. Gunion, Phys. Lett. [**B294**]{} (1992) 361. M. Kramer, J.H. Kuhn, M.L. Stong and P.M. Zerwas, Z. Phys. [**C64**]{} (1994) 21. G.J. Gounaris and G.P. Tsirigoti, Phys. Rev. [**D56**]{} (1997) 3030; Erratum, ibid. [**D58**]{} (1998) 059901. A.B. Kaidalov, V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C21**]{} (2001) 521. A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, Eur. Phys. J. [**C23**]{} (2002) 73. V. Del Duca, W. Kilgore, C. Oleari, C. Schmidt and D. Zeppenfeld, Nucl. Phys. [**B616**]{} (2001) 367; Phys. Rev. Lett. [**87**]{} (2001) 122001. Belle Collaboration: C.-K. Choi et al., [arXiv:hep-ex/0309032]{}; K. Abe et al., [arXiv:hep-ex/0308029]{};\ CDF Collaboration, D. Acosta et al., [arXiv:hep-ex/0312021]{}. [^1]: The prospects for observing such a light Higgs in conventional search channels, at the Tevatron and the LHC, were studied in [@CHL; @CEMPW]. [^2]: The values are chosen to provide an ’optimistic’ scenario for the observation of a CP-violating Higgs boson in CEDP. [^3]: For calculations of $g_S$ and $g_P$ in the MSSM with CP-violation see, for example, [@DMCL]. [^4]: In [@INC] we denoted the initial state by $gg^{PP}$ to indicate that each of the incoming gluons belongs to colour-singlet Pomeron exchange. Here this notation is assumed to be implicit. [^5]: Strictly speaking, we should consider CP-even and CP-odd contributions to the width separately, but it does not change the conclusion qualitatively. [^6]: Note that our CEDP cross section is about two times larger than that quoted in [@cox]. This difference occurs mainly because we use an improved approximation for the unintegrated gluon densities. To be specific, we use eq.(26) of [@MR01], rather than the simplified formula (4) of Ref.[@DKMOR] used in [@cox]. 
In addition, we allow for the transverse momenta $p^\perp_{1,2}$ of the recoil protons in the gluon loop of Fig.1. For smaller boson masses, $M_H\sim 40$ GeV, this leads to a steeper $p^\perp_{1,2}$ dependence of the amplitude, which emphasizes larger values of the impact parameter, $b_\perp$, where the absorptive effects are weaker. Therefore we obtain a larger soft survival factor, $S^2\simeq 0.029$, at the LHC energy. However, recall that a factor of 2 difference is within the accuracy of the approach [@DKMOR; @KKMRCentr]. [^7]: Here and in what follows we assume that the proton and $b$-tagging efficiencies and the missing mass resolution in the case of a light Higgs boson are the same as for the case of $M_{\rm Higgs}=120$ GeV [@DKMOR]. This assumption is probably not fully justified. In particular, the missing mass resolution and proton tagging efficiency may worsen at lower masses. [^8]: There may be background caused by a pair of high $E_T$ ($\sim 15$ GeV) gluons being misidentified as a $\tau\tau$ pair. To suppress such a background down to the level of $S/B\sim 1$, the probability, $P_{g/\tau}$, that a gluon is misidentified as a $\tau$ must be less than about 1/750, assuming that the missing mass resolution is $\Delta M=1$ GeV. In [@Zepp], for an inclusive event, the probability $P_{g/\tau}$ was evaluated as 1/500. Thus it seems reasonable to suppose that the probability $P_{g/\tau}<1/750$ can be achieved in the much cleaner environment of an exclusive diffractive (CEDP) event. [^9]: As we consider sizeable $p_{1,2}^\perp$, we account for both the $F_1$ and $F_2$ electromagnetic proton form factors. [^10]: Without the momenta cuts, the main QED contribution comes from small $p_{1,2}^\perp$, that is, from large impact parameters $b^\perp\gg R_p$, where the probability of soft rescattering is already small; see [@KMRphot] for details. [^11]: In Ref.
[@CL] (see also [@GG; @KKSZ; @GT]) a suggestion, along the same lines, was made for the explicit observation of CP-violating effects. There, various polarization asymmetries in two-photon fusion Higgs production processes were discussed. In the absence of absorptive effects, the azimuthal asymmetry $A$ may be expressed, via gluon helicity amplitudes, in the same way as the quantity $A_2$ of [@CL], written in terms of photon helicities. [^12]: However, this resolution is still sufficient to separate the $H_2$ and $H_3$ bosons. [^13]: In the CP-conserving case, an idea similar in spirit was considered in Ref. [@DKOSZ], where it was suggested to measure the azimuthal correlations of the two tagged jets in inclusive Higgs production. However, the proof of the feasibility of such an approach in non-diffractive processes requires further detailed studies of the possible dilution of the effect due to the parton showers in the inclusive environment of the jets. [^14]: Note that we may search for any new pseudoscalar boson produced by the CDD process by looking for the corresponding azimuthal distribution, $d\sigma/d\varphi \sim {\rm sin}^2\varphi$. [^15]: An instructive topical example, which illustrates the power of CEDP as a spin-parity analyser, concerns the determination of the quantum numbers of the recently discovered $X(3872)$ resonance [@chi3872]. A knowledge of its C-parity is important to understand its nature. If it is a C = +1 state with spin-parity $0^+$ or $2^+$, then it may even be seen in CDD production with a large rapidity gap on either side of its J/$\psi~\pi^+\pi^-$ decay. Forward proton tagging would, of course, allow a better spin-parity analysis.
--- abstract: 'Active sensing refers to the process of choosing or tuning a set of sensors in order to track an underlying system in an efficient and accurate way. In a wireless environment, among the several kinds of features extracted by traditional sensors, the information carried by the communication channel about the state of the system can be used to further boost the tracking performance and save energy. A joint tracking problem which considers sensor measurements and the communication channel together for tracking purposes is set up and solved. The system is modeled as a partially observable Markov decision problem and the properties of the cost-to-go function are used to reduce the problem complexity. In particular, upper and lower bounds to the optimal sensor selection choice are derived and used to introduce sub-optimal sensing strategies. Numerical results show the advantages of using the channel as an additional way to improve the tracking precision and reduce the energy costs.' author: - bibliography: - 'bibliography.bib' title: 'Improved Active Sensing Performance in Wireless Sensor Networks via Channel State Information - Extended Version' --- communication channel, features, measurements, wireless body area network, wireless sensor networks, partially observable Markov decision problems, optimization, policies.

Introduction
============

Tracking is a common application in Wireless Sensor Networks (WSNs) and in the Internet of Things (IoT) world, in which different devices collaborate to detect a common underlying state of a system. Indeed, due to the dense nature of these networks and the limited computational capabilities and energy availability of the devices, redundant data from multiple sensors can be combined to improve the tracking accuracy.
Among all the physical quantities that can be exploited for tracking, the use of *Channel State Information* (CSI) as a way to improve the detection performance has been studied only marginally to date and only in certain contexts (e.g., target localization [@Paul2009]). However, since in many applications the channel is influenced by the underlying system, it can be exploited in a tracking system to improve its performance and reduce the energy costs. The goal of this paper is to investigate a *joint* active sensing optimization problem in which, in addition to the standard sensor measurements, the communication channel is also exploited. Examples of active sensing applications are compressive spectrum sensing [@Sun2013], object tracking [@Fuemmeler2011], health care [@Zhou2008] and sparse signal recovery [@Wei2015]. Moreover, active sensing has been used in Wireless Body Area Networks (WBANs) [@Thatte2011; @Ghasemzadeh2013; @Zois2013; @Quwaider2008; @Archasantisuk2015], which we will also use as a practical scenario for our model. In a WBAN, different in-body and on-body sensors obtain noisy measurements of a quantity (e.g., features of the electrocardiogram or data from a multi-axes accelerometer) related to the current underlying and unknown physical activity of a subject (e.g., sitting, standing, walking, running, etc.), and collaborate by transmitting the gathered data to a common Fusion Center (FC) (e.g., a mobile phone in a pocket). The role of the FC is to combine the measurements and assess the current activity. This may be particularly useful for health-care applications (e.g., for epidemiologic/clinical research purposes [@Zhou2008]) or for monitoring the daily physical activity of a patient. Differently from other tracking systems (e.g., radars), in which reliable, accurate and expensive devices are employed, in a WSN the tracking task is not trivial because of energy and/or computational constraints and inaccuracies in the measurements.
Indeed, since the devices are generally battery powered, energy efficiency is a key aspect to consider, and prolonging the network lifetime represents one of the most studied challenges (see [@Dietrich2009]). Among the traditional approaches to maximize the battery life, prediction-based schemes with selective sensor activation have received recent attention. The basic idea is to dynamically activate and use only the most useful subset of nodes in the network and turn off the others to save energy. The active nodes transmit their data to the FC, which estimates the underlying activity of the system and decides the portion of the network to activate in the next time slot. This problem is known as the *Sensor Selection Problem* (SSP) [@Chen2007; @Chepuri2015] and aims at minimizing the energy consumption of the sensors and the fusion center, while still providing a reliable estimate. While obtaining accurate estimates is generally possible by increasing the number of gathered measurements, this also incurs high energy costs both at the sensor [@Krishnamurthy2002; @Krishnamurthy2007; @Krishnamurthy2013; @Au2012] and at the FC side [@Thatte2011; @Zois2013]. In the KNOW-ME network considered in [@Thatte2011], the FC represents the performance bottleneck because of the data reception costs. The model in [@Thatte2011] was extended in [@Zois2013] to the case of an underlying dynamic process described by a Markov Chain, and a POMDP framework was employed to solve the problem. Our paper extends [@Zois2013] to the case in which a communication channel is considered. The POMDP approach had been previously considered by Krishnamurthy [@Krishnamurthy2002; @Krishnamurthy2007] but, differently from our approach, either without accounting for the possibility of choosing multiple sensors simultaneously and using discrete measurements, or focusing on the problem of optimizing the number of samples to gather rather than on the sensor selection [@Krishnamurthy2013].
Reference [@Au2012] set up a POMDP model for selecting sensor resources for context classification under an average energy consumption constraint, but considering only the two control actions of “activating” or “deactivating” all sensors. The authors of [@Savage2009] studied a constrained sensor scheduling problem in the Gauss-Markov framework, and explicitly derived the optimal strategy. In [@Savage2009], the underlying system evolves in a Gaussian fashion and is not subject to a Markov evolution. Also, cooperation between sensors is not taken into account. However, most of the previous works did not explicitly consider the communication channel between sensors and fusion center [@Quevedo2013], nor was it used to improve the tracking performance. Using the channel as an additional feature results in several advantages. First, the channel information is intrinsically related to the reception of the sensor measurements, thus no additional energy costs are required to obtain it. Second, there may be situations in which the traditional measurements are less informative than the communication channel. Finally, when a sensor measurement is lost or highly corrupted by noise because of a bad channel condition, it is still possible to gather information about the underlying state of the system (e.g., if for certain activities it is likely that a bad channel is observed, whereas others almost always experience good channel conditions, then a packet loss may be very informative about the underlying activity). The communication channel was explicitly taken into account in [@Gupta2006], where Gupta *et al.* showed how the packet drop probability influences the optimal sensor selection policy. In [@Wu2013], the authors studied the relation between estimation quality and communication rate and showed that it is possible to significantly reduce the communication rate with only a small degradation of the sensing performance.
However, neither [@Gupta2006] nor [@Wu2013] considered a POMDP framework or optimized the number of measurements to gather from every sensor. Limited bandwidth constraints of the channel were considered in [@Shi2012], in which an optimal scheduler was built for two independent Gauss-Markov systems. Similarly, [@Yang2013] studied bandwidth constraints in a multi-dimensional system for both the homogeneous and heterogeneous cases. The possibility to explicitly exploit the Received Signal Strength Indicator (RSSI) for body activity tracking purposes in a real scenario was described in [@Quwaider2008]. Recently, [@Archasantisuk2015] introduced a machine learning technique to achieve high detection accuracy using the RSSI. References [@Quwaider2008] and [@Archasantisuk2015] did not focus on the sensor selection optimization problem and used *only* CSI for state detection. Instead, in this work, we jointly include the communication channel and the traditional sensor measurements in a POMDP framework. This makes the optimization more challenging to solve, and addressing the problem requires new techniques and results. The main contributions of the paper can be summarized as follows. We set up an *active sensing* model (see  \[fig:model\]) in which, at every time step, a set of sensors is chosen to track the underlying state of the system. The optimal performance in terms of tracking quality and energy consumption is derived exploiting a POMDP framework in the infinite horizon setup, so that only a stationary scheduler needs to be stored in the nodes. We assume that the sensors are *passive* (i.e., they do not influence the underlying state) and *heterogeneous* in terms of sensing cost, quality of the measurements and communication channel. As in [@Thatte2011; @Zois2013], and differently from many previous works [@Krishnamurthy2002; @Krishnamurthy2007; @Krishnamurthy2013; @Au2012], we consider an energy constrained fusion center.
Since using the channel as an additional source of information adds a layer of complexity to the problem, we simplify it in several steps. First, with the goal of reducing the size of the belief space [@Shani2013], we use the concavity properties of the cost-to-go function, $J$, to derive a lower bound to $J$ (Corollary \[corol:J\_concave\]) [@Smallwood1973]. This, in conjunction with an upper bound based on the tangents of $J$, can be used to (1) estimate the cost of the optimal sensing strategy, (2) introduce sub-optimal probabilistic tracking strategies and (3) classify these sub-optimal solutions. Then, we decompose the tracking procedure into a simpler set of operations (Theorem \[thm:belief\_simplified\]) and cast the multi-dimensional problem into simpler one-dimensional sub-problems. Finally, we propose a sub-optimal greedy technique which further simplifies the optimization process. Numerical results support the importance of considering the channel and the measurements jointly and validate our theoretical results. While we use the WBAN case as a baseline for the numerical evaluations, the proposed model adopts very general assumptions and the theoretical framework can be applied to a large variety of applications (e.g., object tracking, indoor environmental monitoring, etc.). The paper is organized as follows. Section \[sec:system\_model\] defines the system model we analyze and introduces the notation. In Sections \[sec:tracking\] and \[sec:optimization\] we describe the tracking procedure and the optimization problem. Section \[sec:analysis\] presents the main results of the paper in terms of bounds and sub-optimal policies. Section \[sec:numerical\_results\] shows our numerical results. Finally, Section \[sec:conclusions\] concludes the paper.
System Model {#sec:system_model}
============

| **Symbol** | **Meaning** |
| --- | --- |
| $\mathrm{a}_{u,s,k},\mathrm{A}_{s}$ | scalar referring to the $u$-th measurement of sensor $s$ in slot $k$, and the corresponding random variable |
| $\mathbf{a}_k, \mathbf{A}$ | set of all the scalars $\mathrm{a}_{u,s,k}$ in slot $k$, and the corresponding random variable |
| $\mathcal{A}_k = \{\mathbf{a}_0,\ldots,\mathbf{a}_k\}$ | set of all the scalars $\mathrm{a}_{u,s,k}$ up to slot $k$ |
| $s,k$ | sensor and slot indices |
| $u = 1,\ldots,N_s^{\mathbf{u}_{k-1}}$ | measurement index of a single sensor in a single slot |
| $\nu = 1,\ldots,\sum_s N_s^{\mathbf{u}_{k-1}}$ | measurement index over all sensors in a single slot (Section \[subsec:sol\_I1\]) |
| $\iota = 1,\ldots,m$ | feature index |
| $\mathrm{X}_k$ | MC state in slot $k$ |
| $e_i$ | a generic MC state (e.g., sit, walk, run, etc.) |
| $n$ | MC size |
| $N_{\rm tot}$ | maximum number of measurements in a single slot |
| $N_s^{\mathbf{u}_{k-1}}$ | number of measurements of sensor $s$ under policy $\mathbf{u}_{k-1}$ |
| $\mathbf{u}_{k-1}$ | decision applied in slot $k$ |
| $\mathrm{y}_{u,s,k},\mathrm{z}_{u,s,k}$ | $u$-th measurement of sensor $s$ in slot $k$ at the sensor and FC side, respectively |
| $\mathrm{h}_{u,s,k},\hat{\mathrm{h}}_{u,s,k}$ | real and estimated channel gain of the $u$-th transmitted measurement of sensor $s$ in slot $k$ |
| $c_{\iota,s,k}$ | $\iota$-th feature of sensor $s$ in slot $k$ |
| $\mathrm{w}_{\rm ch}$ | channel estimation error |
| $\mathrm{w}_{\rm noise}$ | channel AWGN noise |

We study a system composed of $S$ sensors which track an unknown underlying system and transmit their data to a common fusion center. Time is slotted and the state of the system in slot $k$, $\mathrm{X}_k$, follows a Markov evolution according to a transition probability matrix $\mathbf{T}$ of size $n\times n$.
The state $\mathrm{X}_k$ assumes values in the set $\{e_1,\ldots,e_n\}$ (e.g., $e_1 = $sit, $e_2 = $run, etc.). At every time step, sensor $s = 1,\ldots,S$ measures a feature related to the current state of the system. The measurement is noisy and follows a normal distribution $\mathcal{N}(m_{s,i},Q_{s,i})$, where $m_{s,i}$ and $Q_{s,i}$ are the mean and variance of the feature measured by sensor $s$ when the underlying state of the system is $e_i$ [@Thatte2011]. Since the features are state dependent, we exploit them to track the underlying system. In a single time slot, sensor $s$ extracts $N_s^{\mathbf{u}_{k-1}}$ measurements (or samples) according to the centralized decision $\mathbf{u}_{k-1}$ made by the fusion center in the previous time slot. We denote by $\mathbf{u}_{k-1}$ the column vector with entries $\mathbf{u}_{k-1} = [N_1^{\mathbf{u}_{k-1}},\ldots,N_S^{\mathbf{u}_{k-1}}]^{\rm T}$. The maximum number of samples extracted in a single time slot is $N_{\rm tot} \geq \sum_{s = 1}^S N_s^{\mathbf{u}_{k-1}}$. We assume that the $N_s^{\mathbf{u}_{k-1}}$ measurements are statistically independent and identically distributed within the same slot, but the model may be extended as in [@Thatte2011] to consider correlation between different samples. The set $\mathbf{y}_k = \{\mathrm{y}_{u,s,k}, \forall u = 1,\ldots,N_s^{\mathbf{u}_{k-1}}, \forall s=1,\ldots,S\}$ represents all measurements collected by all sensors in time slot $k$ and $u$ is the index of a measurement in a single time slot. The set $\mathbf{y}_k$ can be seen as the realization of a random variable $\mathbf{Y} = [\mathrm{Y}_1,\ldots,\mathrm{Y}_S]$ (throughout, random variables are denoted with capital letters). 
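As a concrete illustration, the state evolution and the per-slot measurement model above can be sketched in a few lines of Python. All numerical values here (the transition matrix, the per-state feature means and variances, and the sample count) are hypothetical placeholders for a single-sensor, three-state chain, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state activity chain (e.g., e_1 = sit, e_2 = walk, e_3 = run).
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
m = np.array([[1.0, 3.0, 6.0]])   # m[s, i]: mean of the feature of sensor s in state e_i
Q = np.array([[0.5, 0.5, 1.0]])   # Q[s, i]: variance of the same feature

def step(x, n_samples):
    """Advance the chain by one slot and draw N_s^{u} i.i.d. Gaussian
    measurements for each sensor s, as prescribed by the decision vector."""
    x_next = rng.choice(len(T), p=T[x])
    y = {s: rng.normal(m[s, x_next], np.sqrt(Q[s, x_next]), size=n_samples[s])
         for s in range(m.shape[0])}
    return x_next, y

x, y = step(0, n_samples=[4])   # one sensor (S = 1), four samples in this slot
```

Each call to `step` mirrors one time slot: a Markov transition under $\mathbf{T}$ followed by $N_s^{\mathbf{u}_{k-1}}$ conditionally i.i.d. draws from $\mathcal{N}(m_{s,i},Q_{s,i})$.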
The probability density function (pdf) of $\mathbf{Y}$ when the state of the system $e_i$ and the number of samples per sensor $\mathbf{u}_{k-1}$ are given, is denoted by (we adopt a notation similar to [@Zois2013])  $$\begin{aligned} \label{eq:f_Y} f_{\mathbf{Y}}(\mathbf{y}_k | e_i,\mathbf{u}_{k-1}) = \prod_{s = 1}^S \prod_{u = 1}^{N_s^{\mathbf{u}_{k-1}}} f_{\mathrm{Y}_{s}}(\mathrm{y}_{u,s,k} | e_i),\end{aligned}$$ where we exploited the independence of the measurements among different sensors and over time, given the underlying state of the system. In the following, we consider the notation indicated in Table \[tab:parameters\]. Channel {#subsec:channel} ------- Every measurement $\mathrm{y}_{u,s,k}$ is sent over a wireless communication link to the fusion center. The FC receives a channel-distorted and noisy version of the measurement  $$\begin{aligned} \mathrm{z}_{u,s,k} = \mathrm{h}_{u,s,k} \mathrm{y}_{u,s,k} + \mathrm{w}_{\rm noise},\end{aligned}$$ where $\mathrm{w}_{\rm noise}$ is a realization of a random variable $\mathrm{W}_{\rm noise} \sim \mathcal{N}(0,\sigma_{\rm noise}^2)$, and $\mathrm{h}_{u,s,k}$ denotes the channel gain. We assume that only a noisy version of $\mathrm{h}_{u,s,k}$ is available at the FC, namely $\hat{\mathrm{h}}_{u,s,k}$, defined as follows [@Ubaidulla2011 Section II]  $$\begin{aligned} \label{eq:delta_h} \mathrm{h}_{u,s,k} = \hat{\mathrm{h}}_{u,s,k} + \mathrm{w}_{\rm ch},\end{aligned}$$ where $\mathrm{W}_{\rm ch} \sim \mathcal{N}(0,\sigma_{\rm ch}^2)$ summarizes the channel estimation errors. The received signal at the receiver side is  $$\begin{aligned} &\mathrm{z}_{u,s,k} = \hat{\mathrm{h}}_{u,s,k} \mathrm{y}_{u,s,k} + \mathrm{y}_{u,s,k} \mathrm{w}_{\rm ch} + \mathrm{w}_{\rm noise}, \label{eq:z_usk}\\ &\mathrm{Z}_s = \hat{\mathrm{h}}_{u,s,k} \mathrm{Y}_s + \mathrm{Y}_s \mathrm{W}_{\rm ch} + \mathrm{W}_{\rm noise}, \label{eq:Z_s}\end{aligned}$$ where Equation  is expressed in the random variables domain.
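The FC-side observation model just described can be simulated directly. The sketch below draws the estimation error $\mathrm{w}_{\rm ch}$ and the AWGN term $\mathrm{w}_{\rm noise}$ and applies them to a batch of transmitted measurements; the standard deviations and the estimated gain are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_ch, sigma_noise = 0.1, 0.2   # illustrative std. dev. of w_ch and w_noise

def receive(y, h_hat):
    """FC-side observation z = h_hat*y + y*w_ch + w_noise.

    The true gain is h = h_hat + w_ch, so the FC observes a distorted,
    noisy copy of each transmitted measurement y_{u,s,k}."""
    y = np.asarray(y, dtype=float)
    w_ch = rng.normal(0.0, sigma_ch, size=y.shape)
    w_noise = rng.normal(0.0, sigma_noise, size=y.shape)
    return h_hat * y + y * w_ch + w_noise

z = receive([2.0, 2.1, 1.9], h_hat=0.9)   # three samples through one channel estimate
```

Note that the multiplicative term `y * w_ch` is what makes the exact pdf of $\mathrm{Z}_s$ non-Gaussian, motivating the robust and intermediate designs discussed next.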
For estimation purposes, since the FC only knows the term $\mathrm{z}_{u,s,k}$ (and *not* $\mathrm{y}_{u,s,k}$), we need to compute its pdf. Two approaches can be considered. **Non-robust Design.** The simplest choice is to neglect $\mathrm{w}_{\rm ch}$ in Equation . When the state of the system, $e_i$, and the estimated channel gain, $\hat{\mathrm{h}}_{u,s,k}$, are given, the resulting pdf is  $$\begin{aligned} \mathrm{Z}_{s} | e_i, \hat{\mathrm{h}}_{u,s,k} \sim \mathcal{N}(\hat{\mathrm{h}}_{u,s,k} m_{s,i},\hat{\mathrm{h}}_{u,s,k}^2 Q_{s,i} + \sigma_{\rm noise}^2).\end{aligned}$$ **Robust Design.** When $\mathrm{w}_{\rm ch}$ is explicitly taken into account, the pdf of $\mathrm{Z}_s$ given $e_i$ and $\hat{\mathrm{h}}_{u,s,k}$ is no longer Gaussian. In this case, we have to consider the product of two Gaussian random variables, $\mathrm{Y}_{s}$ and $\mathrm{W}_{\rm ch}$. First, note that $\mathrm{Y}_s | e_i \sim \mathcal{N}(m_{s,i},Q_{s,i})$ can be decomposed as $m_{s,i} + \mathrm{Y}_s^0 | e_i$, with $\mathrm{Y}_s^0 \sim \mathcal{N}(0,Q_{s,i})$. Thus the product $\mathrm{Y}_s \mathrm{W}_{\rm ch}$ given $e_i$ can be written as  $$\begin{aligned} \label{eq:Y_W_plus_Y0} \mathrm{Y}_s \mathrm{W}_{\rm ch} | e_i &= m_{s,i} \mathrm{W}_{\rm ch} + \mathrm{Y}_s^0 \mathrm{W}_{\rm ch} | e_i.\end{aligned}$$ The pdf of the second term $\mathrm{Y}_s^0 \mathrm{W}_{\rm ch} | e_i$ is  $$\begin{aligned} \label{eq:f_YW} f_{\mathrm{Y}_s^0\mathrm{W}_{\rm ch}}(x | e_i) = \frac{K_0\Big(\frac{|x|}{\sqrt{Q_{s,i}}\sigma_{\rm ch}}\Big)}{\pi \sqrt{Q_{s,i}} \sigma_{\rm ch}},\end{aligned}$$ where $K_0(\cdot)$ is the modified Bessel function of the second kind.
The pdf of $\mathrm{Z}_{s}$ can be written as the convolution of Equation  and a Gaussian pdf with mean $\hat{\mathrm{h}}_{u,s,k}m_{s,i}$ and variance $\hat{\mathrm{h}}_{u,s,k}^2 Q_{s,i} + m_{s,i}^2 \sigma_{\rm ch}^2 + \sigma_{\rm noise}^2$, and, in general, is not Gaussian.\ While the non-robust approach is simpler because it only involves Gaussian random variables, it is less accurate than the robust one. An intermediate solution consists of using a modified robust approach obtained by neglecting the term $\mathrm{Y}_s^0 \mathrm{W}_{\rm ch}$ in , which can be shown to have a small impact on $\mathrm{Y}_s$. We adopt this intermediate approach in the remainder of the paper, so that using a Gaussian framework is still possible. The channel estimation error is taken into account via the term $m_{s,i} \mathrm{W}_{\rm ch}$:  $$\begin{aligned} \label{eq:Z_s_distro} \mathrm{Z}_{s} | e_i, \hat{\mathrm{h}}_{u,s,k} \sim \mathcal{N}(\hat{\mathrm{h}}_{u,s,k} m_{s,i},\hat{\mathrm{h}}_{u,s,k}^2 Q_{s,i} + m_{s,i}^2\sigma_{\rm ch}^2 + \sigma_{\rm noise}^2).\end{aligned}$$ Tracking {#sec:tracking} ======== At the end of time slot $k$, the fusion center knows the sequence $\mathcal{F}_k$, defined as  $$\begin{aligned} \mathcal{F}_k = \{\mathcal{Z}_k,\hat{\mathcal{H}}_k\},\end{aligned}$$ where $\mathcal{Z}_k \triangleq \{\mathbf{z}_{0},\ldots,\mathbf{z}_{k}\}$ is the temporal sequence of the received measurements and $\hat{\mathcal{H}}_k \triangleq \{\hat{\mathbf{h}}_{0},\ldots,\hat{\mathbf{h}}_{k}\}$. The goal of the system is to track the underlying hidden Markov process (i.e., estimate $\mathrm{X}_k$ in every time slot $k$). Towards this goal, we exploit the sequence $\mathcal{F}_k$ as follows.
At the end of every time slot, we update the *belief* of the state of the system defined as  $$\begin{aligned} \label{eq:p_k_k_x} \mathbf{p}_{k|k} \triangleq [\mathbb{P}(\mathrm{X}_k = e_1 | \mathcal{F}_k), \ldots, \mathbb{P}(\mathrm{X}_k = e_n | \mathcal{F}_k)],\end{aligned}$$ which represents the estimated probability of observing $e_1,\ldots,e_n$ at the FC. To determine , we exploit the sensor measurements as well as the channel observations. The belief $\mathbf{p}_{k|k}$ can be optimally evaluated as[^1]  $$\begin{aligned} \label{eq:p_k_k_Bayes} \mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k) = \frac{\mathbb{P}(\mathrm{X}_k = e_i, \mathcal{F}_k | \mathcal{F}_{k-1})}{\mathbb{P}(\mathcal{F}_k | \mathcal{F}_{k-1})}.\end{aligned}$$ The denominator can be computed with the following sum  $$\begin{aligned} &\mathbb{P}(\mathcal{F}_k | \mathcal{F}_{k-1}) = \sum_{\ell = 1}^n \mathbb{P}(\mathrm{X}_k = e_\ell,\mathcal{F}_k|\mathcal{F}_{k-1}). \label{eq:denominator}\end{aligned}$$ Therefore, since every term of the sum in  has the same form as the numerator in , we only focus on $\mathbb{P}(\mathrm{X}_k = e_i, \mathcal{F}_k|\mathcal{F}_{k-1})$. By definition, we have $\mathcal{F}_k = \{\mathcal{Z}_k,\hat{\mathcal{H}}_k\} = \mathcal{F}_{k-1} \cup \{\mathbf{z}_{k},\hat{\mathbf{h}}_{k}\}$, therefore  \[eq:P\_x\_h\_z\_hat\_h\_u\] $$\begin{aligned} \mathbb{P}(\mathrm{X}_k = e_i, \mathcal{F}_k|\mathcal{F}_{k-1}) &= \mathbb{P}(\mathrm{X}_k = e_i, \mathbf{Z} = \mathbf{z}_{k}, \hat{\mathbf{H}} = \hat{\mathbf{h}}_{k} | \mathcal{F}_{k-1}) \\ & = \mathbb{P}(\mathbf{Z} = \mathbf{z}_{k} | \mathrm{X}_k = e_i, \hat{\mathbf{H}} = \hat{\mathbf{h}}_{k}, \mathcal{F}_{k-1}) \label{eq:P_x_h_z_hat_h_u1} \\ & \ \times \mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}}_{k} | \mathrm{X}_k = e_i, \mathcal{F}_{k-1}) \label{eq:P_x_h_z_hat_h_u2}\\ & \ \times \mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_{k-1}).
\label{eq:P_x_h_z_hat_h_u3}\end{aligned}$$ We now analyze the three terms - separately (see  \[fig:ori\] for a graphical representation of the estimation scheme). **First Term .** First, we remark that the policy $\mathbf{u}_{k-1}$ is decided in slot $k-1$ and applied in slot $k$. Given the sequence $\mathcal{F}_{k-1}$, $\mathbf{u}_{k-1}$ is uniquely determined (i.e., $\mathbf{u}_{k-1}$ is a deterministic function of $\mathcal{F}_{k-1}$). The map between $\mathcal{F}_{k-1}$ and $\mathbf{u}_{k-1}$ is discussed in Subsection \[subsec:MDP\]. Consider now . Given the underlying state of the system, $\mathbf{Z}$ depends upon $\mathcal{F}_{k-1}$ only through $\mathbf{u}_{k-1}$. Similarly to Equation , and using the results of the robust design in Section \[subsec:channel\], we obtain  $$\begin{aligned} \label{eq:f_Z_usk} \begin{split} &\mathbb{P}(\mathbf{Z} = \mathbf{z}_{k} | \mathrm{X}_k = e_i, \hat{\mathbf{H}} = \hat{\mathbf{h}}_{k}, \mathcal{F}_{k-1})= \prod_{s = 1}^S \prod_{u = 1}^{N_s^{\mathbf{u}_{k-1}}} \!\!\! f_{\mathrm{Z}_{s}}(\mathrm{z}_{u,s,k} | e_i, \hat{\mathrm{h}}_{u,s,k}), \end{split}\end{aligned}$$ where $f_{\mathrm{Z}_{s}}(\mathrm{z}_{u,s,k} | e_i, \hat{\mathrm{h}}_{u,s,k})$ is the Gaussian pdf defined in . **Second Term .** The channel itself can be used to track the underlying state of the system. We discuss how to approximate the term $\mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}}_{k} | \mathrm{X}_k = e_i, \mathcal{F}_{k-1})$ in Subsection \[subsec:channel\_appr\].
**Third Term .** For the last factor in , we exploit the previous belief of the system and the total probability theorem:  $$\begin{aligned} \label{eq:P_xk_GIV_Fkm1} &\mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_{k-1}) = \sum_{j = 1}^n \mathbf{T}[j,i] \mathbb{P}(\mathrm{X}_{k-1} = e_j| \mathcal{F}_{k-1}),\end{aligned}$$ where the term $\mathbb{P}(\mathrm{X}_{k-1} = e_j| \mathcal{F}_{k-1})$ is a *prior* and represents the belief in state $k-1$, whereas $\mathbf{T}[j,i]$ is the entry in position $(j,i)$ of the transition probability matrix $\mathbf{T}$ defined in Section \[sec:system\_model\]. Combining -, $\mathbf{p}_{k|k}$ can be recursively computed starting from an initial belief of the system. Channel Approximation {#subsec:channel_appr} --------------------- Since the channel may exhibit temporal correlation within the same slot, modeling it as a whole random variable $\hat{\mathbf{H}}$ would be a hard task and would also incur high numerical complexity. Since our goal is to estimate the underlying state of the system, we are interested in deriving a simpler representation of the channel. In particular, we introduce an approximation which separates the realizations of the channel from its features, used to account for the temporal correlation. In general, given the sequence $\hat{h}_{1,s,k},\ldots,\hat{h}_{N_s^{\mathbf{u}_{k-1}},s,k}$, many different features can be selected according to the particular scenario and depending upon the considered sensor (some of the most common ones are [@Archasantisuk2015]: the Integrated Signal Level, the Mean Value, the Modified Mean Values, the Signal Square Integral, the Variance, the Root Mean Square, the Level Change, the Level Crossing, the Slope Change, the Willison Amplitude, the Histogram, the Range or the Slope of the Critical Point). We now discuss how to include the features in our model.
Define the exact probability of observing $\hat{\mathbf{H}}$ as  $$\begin{aligned} \mathbb{P}_{\mathrm{h}} &\triangleq \mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}}_{k} | \mathrm{X}_k = e_i, \mathcal{F}_{k-1}).\end{aligned}$$ We approximate the probability $\mathbb{P}_{\mathrm{h}}$ as[^2]  $$\begin{aligned} \label{eq:P_h_approx} \mathbb{P}_{\mathrm{h}} \approx \underbrace{\mathbb{P}(\hat{\mathbf{H}}^{\rm (i.i.d.)} = \hat{\mathbf{h}}_{k} | \mathrm{X}_k = e_i, \mathcal{F}_{k-1})}_{\rm \scriptstyle channel\ realizations} \underbrace{f_{\mathbf{C}}(\mathbf{c}_k | e_i,\mathbf{u}_{k-1})}_{\mathclap{\rm \scriptstyle temporal\ correlation}},\end{aligned}$$ where the first term accounts for the channel realizations in a simple way, whereas the other term considers the channel temporal correlation (i.e., the features) and $\mathbf{u}_{k-1}$ is uniquely determined from $\mathcal{F}_{k-1}$. With , we can easily model the channel gain by studying two different quantities. The random variable $\hat{\mathbf{H}}^{\rm (i.i.d.)}$ is the approximation of $\hat{\mathbf{H}}$ but, differently from the original version, it is assumed independent and identically distributed (i.i.d.) in every time slot and for every sensor. Therefore, its pdf can be simply expressed as (also in this case $\hat{\mathbf{H}}^{\rm (i.i.d.)}$ depends upon $\mathcal{F}_{k-1}$ only through $\mathbf{u}_{k-1}$)  $$\begin{aligned} \mathbb{P}(\hat{\mathbf{H}}^{\rm (i.i.d.)} = \hat{\mathbf{h}}_{k} | \mathrm{X}_k = e_i, \mathcal{F}_{k-1}) = \prod_{s = 1}^S \prod_{u = 1}^{N_s^{\mathbf{u}_{k-1}}} f_{{\hat{\mathrm{H}}}_{s}^{\rm (i.i.d.)}}({\hat{\mathrm{h}}}_{u,s,k}|e_i).\end{aligned}$$ Using Equation , the pdf of ${\hat{\mathrm{H}}}_{s}^{\rm (i.i.d.)}$ can be derived from $\mathrm{H}_{s}^{\rm (i.i.d.)}$ and $\mathrm{W}_{\rm ch}$. It has been shown that a Gamma or Weibull distribution is a good fit of $\mathrm{H}_{s}^{\rm (i.i.d.)}$ to experimental data collected for WBAN channels [@Smith2011]. 
The second term of Equation  represents the pdf of the features of the channel. The quantity $\mathbf{c}_k$ is the vector of features computed from $\hat{\mathbf{h}}_k$ in slot $k$. Since analyzing all the features is computationally demanding, we only consider a subset of $m$ features. We identify the $\iota$-th feature of sensor $s$ in slot $k$ as $\mathrm{c}_{\iota,s,k}$, with $\iota = 1,\ldots,m$. The features are described by a joint random variable $\mathbf{C} = [C_{1,1},\ldots,C_{m,S}]$ ($C_{\iota,s}$ is the random variable associated with the $\iota$-th feature of sensor $s$), whose pdf $f_{\mathbf{C}}(\mathbf{c}_{k} | e_i,\mathbf{u}_{k-1})$, with $\mathbf{c}_{k} \triangleq \{\mathrm{c}_{\iota,s,k}, \forall \iota = 1,\ldots,m, \forall s = 1,\ldots,S\}$, can be evaluated empirically and is summarized by the “features generator” block in  \[fig:mod\]. We consider independently distributed features, so that  $$\begin{aligned} f_{\mathbf{C}}(\mathbf{c}_k | e_i,\mathbf{u}_{k-1}) = \prod_{s = 1}^S \prod_{\iota = 1}^{m} f_{\mathrm{C}_{\iota,s}}(\mathrm{c}_{\iota,s,k} | e_i, N_s^{\mathbf{u}_{k-1}}).\end{aligned}$$ We expect that the higher the number of measurements per sensor $N_s^{\mathbf{u}_{k-1}}$, the smaller the variance of $\mathrm{C}_{\iota,s}$.[^3] In summary, we rewrite  as  $$\begin{aligned} \label{eq:P_h_approx2} \begin{split} \mathbb{P}_{\mathrm{h}} = \prod_{s = 1}^S \bigg( \prod_{u = 1}^{N_s^{\mathbf{u}_{k-1}}} &f_{{\hat{\mathrm{H}}}_{s}^{\rm (i.i.d.)}}({\hat{\mathrm{h}}}_{u,s,k}|e_i) \\ \prod_{\iota = 1}^m \hspace{2mm} &f_{\mathrm{C}_{\iota,s}}(\mathrm{c}_{\iota,s,k} | e_i, N_s^{\mathbf{u}_{k-1}}) \bigg). \end{split}\end{aligned}$$ Optimization {#sec:optimization} ============ The goal of the system is to simultaneously achieve high detection accuracy and low energy expenditure. These two conflicting objectives can be handled as a multi-objective weighted minimization problem.
We define the instantaneous reward function as  $$\begin{aligned} \label{eq:r_instant} r(\mathbf{p}_{k|k},\mathbf{u}_k) \triangleq (1-\lambda) \Delta(\mathbf{p}_{k|k}) + \lambda c(\mathbf{u}_k),\end{aligned}$$ where $\Delta(\mathbf{p}_{k|k})$ represents the average estimation error, $c(\mathbf{u}_k)$ is an energy cost function increasing with $\mathbf{u}_k$ and $\lambda \in [0,1]$ is the weight. We express $\Delta(\mathbf{p}_{k|k})$ as  $$\begin{aligned} \Delta(\mathbf{p}_{k|k}) &\triangleq \sum_{i = 1}^n \mathbb{E}\left[(x_{i,k} - \mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k))^2 | \mathcal{F}_k \right]\\ & = 1 - \sum_{i = 1}^n \mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k)^2,\end{aligned}$$ where the second equality follows because, given $\mathcal{F}_k$, the indicator $x_{i,k}$ of the event $\{\mathrm{X}_k = e_i\}$ is a Bernoulli random variable with mean $\mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k)$, so that each term of the sum equals $\mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k)(1 - \mathbb{P}(\mathrm{X}_k = e_i | \mathcal{F}_k))$. While  represents the instantaneous reward in a single slot, we are interested in the long-term optimization, thus the long-run reward function becomes[^4]  $$\begin{aligned} R_\mu \triangleq \mathbb{E}\left[ \lim_{K \to \infty} \frac{1}{K}\sum_{k = 1}^K r(\mathbf{p}_{k|k},\mathbf{u}_k) \right].\end{aligned}$$ The expectation is taken with respect to the measurements and to the channels. The policy $\mu = [\mathbf{u}_1,\mathbf{u}_2,\ldots]$ defines the number of samples gathered in every time slot for each sensor. The optimization problem is  $$\begin{aligned} \label{eq:mu_star} \mu^\star = \argmin{\mu} \{ R_\mu \}.\end{aligned}$$ Markov Decision Process Formulation {#subsec:MDP} ----------------------------------- The problem can be viewed as a Partially-Observable Markov Decision Process (POMDP) [@Zois2013] and converted to an equivalent MDP [@Puterman1995] for solution. The Markov Chain (MC) state of the converted problem is the belief $\mathbf{p}_{k|k}$ (it can be shown that this represents a sufficient statistic for control purposes [@Kaelbling1998]) and a policy $\mu$ specifies the number of samples to gather and transmit for every possible combination of $\mathbf{p}_{k|k}$.
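The instantaneous reward defined above is straightforward to evaluate numerically. The sketch below computes $\Delta(\mathbf{p}_{k|k}) = 1 - \sum_i p_i^2$ and combines it with a hypothetical linear energy cost (both the weight `lam` and the per-sample cost are placeholder values, not from the paper):

```python
import numpy as np

lam = 0.3                    # illustrative trade-off weight lambda in [0, 1]

def delta(p):
    """Average estimation error Delta(p) = 1 - sum_i p_i^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def energy_cost(u, per_sample=0.05):
    """Hypothetical energy cost c(u), increasing with the number of samples."""
    return per_sample * np.sum(u)

def reward(p, u):
    """Instantaneous reward r(p, u) = (1 - lambda)*Delta(p) + lambda*c(u)."""
    return (1.0 - lam) * delta(p) + lam * energy_cost(u)

# A degenerate (certain) belief has zero estimation error;
# the uniform belief over n states attains the maximum error 1 - 1/n.
print(delta([1.0, 0.0, 0.0]))            # 0.0
print(round(delta([1/3, 1/3, 1/3]), 4))  # 0.6667
```

The two printed cases illustrate why $\Delta$ acts as an error measure: it vanishes only when the FC is certain of the state and grows as the belief spreads out.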
Since we focus on long-term policies, we also drop all dependencies on the slot index $k$ and enumerate the states of the Markov chain space with an index $\psi = 1,2,\ldots$ (i.e., the belief space is now composed of $\mathbf{p}_1,\mathbf{p}_2,\ldots$). Also, it can be shown that $\mu^\star$ is a deterministic policy (see [@Puterman1995 Theorems 6.2.9 and 6.2.10] for the discounted horizon case), thus we restrict our study to the class of deterministic strategies. Common algorithms to solve average long-term MDPs are the Value Iteration Algorithm (VIA) and the Policy Iteration Algorithm (PIA) [@Bertsekas2005 Vol. II, Sec. 4]. The basic step of both these approaches is the *policy improvement step*, in which the following cost-to-go function is updated  $$\begin{aligned} &J(\mathbf{p}_\psi) \leftarrow \min_{\mathbf{u}} \{ K(\mathbf{p}_\psi,\mathbf{u}) \}, \label{eq:J_k}\\ &K(\mathbf{p}_\psi,\mathbf{u}) \triangleq \mathbb{E}_{\mathbf{Z},\hat{\mathbf{H}}}[ \underbrace{r(\mathbf{p}_\psi,\mathbf{u}) + J(\mathbf{p}_{\psi'})}_{\textstyle \triangleq (\bullet)} | \mathbf{p}_\psi,\mathbf{u}],\label{eq:K_k}\end{aligned}$$ where $r(\mathbf{p}_\psi,\mathbf{u})$ represents the instantaneous reward defined in , whereas $J(\mathbf{p}_{\psi'})$ accounts for the future rewards. The index of the new state, given $\mathbf{Z}$, $\hat{\mathbf{H}}$, $\mathbf{p}_\psi$ and $\mathbf{u}$, is $\psi'$.
The corresponding belief $\mathbf{p}_{\psi'}$ is derived as  \[eq:p\_psi\_prime\] $$\begin{aligned} &\mathbf{p}_{\psi'}(i) = \frac{\mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_i, \mathbf{u}) \mathbf{p}_\psi(i)}{\mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathbf{p}_\psi, \mathbf{u})},\\ &\mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathbf{p}_\psi, \mathbf{u}) = \sum_{j = 1}^n \mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_j, \mathbf{u}) \mathbf{p}_\psi(j),\end{aligned}$$ where $\mathbf{p}_\psi(i)$ is the $i$-th entry of $\mathbf{p}_\psi$ and $\mathrm{X}$ is the state of the underlying system without time index. Note that $\sum_{i = 1}^n \mathbf{p}_{\psi'}(i) = 1$. Equation  is the long-term equivalent of the belief update and can be evaluated as described in Section \[sec:tracking\]. The expectation in Equation  is taken with respect to the measurements $\mathbf{Z}$ and the channel conditions $\hat{\mathbf{H}}$ and can be rewritten by definition as  $$\begin{aligned} \label{eq:expectation} &\mathbb{E}_{\mathbf{Z},\hat{\mathbf{H}}}[(\bullet) | \mathbf{p}_\psi,\mathbf{u}] = \int \int (\bullet) \mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathbf{p}_\psi, \mathbf{u}) \ \mbox{d}\mathbf{z} \ \mbox{d}\hat{\mathbf{h}}.\end{aligned}$$ Note that with our formulation, the instantaneous reward $(1-\lambda) \Delta(\mathbf{p}_\psi) + \lambda c(\mathbf{u})$ in Equation  does not depend upon $\mathbf{Z}$ or $\hat{\mathbf{H}}$, and thus can be moved outside the expectation term:  $$\begin{aligned} \begin{split} \label{eq:K_k_revised} &K(\mathbf{p}_\psi,\! \mathbf{u}) = (1-\lambda) \Delta(\mathbf{p}_\psi) \! +\! \lambda c(\mathbf{u})\! +\! \mathbb{E}[J(\mathbf{p}_{\psi'}) | \mathcal{F}_k].
\end{split}\end{aligned}$$ In summary, from -, we can rewrite the policy improvement step as  \[eq:problem\] $$\begin{aligned} \min_{\mathbf{u}} \{\lambda c(\mathbf{u})\! +\! \mathbb{E}[J(\mathbf{p}_{\psi'}) | \mathcal{F}_k]\}\end{aligned}$$ $$\begin{aligned} {2} \shortintertext{s.t.:} & \eqref{eq:p_psi_prime}, \qquad \eqref{eq:expectation} , \qquad \mathbf{u} = [N_1^{\mathbf{u}},\ldots, N_S^{\mathbf{u}}]^{\rm T}, \\ & N^{\mathbf{u}} \triangleq \sum_{s = 1}^S N_s^{\mathbf{u}} \leq N_{\rm tot}, \qquad N_s^{\mathbf{u}} \geq 0, \qquad \forall s = 1,\ldots,S.\end{aligned}$$ The constraint $N^{\mathbf{u}} \leq N_{\rm tot}$ was introduced in Section \[sec:system\_model\]. The constraint $N_s^{\mathbf{u}} \geq 0$ ensures that we have a non-negative number of measurements. Note that, because of the term $c(\mathbf{u})$ (which is a non-decreasing function of $N^{\mathbf{u}}$) in the objective function, the constraint $N^{\mathbf{u}} \leq N_{\rm tot}$ is not satisfied with equality, in general (a particular case in which the equality holds is for $\lambda = 0$). Analysis {#sec:analysis} ======== Determining the *optimal* solution described in the previous section requires a challenging numerical evaluation. In particular, there are two main issues involved: - I1) the policy improvement step must be solved for every belief [@Zhou2001]; - I2) minimizing Problem  is computationally demanding. In particular, the optimization involves a combinatorial problem with multi-dimensional integrals. In this section, we propose solutions to both of these problems. In particular, I1) is handled in Subsection \[subsec:sol\_I1\], in which we introduce bounds which can be efficiently computed on a small subset of the belief space, whereas we deal with I2) in Subsection \[subsec:sol\_I2\] by reducing the complexity of the multi-dimensional integrals and introducing a sub-optimal scheme. Concavity Properties -------------------- We first introduce some preliminary results on the concavity properties of the reward function.
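Before stating the formal results, a quick numerical sanity check (illustrative only, not part of the formal development): the uncertainty term in the quadratic form used in  \[proof:lower\_bound\], $\Delta(\mathbf{p}_\psi) = 1 - \sum_{i=1}^n \mathbf{p}_\psi(i)^2$, is concave in the belief:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(p):
    # Quadratic uncertainty measure: Delta(p) = 1 - sum_i p_i^2.
    # It is 0 for a degenerate belief and maximal for the uniform one.
    return 1.0 - float(np.sum(p ** 2))

def random_belief(n):
    # Draw a random point on the probability simplex (not uniform, but valid).
    p = rng.random(n)
    return p / p.sum()

# Concavity check: Delta(a*p + (1-a)*q) >= a*Delta(p) + (1-a)*Delta(q).
for _ in range(1000):
    p, q = random_belief(4), random_belief(4)
    a = float(rng.random())
    lhs = delta(a * p + (1 - a) * q)
    rhs = a * delta(p) + (1 - a) * delta(q)
    assert lhs >= rhs - 1e-12
```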
\[thm:K\_conc\] Consider a set of $n$ beliefs $\mathbf{b} = [\mathbf{b}^{(1)},\ldots,\mathbf{b}^{(n)}]$ such that a generic belief $\mathbf{p}_\psi$ can be written as  $$\begin{aligned} \label{eq:p_k_k_v_i_b_k_k} &\mathbf{p}_\psi = \sum_{i = 1}^n \alpha_i \mathbf{b}^{(i)},\\ &\sum_{i = 1}^n \alpha_i = 1, \quad \alpha_i \geq 0,\quad \forall i = 1,\ldots,n \end{aligned}$$ where $\alpha_i$ is a constant. Then, function $K(\cdot)$ is lower bounded by  $$\begin{aligned} \label{eq:K_concave} K(\mathbf{p}_\psi,\mathbf{u}) \geq r(\mathbf{p}_\psi,\! \mathbf{u}) \! +\! \sum_{i = 1}^n \alpha_i (K(\mathbf{b}^{(i)},\mathbf{u}) \! -\! r(\mathbf{b}^{(i)},\! \mathbf{u})). \end{aligned}$$ See Appendix \[proof:lower\_bound\]. Several different techniques for defining the vector $\mathbf{b}$ can be found in the literature [@Shani2013] (e.g., in our numerical evaluation we will apply Lovejoy’s grid method [@Lovejoy1991]). A consequence of Theorem \[thm:K\_conc\] is derived in the following corollary. \[corol:J\_concave\] The function  $$\begin{aligned} J(\mathbf{p}_\psi) - (1-\lambda)\Delta(\mathbf{p}_\psi) \end{aligned}$$ is concave in every entry of $\mathbf{p}_\psi$. See Appendix \[proof:J\_concave\]. Note that the cost term $c(\mathbf{u})$ is not included in the previous corollary since it does not depend upon $\mathbf{p}_\psi$. Issue I1) {#subsec:sol_I1} --------- Using the results of Corollary \[corol:J\_concave\], we introduce a lower and an upper bound to the reward $R_{\mu^\star}$. In Section \[sec:numerical\_results\] we will show that these bounds are tight and can be used as an approximation to $R_{\mu^\star}$. ### Lower Bound {#subsubsec:lower_bound} From Corollary \[corol:J\_concave\], we have ($\alpha_i$ and $\mathbf{b}^{(i)}$ are defined in Theorem \[thm:K\_conc\]):  $$\begin{aligned} \label{eq:J_concave} J(\mathbf{p}_\psi) \geq (1-\lambda)\Delta(\mathbf{p}_\psi) \!+\! \sum_{i = 1}^n \alpha_i (J(\mathbf{b}^{(i)}) \!-\! (1 \!-\!
\lambda)\Delta(\mathbf{b}^{(i)})).\end{aligned}$$ In general, two different beliefs $\mathbf{p}_{\psi^\prime}$ and $\mathbf{p}_{\psi^{\prime\prime}}$ can be written as a combination of different vectors $\mathbf{b}^\prime$ and $\mathbf{b}^{\prime\prime}$, respectively. We denote by $\mathcal{B}$ the set which contains all the vectors $\mathbf{b}^\prime$, $\mathbf{b}^{\prime\prime}$, etc., so that *every* belief $\mathbf{p}_\psi$ can be written as a linear combination of the elements of $\mathcal{B}$.[^5] A lower bound to the optimal performance can be obtained as follows. The optimal $J(\mathbf{p}_\psi)$ is computed only for the elements of $\mathcal{B}$, whereas, in all other states, $J(\mathbf{p}_\psi)$ is approximated with the right-hand side of . Using this approximation at every step of the value iteration algorithm [@Bertsekas2005], we obtain a lower bound to $R_{\mu^\star}$, denoted by $\tilde{R}_{\mu^\star}$. Note that, while the original $J(\mathbf{p}_\psi)$ should be computed in an infinite state space, $\mathcal{B}$ is a finite set and its size can be defined according to the desired precision. ### Upper Bound {#subsubsec:upper_bound} Using , it is also possible to derive an upper bound to $R_{\mu^\star}$. In particular, since $J(\mathbf{p}_\psi) - (1-\lambda)\Delta(\mathbf{p}_\psi)$ is concave, it is upper bounded by a piece-wise linear function composed of its tangent curves (see Fig. \[fig:up\_low\] for a graphical interpretation) [@Krishnamurthy2002]. While, in the general case, the tangents can be computed at arbitrary points of the belief space, it becomes numerically easier to derive them for the values in $\mathcal{B}$, so that lower and upper bounds can be simultaneously evaluated. Bounds-Based Policy {#subsec:BBP} ------------------- So far, we have described only techniques for computing bounds to the long-term reward $R_{\mu^\star}$ and in particular we introduced $\tilde{R}_{\mu^\star}$.
However, while computing $R_{\mu^\star}$ requires us to evaluate the optimal policy $\mu^\star$ over the whole belief space, when we compute $\tilde{R}_{\mu^\star}$, $\mu^\star$ is specified only in the subset of beliefs $\mathcal{B}$. We formally denote this difference by writing $R_{\mu^\star}$ and $\tilde{R}_{\mu_{\mathcal{B}}^\star}$. In general, given $\mu_{\mathcal{B}}^\star$, we do not know the optimal policy in all the remaining states. Therefore, since we need to explicitly define a policy for every belief, we extend $\mu_{\mathcal{B}}^\star$ with a probabilistic approach and derive the Bounds-Based Policy (BBP) as follows (we denote by $\mathbf{u}_\psi^\star$ the solution of  when the belief is $\mathbf{p}_\psi$):  $$\begin{aligned} \label{eq:prob_policy} \mathbb{P}(\mathbf{U} = \mathbf{u} | \mathbf{p}_\psi) = \begin{cases} \delta_{\mathbf{u},\mathbf{u}_\psi^\star}, \quad &\mbox{if } \mathbf{p}_\psi \in \mathcal{B}, \\ \sum_{i = 1}^n \alpha_i\delta_{\mathbf{u},\mathbf{u}_{\mathbf{b}^{(i)}}^\star}, \quad &\mbox{otherwise}, \end{cases} \quad \mbox{(BBP)}\end{aligned}$$ where $\alpha_i$ and $\mathbf{b}^{(i)}$ are defined as in Theorem \[thm:K\_conc\], $\mathbf{U}$ is the policy random variable, $\delta_{\cdot,\cdot}$ is the Kronecker delta function, and $\mathbf{u}_{\mathbf{b}^{(i)}}^\star$ is the solution of  when the belief is $\mathbf{b}^{(i)} \in \mathcal{B}$. In the first case, if $\mathbf{p}_\psi \in \mathcal{B}$, with probability equal to $1$, $\mathbf{U}$ assumes the value $\mathbf{u}_\psi^\star$, which is the solution of  and is enclosed in $\mu_{\mathcal{B}}^\star$. Otherwise, the action is drawn as a convex combination of the policies computed at the beliefs in $\mathcal{B}$, and each action probability is in general strictly less than $1$. We will further discuss BBP in the numerical evaluation. Note that multiple choices of $\alpha_i$ may exist. In our case, we define them using Lovejoy’s strategy [@Lovejoy1991]. Issue I2) {#subsec:sol_I2} --------- Issue I2) is composed of two steps which we analyze sequentially in Subsections \[subsubsec:multi\_dim\_integrals\] and \[subsubsec:iterative\_opt\].
### Multi-Dimensional Integrals {#subsubsec:multi_dim_integrals} In order to simplify I2), we need to separate the multi-dimensional integrals. This can be done optimally as follows. First, focus on the measurements in a single slot and enumerate every measurement from $1$ to $N^{\mathbf{u}} = \sum_{s = 1}^S N_s^{\mathbf{u}}$ with an index $\nu$ (i.e., $\mathrm{z}^{(1)},\ldots,\mathrm{z}^{(N^{\mathbf{u}})}$) such that there is a one-to-one mapping with each pair $(u,s)$. The term $\mathrm{z}^{(\nu)}$ coincides with $\mathrm{z}_{u,s}$ (in the Markov formulation we drop the temporal index $k$). The pdf of $\mathrm{z}^{(\nu)}$ given the underlying state of the system $e_i$ and the underlying channel estimate $\hat{h}^{(\nu)}$ is $f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu)} | e_i, \hat{h}^{(\nu)})$ (defined according to Section \[subsec:channel\]). \[thm:belief\_simplified\] Given $\mathbf{p}_\psi$, the new belief $\mathbf{p}_{\psi'}$ can be equivalently computed as  $$\begin{aligned} \label{eq:p_k_k_recur} &\mathbf{p}_{\psi'}(i) = \frac{\mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_i, \mathbf{u}) \mathbf{p}_{\psi'}(i|N^{\mathbf{u}})}{\sum_{j = 1}^n \mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_j, \mathbf{u}) \mathbf{p}_{\psi'}(j|N^{\mathbf{u}})}, \end{aligned}$$ where $\mathbf{p}_{\psi'}(i | \nu)$ is recursively defined as[^6]  $$\begin{aligned} \label{eq:x_j_F_kv} \begin{split} &\mathbf{p}_{\psi'}(i | \nu) \triangleq \begin{cases} \!\!\frac{\textstyle f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu)} | e_i, \hat{h}^{(\nu)}) \mathbf{p}_{\psi'}(i|\nu-1)}{\textstyle\sum_{j = 1}^n f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu)} | e_j, \hat{h}^{(\nu)}) \mathbf{p}_{\psi'}(j|\nu-1)}, \!\!\!\! & \mbox{if } \nu \geq 1, \\ \mathbf{p}_{\psi}(i), \!\!\!\! & \mbox{if } \nu = 0. \end{cases} \end{split} \end{aligned}$$ See Appendix \[proof:belief\_simplified\].
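The recursion in the theorem can be sketched as follows. This is a minimal illustration using made-up Gaussian likelihoods in place of $f_{\mathrm{Z}_s}(\cdot \mid e_i, \hat{h}^{(\nu)})$ (the per-state means and noise level are hypothetical); it checks that folding in the measurements one at a time yields the same belief as the joint update:

```python
import numpy as np

def gaussian_pdf(z, mean, std):
    return np.exp(-0.5 * ((z - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def sequential_belief_update(p, measurements, means, std=0.3):
    """Fold the measurements in one at a time (the theorem's recursion).

    p            : prior belief p_psi over the n states
    measurements : received samples z^(1), ..., z^(N)
    means        : hypothetical per-state measurement means, shape (n,)
    Each step normalizes, so every partial belief is a valid distribution.
    """
    for z in measurements:                 # nu = 1, ..., N^u
        lik = gaussian_pdf(z, means, std)  # f_Z(z | e_i, h) per state
        p = lik * p / np.sum(lik * p)      # one uni-dimensional update
    return p

def batch_belief_update(p, measurements, means, std=0.3):
    """Equivalent joint update using the product of all likelihoods."""
    lik = np.ones_like(p)
    for z in measurements:
        lik = lik * gaussian_pdf(z, means, std)
    return lik * p / np.sum(lik * p)

p0 = np.array([0.25, 0.25, 0.25, 0.25])    # uniform prior over 4 states
means = np.array([0.0, 1.0, 2.0, 3.0])     # assumed state signatures
zs = [0.9, 1.1, 1.0]
# The recursive and the joint update coincide, as the theorem asserts.
assert np.allclose(sequential_belief_update(p0, zs, means),
                   batch_belief_update(p0, zs, means))
```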
With the previous theorem, instead of considering all the measurements together, we iteratively compute a partial belief for every new measurement $\nu$ exploiting the old partial belief at stage $\nu-1$ as in Equation . This allows us to decompose the $N^{\mathbf{u}}$-dimensional integral of the measurements in Equation  into $N^{\mathbf{u}}$ separate one-dimensional integrals *without* performance loss. So far, Theorem \[thm:belief\_simplified\] has been presented as a method to simplify the integral of the measurements. However, when we approximate $\mathbb{P}_{\mathrm{h}}$ as in Equation , the same technique can be extended to the channel to simplify the corresponding integral. This is possible because the temporal correlation of the channel has been entirely contained in a separate random variable $\mathbf{C}$. We give the extension of Theorem \[thm:belief\_simplified\] in the next corollary. \[corol:belief\_simplified\] Given $\mathbf{p}_\psi$, the new belief $\mathbf{p}_{\psi'}$ can be equivalently computed as  $$\begin{aligned} &\mathbf{p}_{\psi'}(i) = \frac{\mathbf{a}_{\psi'}(i|S)}{\sum_{j = 1}^n \mathbf{a}_{\psi'}(j|S)}, \end{aligned}$$ where $\mathbf{a}_{\psi'}(i | s)$ is recursively defined as  $$\begin{aligned} \begin{split} &\mathbf{a}_{\psi'}(i | s) \triangleq \begin{cases} \!\!\frac{\textstyle \prod_{\iota = 1}^{m} f_{\mathrm{C}_{\iota,s}}(\mathrm{c}_{\iota,s} | e_i, N_s^{\mathbf{u}}) \mathbf{a}_{\psi'}(i|s-1)}{\textstyle\sum_{j = 1}^n \prod_{\iota = 1}^{m} f_{\mathrm{C}_{\iota,s}}(\mathrm{c}_{\iota,s} | e_j, N_s^{\mathbf{u}}) \mathbf{a}_{\psi'}(j | s-1)}, \!\!\!\! & \mbox{if } s \geq 1, \\ \mathbf{p}_{\psi'}(i | N^{\mathbf{u}}), \!\!\!\!
& \mbox{if } s = 0 \end{cases} \end{split} \end{aligned}$$ and $\mathbf{p}_{\psi'}(i | \nu)$ is given in  where we replace the terms $f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu)} | e_i, \hat{h}^{(\nu)})$ with  $$\begin{aligned} f_{\mathrm{Z}_s,\hat{\mathrm{H}}_s}(\mathrm{z}^{(\nu)},\hat{h}^{(\nu)} | e_i) = f_{\hat{\mathrm{H}}_s}(\hat{\mathrm{h}}^{(\nu)} | e_i) f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu)} | e_i, \hat{h}^{(\nu)}). \end{aligned}$$ With Corollary \[corol:belief\_simplified\], all the integrals in  can be reduced to simpler one-dimensional integrals, so that I2) is partially solved. ### Iterative Optimization {#subsubsec:iterative_opt} Another issue of I2) is related to the combinatorial nature of the problem. In particular, in , $\frac{\prod_{s=1}^{S-1} (N^{\mathbf{u}}+s)}{(S-1)!}$ possible combinations are available for a fixed $N^{\mathbf{u}}$. In order to reduce this number, we define a sub-optimal greedy approach which may lead to local minima. The idea consists of dividing the $N^{\mathbf{u}}$ samples into smaller subsets and optimally allocating these subsets. In particular, define an ordered vector of $L$ integers $[N^{(1)},\ldots,N^{(L)}]$ such that $\sum_{\ell = 1}^L N^{(\ell)} = N^{\mathbf{u}}$. Then, starting with $N^{(1)}$, compute the optimal policy $\mathbf{u}^{(1)}$ (e.g., $\mathbf{u}^{(1)} = [1, 0, 0]^{\rm T}$ with $S = 3$ and $N^{(1)} = 1$) and store it. Then, consider $N^{(1)}+N^{(2)}$ and find $\mathbf{u}^{(2)}$ by solving the problem using the partial policy $\mathbf{u}^{(1)}$ and optimizing only the choice of the remaining $N^{(2)}$ sensors to use (e.g., considering the previous example, the possible choices are $\mathbf{u}^{(2)} = [2, 0, 0]^{\rm T}, [1, 1, 0]^{\rm T}, [1, 0, 1]^{\rm T}$ with $N^{(2)} = 1$). Repeat the procedure for every $\ell = 1,\ldots,L$. At the last step, the policy is fully specified by $\mathbf{u} = \mathbf{u}^{(L)}$.
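The greedy procedure above can be sketched as follows. The objective function below is a made-up stand-in for actually solving the policy improvement problem with the partial policy fixed (lower is better); the cost weights echo the $\boldsymbol{\delta}$ values used in the numerical evaluation, but the accuracy term is purely illustrative:

```python
from itertools import combinations_with_replacement

import numpy as np

def greedy_allocation(evaluate, S, block_sizes):
    """Greedy block allocation: at step l, optimally place the N^(l) new
    samples while keeping the previously chosen partial policy fixed.

    evaluate    : u -> scalar objective to minimize (hypothetical stand-in)
    S           : number of sensors
    block_sizes : [N^(1), ..., N^(L)], summing to the total budget N^u
    """
    u = np.zeros(S, dtype=int)
    for n_block in block_sizes:
        best_u, best_val = None, np.inf
        # Enumerate all ways to add n_block samples on top of u.
        for extra in combinations_with_replacement(range(S), n_block):
            trial = u.copy()
            for s in extra:
                trial[s] += 1
            val = evaluate(trial)
            if val < best_val:
                best_u, best_val = trial, val
        u = best_u
    return u

# Toy objective: diminishing accuracy returns minus weighted energy cost.
costs = np.array([0.58, 0.776, 1.0])
def toy_objective(u):
    return -np.sum(np.sqrt(u)) + 0.3 * costs @ u

u = greedy_allocation(toy_objective, S=3, block_sizes=[1, 1, 1, 1])
```

With unit blocks ($N^{(1)} = \ldots = N^{(L)} = 1$) this reduces to adding one sample at a time, which is the variant used in the numerical evaluation below.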
Numerical Results {#sec:numerical_results} ================= We consider a WBAN based on the KNOW-ME system described in [@Thatte2011] composed of two accelerometers (ACC1 and ACC2) and an electrocardiography sensor (ECG), which track the current activity of a subject (sitting, standing, running, walking). Different costs are associated with the data reception from different sensors: ACC1 is located inside the fusion center (which, e.g., can be a mobile phone in a pocket), thus the reception of a sample from ACC1 is energy efficient, whereas ACC2 and ECG are on-body sensors and the data reception from these requires more energy. ACC2 provides good quality measurements, but also experiences bad channel conditions (e.g., it is a sensor on the wrist, subject to movements, and thus its channel changes significantly over time), so its measurements are likely to be highly corrupted by noise. In our example, the feature extracted from ACC2 can be interpreted as a measure of the temporal periodicity of the movement of the arm on which the sensor is placed. The electrocardiography sensor can be placed on the chest, thus it experiences fewer movements than ACC2 and is characterized by a better channel. We assume that no features are extracted from ECG. Finally, since ACC1 is located inside the fusion center, no communication channel is considered for this sensor.
**Parameters.** If not otherwise stated, we use the following parameters [@Thatte2011]: up to $N_{\rm tot} = 6$ measurements per slot, $n = 4$ states of the system (sitting, standing, running, walking), $S = 3$ sensors (ACC1, ACC2, ECG), a transition probability matrix  $$\begin{aligned} \mathbf{T} = \kbordermatrix{ & {\rm sit} & {\rm stand} & {\rm run} & {\rm walk} \\ & 0.6 & 0.1 & 0 & 0.3 \\ & 0.2 & 0.4 & 0.1 & 0.3 \\ & 0 & 0.1 & 0.3 & 0.6 \\ & 0.4 & 0 & 0.3 & 0.3 }\end{aligned}$$ and a cost function $c(\mathbf{u}_k) = \sum_{s \in \{{\rm ACC1}, {\rm ACC2}, {\rm ECG}\}} \delta_s N_s^{\mathbf{u}_k}$, with $\boldsymbol{\delta} = [0.58, 0.776, 1]$ and $|\mathcal{B}| = 35$. The channel is characterized by $\sigma_{\rm ch} = 0.05$, $\sigma_{\rm noise} = 0.05$ and the effects of different path losses are already enclosed in the pdf $f_{\mathrm{H}_{s}^{\rm (i.i.d.)}}(\mathrm{h}| e_i)$. A single feature is considered (i.e., $m = 1$) and is extracted from ACC2. The densities of the measurements, channels and features are represented in Figs. \[fig:z\_distro\_ACC1\]-\[fig:Pk\_distro\] (as in [@Thatte2011; @Smith2011] we used Gaussian pdfs for the measurements and Gamma pdfs for the channels). Fig. \[fig:z\_distro\_ACC1\] refers to ACC1 and thus is not influenced by any channel value (the generated measurements are always perfectly “received” by the FC). Note that all the pdfs are Gaussian-distributed with different means and variances. In this case, it is almost impossible to distinguish between sitting/standing or running/walking. Instead, for the sensors ACC2 and ECG, as can be seen in Fig. \[fig:z\_distro\_ACC2\_ECG\], the worse the channel, the more degraded the received signal and thus the harder it is to distinguish the four densities.[^7] Note that all the densities remain Gaussian-distributed even when the channel gets worse. Fig. \[fig:h\_distro\] shows the channel pdfs for the different states of the system.
For ACC2, in the “run” and “walk” states, the average channel gains are lower than for the others, so it is more likely to receive degraded measurements. Finally, the distributions of the feature of ACC2 are represented in Fig. \[fig:Pk\_distro\]. When more samples are extracted from the same sensor (i.e., higher $N_{{\rm ACC2}}^{\mathbf{u}_{k-1}}$), it becomes easier to distinguish the underlying state of the system. From the previous discussion, we can conclude that many different trade-offs exist, thus determining, a priori, which is the best sensor to use is not an easy task. **Bounds.** First, we discuss the bounds we derived in Section \[subsec:sol\_I1\] for a simple case. For numerical tractability, we set $n = 2$ (“sit” and “stand”), $\mathbf{T}[{\rm sit},{\rm stand}] = 6/9$, $\mathbf{T}[{\rm stand},{\rm sit}] = 1/2$, $\sigma_{\rm ch} = 0.001$, $N_{\rm max} = 3$ and all the other parameters as defined before. Since, theoretically, the belief space is continuous, it is not possible to numerically find the *true* optimal reward $R_{\mu^\star}$, and a discrete approximate approach with a sufficiently large number of quantization levels is required. Therefore, for computing $R_{\mu^\star}$, we used Lovejoy’s grid method [@Lovejoy1991] with $200$ levels (i.e., $201$ beliefs), whereas for the upper and lower bounds we reduced the levels to $5$ (i.e., $|\mathcal{B}| = 6$ in Section \[subsubsec:lower\_bound\]). Thus, all the previous $201$ beliefs can now be written as a linear combination of $6$ beliefs as defined in Theorem \[thm:K\_conc\]. We generated our results for different values of the weight $\lambda \in [0,1]$ and computed the corresponding long-term reward (see Fig. \[fig:upper\_lower\]). It can be noted that the bounds are extremely tight over the entire region and that the lower bound approximates the optimal strategy very well.
Because of this, in the following we will always approximate the optimal reward $R_{\mu^\star}$ with its lower bound computed using Lovejoy’s approximation with a lower number of beliefs, denoted by $\tilde{R}_{\mu^\star}$.[^8] **Sub-optimal strategies.** We now compare $\tilde{R}_{\mu^\star}$ with other strategies. Fig. \[fig:sub\_opt\] shows the boundaries of the Pareto regions for the different techniques (we artificially lengthen the beginning of each curve for graphical comparison). The axes are the energy cost $c(\mathbf{u}_k)$ (defined in ) and the probability of incorrect state prediction. The points with lower (higher) MSE (which is strictly related to the prediction error probability) are obtained for lower (higher) values of the weight $\lambda$. We consider the lower bound to the optimal policy $\tilde{R}_{\mu^\star}$, its greedy approximation, and the reward of three policies obtained by choosing the same sensor all the time. The curve corresponding to $\mu^\star$ always dominates the others, since the $\min$ operation in  is solved optimally. We also plot $\tilde{R}_{\rm greedy}$, in which we used the approximation described in Section \[subsubsec:iterative\_opt\] with $N^{(1)} = \ldots = N^{(L)} = 1$ for solving the $\min$ in . It is interesting to note that, even if the greedy policy uses a simpler approach to solve , it achieves almost optimal performance. Moreover, it is extremely easy to compute, thus it may be considered a good alternative to $\mu^\star$. We also remark that the performance of the greedy approach can be further improved by choosing larger $N^{(\ell)}$ (see Section \[subsubsec:iterative\_opt\]). Finally, we derived the three policies which use the same sensor all the time. In general, these are strongly sub-optimal, especially if ACC1 or ECG are considered. Since it is also possible to extract a feature from ACC2, choosing it provides better performance, even if it does not achieve $\tilde{R}_{\mu^\star}$.
**Measurements and channel.** In this paragraph, we describe the importance of jointly using the measurements and the channel state to obtain high performance (see Figs. \[fig:strategy\] and \[fig:bars\]). In addition to $\mu^\star$, we introduce the strategies $\mu_{\rm ms}^\star$ and $\mu_{\rm ch}^\star$, which indicate the optimal policies obtained by using the measurements only and the channels only, respectively. Thus, for example, when $\mu_{\rm ms}^\star$ is used, even if the channels are available, they are not used for improving the tracking performance. Fig. \[fig:strategy\] shows the boundaries of the Pareto regions for the three cases. The main difference among these can be noted when the prediction error probability is the dominating term (upper left-side of the figure). In this region, $\mu_{\rm ms}^\star$ and $\mu_{\rm ch}^\star$ cannot reach MSEs as low as $\mu^\star$, which combines the advantages of the other two schemes. Instead, if the energy cost is the dominating factor, the three approaches are similar, since it is sufficient to use the most energy-efficient sensors in this regime. An additional comparison of the three schemes is shown in Fig. \[fig:bars\], which represents the average usage time of every sensor for different values of $\lambda$. When $\lambda = 1$, the system chooses $N_{\rm ACC1}^{\mathbf{u}} = 1$, $N_{\rm ACC2}^{\mathbf{u}} = 0$ and $N_{\rm ECG}^{\mathbf{u}} = 0$ all the time, so as to consume only $\delta_{\rm ACC1}$ in every slot (recall that $\delta_{\rm ACC1}$ is the lowest among the three costs). Also, since the only concern is the energy cost, all policies coincide in this case. However, for $\lambda = 0$, the three schemes behave significantly differently. With $\mu^\star$, the first sensor ACC1 is never used since it does not have any communication channel to exploit as an additional source of information and the quality of its measurements is comparable with the others.
Similarly, $\mu_{\rm ch}^\star$ does not use ACC1 because there is no channel and its tracking system is based on the channel only. Finally, $\mu_{\rm ms}^\star$ uses all the sensors with different percentages in order to exploit and combine the measurements of every sensor. For the other values of $\lambda$, we obtain intermediate situations between the two. In particular, as $\lambda$ increases, the role of ACC1 becomes dominant, up to the point where ACC1 is used all the time. **Channel errors.** Changing the estimation error $\sigma_{\rm ch}$ significantly affects the tracking performance. When $\sigma_{\rm ch}$ increases, in addition to obtaining a worse estimate of the channel, the received measurements are also degraded, and thus they become less informative. This can be clearly seen in Fig. \[fig:z\_distro\_ACC2\_ECG\], as previously explained. Note that, when $\sigma_{\rm ch}$ increases significantly, using ACC1 becomes the only choice to infer information about the underlying system. Fig. \[fig:sigma\_h\] shows the boundary of the Pareto regions for different $\sigma_{\rm ch}$. When $\sigma_{\rm ch} = 0.001$, the estimation errors are almost negligible, thus this is a lower bound to the performance of the system. For higher $\sigma_{\rm ch}$, the MSE quickly increases, making the system unusable. **Number of samples.** When $N_{\rm max}$ is small, the system performance is limited and reaching very low prediction error probabilities is not possible. In Fig. \[fig:change\_N\], we represent the boundaries of the Pareto regions for different values of $N_{\rm max}$ (as before, we lengthen the beginning of each curve for graphical purposes). Every case can be approximately seen as a sub-case of $N_{\rm max} = 15$ obtained for different values of $\lambda$.
Indeed, the higher the value of $\lambda$, the lower the average number of used sensors per slot (we remark that even if $N_{\rm max}$ is fixed, a policy may use $N^{\mathbf{u}} \leq N_{\rm max}$ samples per slot, if necessary). It is interesting to note that the improvement obtained from $N_{\rm max}$ to $N_{\rm max}+1$ decreases with $N_{\rm max}$ (e.g., going from $N_{\rm max} = 1$ to $N_{\rm max} = 2$ provides a much larger improvement than going from $N_{\rm max} = 5$ to $N_{\rm max} = 6$). However, the energy costs increase linearly with the number of measurements, therefore a very high energy cost must be incurred to achieve low MSEs. **Probabilistic policy.** Finally, we discuss the bounds-based policy we introduced in Equation . For clarity, we remark that $\tilde{R}_{\mu_{\mathcal{B}}^\star} = \tilde{R}_{\mu^\star} \leq R_{\mu^\star} \leq R_{\rm BBP}$. The first equality is by definition, the first inequality comes from Subsection \[subsubsec:lower\_bound\] and the last inequality is by definition of the optimal policy. Nevertheless, BBP is computed using $\mu_{\mathcal{B}}^\star$, which is the only policy (partially) available when we compute $\tilde{R}_{\mu_{\mathcal{B}}^\star}$. However, unlike the lower and upper bounds, BBP is a policy which can be implemented in a real system. Fig. \[fig:simul\] shows the boundaries of the Pareto regions of the lower bound and of BBP. It can be seen that the two are quite close in a large region. However, if very strict constraints are imposed on the accuracy (abscissa values), then it may be necessary to compute the optimal scheme, which in turn may require a much higher computational cost. Conclusions {#sec:conclusions} =========== We set up an active sensing problem in which sensor measurements and communication channel characteristics are used jointly to improve the system performance.
Energy costs at the fusion center and estimation quality are handled together so as to characterize the Pareto regions of the system. We set up a POMDP framework and converted it into an equivalent MDP in order to solve it. Exploiting the structural properties of the model, we reduced the problem complexity and bounded the system performance. In particular, we decomposed the tracking update formula into a set of simpler tasks which can be easily handled and, moreover, we used the concavity properties of the cost-to-go function to introduce bounds to the long-term reward function and define a probabilistic sub-optimal policy. We built our numerical evaluation according to the specifications found in the literature and demonstrated the importance of considering the channel as an additional source of information in a WBAN scenario. Acknowledgments =============== This research has been funded in part by the following grants and organizations: AFOSR FA9550-12-1-0215, NSF CNS-1213128, NSF CCF-1410009, NSF CPS-1446901, ONR N00014-09-1-0700, ONR N00014-15-1-2550, and Fondazione Ing. Aldo Gini. Proof of Theorem \[thm:K\_conc\] {#proof:lower_bound} ================================ We first introduce the following proposition. For every step $I = 1,2,\ldots$ of the value iteration algorithm [@Bertsekas2005 Vol. II, Sec. 4.3.1],  $$\begin{aligned} \label{eq:proof_K_Im1} K^{(I)}\left( \mathbf{p}_\psi \circ \frac{[a_1,\ldots,a_n]}{\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j}, \mathbf{u} \right) \sum_{j = 1}^n \mathbf{p}_\psi(j)a_j\end{aligned}$$ is concave in $\mathbf{p}_\psi$, where $a_j$ is a non-negative constant and $\circ$ is the Hadamard product. The proof is by induction over the steps of the value iteration algorithm.
At step $I = 1$, we have  $$\begin{aligned} K^{(1)}(\mathbf{p}_\psi,\mathbf{u}) = r(\mathbf{p}_\psi,\mathbf{u}) = (1-\lambda)\Delta(\mathbf{p}_\psi) + \lambda c(\mathbf{u}).\end{aligned}$$ Since, on the right-hand side, only $\Delta(\cdot)$ depends upon $\mathbf{p}_\psi$, to prove the concavity we focus on the term $\Delta(\cdot)$:  \[eq:proof\_Delta\_1\] $$\begin{aligned} &\Delta\left( \mathbf{p}_\psi \circ \frac{[a_1,\ldots,a_n]}{\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j} \right)\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j \\ &= \Bigg(1-\sum_{i = 1}^n\left(\frac{\mathbf{p}_\psi(i)a_i}{\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j}\right)^2\Bigg)\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j \\ &= \sum_{j = 1}^n \mathbf{p}_\psi(j)a_j - \frac{\sum_{j = 1}^n (\mathbf{p}_\psi(j)a_j)^2}{\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j}.\label{eq:proof_Delta_1_3}\end{aligned}$$ To prove that the previous term is concave, we compute its second-order derivative with respect to $\mathbf{p}_\psi(i)$:  $$\begin{aligned} \frac{\partial^2 \eqref{eq:proof_Delta_1_3}}{\partial \mathbf{p}_\psi(i)^2} = -2 a_i \frac{ \left(\sum_{j = 1 \atop j \neq i}^n a_j \mathbf{p}_\psi(j)\right)^2 + \sum_{j = 1 \atop j \neq i}^n \left(a_j \mathbf{p}_\psi(j)\right)^2}{\left(\sum_{j = 1}^n a_j \mathbf{p}_\psi(j)\right)^3},\end{aligned}$$ which is always smaller than or equal to zero, thus  holds for $I = 1$. Now, assume that  holds for a generic $I-1$. At step $I$ we have  $$\begin{aligned} K^{(I)}(\mathbf{p}_\psi,\mathbf{u}) = r(\mathbf{p}_\psi,\mathbf{u}) + \mathbb{E}_{\mathbf{Z},\hat{\mathbf{H}}}[J^{(I-1)}(\mathbf{p}_{\psi'}) | \mathbf{p}_\psi, \mathbf{u}].\end{aligned}$$ We consider the two terms separately. The first term $r(\mathbf{p}_\psi,\mathbf{u})$ coincides with $K^{(1)}(\cdot)$ and thus is concave when evaluated at $\frac{\mathbf{p}_\psi(i)a_i}{\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j}$ and multiplied by $\sum_{j = 1}^n \mathbf{p}_\psi(j)a_j$.
The second term can be expressed as in Equation   $$\begin{aligned} \label{eq:proof_E_J_Im1} \begin{split} &\mathbb{E}[J^{(I-1)}(\mathbf{p}_{\psi'}) | \mathbf{p}_\psi, \mathbf{u}] = \mathbb{E}[\min_{\mathbf{u}'} \{ K^{(I-1)}(\mathbf{p}_{\psi'}, \mathbf{u}') \} | \mathbf{p}_\psi, \mathbf{u}] \\ & = \int \min_{\mathbf{u}'} \{ K^{(I-1)}(\mathbf{p}_{\psi'}, \mathbf{u}') \} \mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathbf{p}_\psi, \mathbf{u}) \ \mbox{d}\mathbf{z} \ \mbox{d}\hat{\mathbf{h}}, \end{split}\end{aligned}$$ where $\mathbf{p}_{\psi'}$ is derived in Equation . Note that the term $\mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathbf{p}_\psi, \mathbf{u})$ can be moved inside the $\min$-operator. Using the inductive hypothesis and defining $a_i \triangleq \mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_i, \mathbf{u})$, we have that every argument of the $\min$-operation is concave, thus  is concave and the thesis is proved. With the previous proposition, it is straightforward to also show that $K^{(I)}\left( \mathbf{p}_\psi, \mathbf{u} \right) - r(\mathbf{p}_\psi,\mathbf{u})$ is concave for every $I = 1,2,\ldots$, which is equivalent to . Proof of Corollary \[corol:J\_concave\] {#proof:J_concave} ======================================= We want to prove that  $$\begin{aligned} J(\mathbf{p}_\psi) \geq (1-\lambda)\Delta(\mathbf{p}_\psi) \!+\! \sum_{i = 1}^n \alpha_i (J(\mathbf{b}^{(i)}) \!-\! (1 \!-\! \lambda)\Delta(\mathbf{b}^{(i)})),\end{aligned}$$ where $\mathbf{b}^{(i)}$ and $\alpha_i$ are defined as in Theorem \[thm:K\_conc\]. By definition,  $$\begin{aligned} J(\mathbf{p}_\psi) \!-\! (1\!-\!\lambda)\Delta(\mathbf{p}_\psi) = \min_{\mathbf{u}} \{K(\mathbf{p}_\psi,\mathbf{u}) \!-\! 
(1\!-\!\lambda)\Delta(\mathbf{p}_\psi) \}\end{aligned}$$ and using Theorem \[thm:K\_conc\], the right-hand side can be lower bounded by  $$\begin{aligned} \begin{split} &\min_{\mathbf{u}} \{K(\mathbf{p}_\psi,\mathbf{u}) \!-\! (1\!-\!\lambda)\Delta(\mathbf{p}_\psi) \} \geq \min_{\mathbf{u}} \{r(\mathbf{p}_\psi,\mathbf{u}) \\ &+\! \sum_{i = 1}^n\! \alpha_i (K(\mathbf{b}^{(i)},\!\mathbf{u}) \! -\! r(\mathbf{b}^{(i)},\! \mathbf{u})) \!-\! (1\!-\!\lambda)\Delta(\mathbf{p}_\psi) \}. \end{split}\label{eq:proof_corol_2}\end{aligned}$$ The terms $r(\mathbf{p}_\psi,\mathbf{u}) - (1\!-\!\lambda)\Delta(\mathbf{p}_\psi)$ can be reduced to $\lambda c(\mathbf{u})$ and the term $-\sum_{i = 1}^n \alpha_i r(\mathbf{b}^{(i)},\! \mathbf{u})$ can be simplified as  $$\begin{aligned} -\sum_{i = 1}^n \alpha_i r(\mathbf{b}^{(i)},\! \mathbf{u}) = -\sum_{i = 1}^n \alpha_i (1-\lambda)\Delta(\mathbf{b}^{(i)}) - \lambda c(\mathbf{u})\end{aligned}$$ because $\sum_{i = 1}^n \alpha_i = 1$ in Theorem \[thm:K\_conc\]. Combining the previous expression and , we obtain  $$\begin{aligned} \begin{split} &J(\mathbf{p}_\psi) \!-\! (1\!-\!\lambda)\Delta(\mathbf{p}_\psi) \geq \min_{\mathbf{u}} \{\lambda c(\mathbf{u}) \\ &+ \sum_{i = 1}^n \alpha_i (K(\mathbf{b}^{(i)},\mathbf{u}) \! -\! (1-\lambda)\Delta(\mathbf{b}^{(i)})) - \lambda c(\mathbf{u}) \}, \end{split}\end{aligned}$$ which coincides with  and concludes the proof. 
Proof of Theorem \[thm:belief\_simplified\] {#proof:belief_simplified} =========================================== First, we show by induction over $\nu$ that $\mathbf{p}_{\psi'}(i | \nu)$ defined as in  is equivalent to (for notation simplicity we neglect the dependencies on the channel in the pdfs $f_{Z_s}(\cdot)$)  $$\begin{aligned} \label{eq:proof_x_k_F_kv} &\bar{\mathbf{p}}_{\psi'}(i | \nu) = \frac{\textstyle \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_i) \mathbf{p}_{\psi}(i)}{\textstyle\sum_{j = 1}^n \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_j) \mathbf{p}_{\psi}(j)}.\end{aligned}$$ For $\nu = 1$, the two expressions coincide. Assume that they coincide for a generic index $\nu \geq 1$. Then, for $\nu+1$, substitute  in  and obtain  $$\begin{aligned} & \mathbf{p}_{\psi'}(i | \nu+1) = \frac{\textstyle f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_i) \mathbf{p}_{\psi'}(i|\nu)}{\textstyle\sum_{j = 1}^n f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_j) \mathbf{p}_{\psi'}(j|\nu)} \\ & = \frac{\textstyle f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_i) \bar{\mathbf{p}}_{\psi'}(i|\nu)}{\textstyle\sum_{j = 1}^n f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_j) \bar{\mathbf{p}}_{\psi'}(j|\nu)} \\ & = \frac{\textstyle f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_i) \frac{\textstyle \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_i) \mathbf{p}_{\psi}(i)}{\textstyle\sum_{\ell = 1}^n \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_\ell) \mathbf{p}_{\psi}(\ell)}}{\textstyle\sum_{j = 1}^n f_{\mathrm{Z}_s}(\mathrm{z}^{(\nu+1)} | e_j) \frac{\textstyle \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_j) \mathbf{p}_{\psi}(j)}{\textstyle\sum_{\ell = 1}^n \prod_{w=1}^{\nu} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_\ell) \mathbf{p}_{\psi}(\ell)}} \nonumber\\ & = \frac{\textstyle \prod_{w=1}^{\nu+1} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_i) \mathbf{p}_{\psi}(i)}{\textstyle\sum_{j = 1}^n \prod_{w=1}^{\nu+1} f_{\mathrm{Z}_s}(\mathrm{z}^{(w)} | e_j) \mathbf{p}_{\psi}(j)} =
\bar{\mathbf{p}}_{\psi'}(i | \nu+1).\end{aligned}$$ Thus, the two expressions coincide. Then, substitute  for $v = N^{\mathbf{u}}$ in  to obtain  $$\begin{aligned} \label{eq:proof_p_psi_prime} \mathbf{p}_{\psi'}(i) = \frac{\mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_i, \mathbf{u}) \prod_{v=1}^{N^{\mathbf{u}}} f_{\mathrm{Z}_s}(\mathrm{z}^{(v)} | e_i) \mathbf{p}_{\psi}(i)}{\sum_{j = 1}^n \mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_j, \mathbf{u}) \prod_{v=1}^{N^{\mathbf{u}}} f_{\mathrm{Z}_s}(\mathrm{z}^{(v)} | e_j) \mathbf{p}_{\psi}(j)}.\end{aligned}$$ The product $\prod_{v=1}^{N^{\mathbf{u}}} f_{\mathrm{Z}_s}(\mathrm{z}^{(v)} | e_i)$ can be rewritten using the definition of $\mathrm{z}^{(v)}$:  $$\begin{aligned} &\prod_{v=1}^{N^{\mathbf{u}}} f_{\mathrm{Z}_s}(\mathrm{z}^{(v)} | e_i) = \prod_{s = 1}^S \prod_{u = 1}^{N_s^{\mathbf{u}}} f_{\mathrm{Z}_s}(\mathrm{z}_{u,s} | e_i).\end{aligned}$$ When we explicitly write the dependencies on the channel gains, the previous formula becomes equivalent to  and can be rewritten as  $$\begin{aligned} \label{eq:proof_P_Z} &\mathbb{P}(\mathbf{Z} = \mathbf{z} | \mathrm{X}_k = e_i, \hat{\mathbf{H}} = \hat{\mathbf{h}}, \mathbf{u}).\end{aligned}$$ Combining  and , we finally obtain  $$\begin{aligned} \mathbf{p}_{\psi'}(i) & = \frac{\mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_i, \mathbf{u}) \mathbb{P}(\mathbf{Z} = \mathbf{z} | \mathrm{X}_k = e_i, \hat{\mathbf{H}} = \hat{\mathbf{h}}, \mathbf{u}) \mathbf{p}_{\psi}(i)}{\sum_{j = 1}^n \mathbb{P}(\hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X} = e_j, \mathbf{u}) \mathbb{P}(\mathbf{Z} = \mathbf{z} | \mathrm{X}_k = e_j, \hat{\mathbf{H}} = \hat{\mathbf{h}}, \mathbf{u}) \mathbf{p}_{\psi}(j)} \\ & = \frac{\mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X}_k = e_i, \mathbf{u}) \mathbf{p}_{\psi}(i)}{\sum_{j = 1}^n \mathbb{P}(\mathbf{Z} = \mathbf{z}, \hat{\mathbf{H}} = \hat{\mathbf{h}} | \mathrm{X}_k = e_j,
\mathbf{u}) \mathbf{p}_{\psi}(j)},\end{aligned}$$ which coincides with  and concludes the proof. [^1]: For ease of notation, in the following we use $\mathbb{P}(\cdot)$ also to refer to probability density functions. [^2]: In the remainder of the paper, we always use  when we deal with the channel probabilities. [^3]: Ideally, $f_{\mathrm{C}_{\iota,s}}(\mathrm{c}_{\iota,s,k} | e_i, N_s^{\mathbf{u}_{k-1}})$ would become a single Dirac delta function when many channel measurements have been collected. [^4]: $R_\mu$ can also be redefined using a discount factor instead of $\frac{1}{K}$ if the main focus is on the initial time slots. All our results can be straightforwardly extended to such a case. [^5]: We remark that, even if the minimum size of $\mathcal{B}$ is $n$, it may be more convenient to use $|\mathcal{B}| > n$. [^6]: Note that $\mathbf{p}_{\psi'}(i | \nu)$ implicitly depends upon $\hat{h}^{(1)},\ldots,\hat{h}^{(\nu)}$. [^7]: When the channel gain is $1$ and $\sigma_{\rm ch} = 0.05$, there is an uncertainty of $5\%$ in the channel estimation. As the channel gain decreases, the relative uncertainty increases. [^8]: According to Subsection \[subsec:BBP\], we remark that we should formally write $R_{\mu^\star}$ and $\tilde{R}_{\mu_{\mathcal{B}}^\star}$ (i.e., when computing the lower bound the policy is computed only in a subset of the belief space). In the following, we omit the subscript for notation clarity.
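Theorem \[thm:belief\_simplified\] states, in effect, that updating the belief one measurement at a time reproduces the batch product-form update. A minimal numerical sketch of that equivalence (hypothetical Gaussian likelihoods over a 3-state example, not the paper's channel model):

```python
import numpy as np

def update_once(p, z, lik):
    """One Bayes step: p'(i) ∝ f(z | e_i) p(i)."""
    w = lik(z) * p
    return w / w.sum()

def update_batch(p, zs, lik):
    """Batch product form: p'(i) ∝ (∏_w f(z_w | e_i)) p(i)."""
    w = p.copy()
    for z in zs:
        w = w * lik(z)
    return w / w.sum()

# hypothetical 3-state example with Gaussian likelihoods
means = np.array([0.0, 1.0, 2.0])
lik = lambda z: np.exp(-0.5 * (z - means) ** 2)

p0 = np.array([0.5, 0.3, 0.2])
zs = np.array([0.9, 1.4, 0.2, 1.1, 1.8])

p_seq = p0
for z in zs:
    p_seq = update_once(p_seq, z, lik)
p_bat = update_batch(p0, zs, lik)

assert np.allclose(p_seq, p_bat)   # sequential updates equal the batch update
```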
--- abstract: | In this paper, we study the production of the newly detected states $D_{sJ}(3040)$ and $D_J(3000)$ observed by the BABAR Collaboration and the LHCb Collaboration. We assume these states to be the $D_s(2P)$ and $D(2P)$ states with the quantum number $J^P=1^+$. The results of the improved Bethe-Salpeter method indicate that the semi-leptonic decays of $B_s$ and $B$ into $D_{sJ}(3040)$ and $D_J(3000)$ have considerable branching ratios, for example, Br($\overline{B}_s^0 \rightarrow D{_{sJ}^+}(3040)e^-\overline{\nu}{_e}$)=$5.79\times10^{-4}$, Br($\overline{B}^0\rightarrow D_{J}^+(3000)e^-\overline{\nu}{_e}$)=$2.63\times10^{-4}$, which shows that these semi-leptonic decays should be accessible in experiments. [**Keywords:**]{} $D_{sJ}(3040)$; $D_J(3000)$; Semi-leptonic Decay; Improved Bethe-Salpeter Method. address: | $^1$Department of Physics, Harbin Institute of Technology, Harbin, 150001\ $^2$School of Electrical & Information Engineering, Beifang University of Nationalities, Yinchuan, 750021 author: - 'Si-chen Li$^{[1]}$, Yue Jiang$^{[1]}$[^1], Tian-hong Wang$^{[1]}$, Qiang Li$^{[1]}$, Zhi-hui Wang$^{[2]}$, Guo-Li Wang$^{[1]}$' title: 'Semi-leptonic Production of $D_{sJ}(3040)$ and $D_J(3000)$ in $B_s$ and $B$ Decays' --- INTRODUCTION ============ The study of charmed and charmed-strange mesons has made great progress in recent years, attracting a great deal of interest in revealing their properties. More and more new resonances have been observed in experiments. For example, in the charmed-strange family, $D_{s1}^*(2700)^{\pm}$ was reported by the Belle Collaboration through the cascaded decay $B^+\rightarrow \overline{D}^0D_{s1}\rightarrow \overline{D}^0 D^0 K^+$ and identified with a $1^-$ assignment [@1], and $D_{sJ}^*(2860)^{\pm}$ was discovered by the BABAR Collaboration in $D_{sJ}(2860)\rightarrow D^0K^+, D^+K_s^0$ [@2], which is very likely a $3^-$ state.
In the charmed family, $D(2550)$, $D(2600)$, $D(2750)$, and $D(2760)$ were observed by the BaBar Collaboration via an analysis of helicity distributions [@3]. $D(2550)$ and $D(2600)$ are tentatively identified as the $2S$ doublet $(0^-, 1^-)$, while $D(2750)$ and $D(2760)$ belong to the $1D$ doublet $(2^-, 3^-)$ [@3.1]. Recently, two new resonances have been detected experimentally with masses around $3000$ MeV. $D_{sJ}(3040)^+$ was observed in the $D^*K$ invariant mass spectrum in inclusive $e^+e^-$ collisions by BABAR [@4], and is a good candidate for the radial excitation of $D_{s1}(2460)^+$ [@5]. In the $D^{+}\pi^{-}$ and $D^{0}\pi^{+}$ mass spectra, $D_J(3000)^0$ was observed by the LHCb Collaboration [@6], and could be interpreted as the radial excitation of $D_{1}(2430)^0$. Their masses and full widths are [@4; @6] $$\begin{aligned} \begin{aligned} m_{D_{sJ}(3040)^{+}}=\left(3044\pm8^{+30}_{-5}\right)\mathrm{MeV},\\ \varGamma_{D_{sJ}(3040)^{+}} =\left(239\pm35^{+46}_{-42}\right)\mathrm{MeV},\\ m_{D_{J}(3000)^{0}}=\left(2971.8\pm8.7\right)\mathrm{MeV},\\ \varGamma_{D_{J}(3000)^{0}} =\left(188.1\pm44.8\right)\mathrm{MeV}.\\ \end{aligned}\end{aligned}$$ Regarding radially excited $D_s$ and $D$ states, several works have addressed their mass spectra and strong decays [@7; @7.1; @7.2; @7.3]. Notably, no heavy-light $2P$ state has been confirmed by experiment (only the charmonium and bottomonium $2P$ states have been), which means that the study of charmed and charmed-strange $2P$ states will enlarge our knowledge of bound states and deepen the understanding of nonperturbative QCD. We notice that $D_{sJ}(3040)$ and $D_J(3000)$, assumed to be radial excitations of $D_{s1}(2460)$ and $D_{1}(2430)$ in recent studies, can be produced via the semileptonic decays of $B_{s}$ and $B$, which are different from the observed production processes.
Previous studies show that semi-leptonic decays could be a good platform to produce charmed and charmed-strange mesons; for instance, the process $B_{s}\rightarrow D_{s1}(2460)l\overline\nu_{l}$ has been calculated through the relativistic quark model based on the quasipotential approach [@7.5], three-point QCD sum rules [@7.6], QCD sum rules under HQET [@7.7], the constituent quark meson model [@7.8], and the instantaneous Bethe-Salpeter method [@8]. The agreement of the results at order $10^{-3}$ across these models indicates that the semi-leptonic decays have considerable branching ratios. In addition, the study of semi-leptonic decays provides an extra source of information for the determination of CKM matrix elements and for the relativistic quark dynamics inside heavy-light mesons. In this paper, we explore the production of $D_{sJ}(3040)$ and $D_J(3000)$ with the improved B-S (Bethe-Salpeter) method, and give results for the form factors as well as the branching ratios. The rest of this paper is organized as follows. In section 2 we deduce the formulation of the semi-leptonic decay; the hadronic matrix elements of the production are given in section 3; numerical results and discussions are presented in section 4. THE FORMULATIONS OF SEMI-LEPTONIC DECAY ======================================= We take ${\overline{B}^{0}_{s}}\rightarrow D^{+}_{sJ}(3040) l^{-}_{}\overline\nu_{l}$ as an example to illustrate this type of process. The Feynman diagram of this semi-leptonic decay is drawn in Figure 1.
The amplitude of ${\overline{B}^{0}_{s}}\rightarrow D^{+}_{sJ}(3040) l^{-}_{}\overline\nu_{l}$ is [@8] $$T=\frac{G_F}{\sqrt{2}}V_{cb}\overline{u}(p_l)\gamma ^{\xi }(1-\gamma _5)\nu (p_{\nu _{l}})\left\langle D_{sJ}^+(3040)(P_f) |J_\xi | \overline{B}_{s}^0(P)\right\rangle ,$$ where $V_{cb}$ is the CKM matrix element, $G_F$ is the Fermi constant, $J_\xi =V_\xi -A_\xi $ is the charged weak current, in which $V_\xi =\overline{c}\gamma_\xi b $ and $A_\xi=\overline{c} \gamma_\xi\gamma_5b$, and $P$ and $P_f$ are the momenta of the initial meson $\overline{B}_{s}^0$ and final meson $D_{sJ}^+(3040)$, respectively. Thus the square of the amplitude is $$|T|^2=\frac{G_F^2}{2}|V_{cb}|^2l^{\xi\xi'} h_{\xi\xi'},$$ where the leptonic tensor can be simplified as $$l^{\xi\xi'}=8\left(p_{\nu_{l}}^{\xi} p_{l}^{\xi' } +p_{l}^{\xi }p_{\nu_{l}}^{\xi'}-(p_{\nu_l}\cdot p_{l})g^{\xi \xi' }+i\epsilon^{\xi \xi' \alpha \beta }p_{\nu_l \alpha }p_{l \beta } \right),$$ and the hadronic tensor is defined as $$h_{\xi\xi'}=\left\langle \overline{B}_{s}^0(P) |J _\xi ^{ \dagger} | D_{sJ}^+(3040)(P_f)\right\rangle \left\langle D_{sJ}^+(3040)(P_f) |J_{\xi'} | \overline{B}_{s}^0(P)\right\rangle,$$ which can be expressed in terms of form factors; the explicit forms are presented in the next section. HADRONIC MATRIX ELEMENT OF SEMI-LEPTONIC DECAY ============================================== The calculation of the hadronic matrix element is model-dependent. In this paper, we determine the hadronic matrix element through the instantaneous Bethe-Salpeter method with the Mandelstam formalism. As a relativistic quark model, the instantaneous Bethe-Salpeter method has been applied to many transitions among heavy-light mesons. More details about the instantaneous Bethe-Salpeter equation are given in Appendix A. Heavy-light mesons can be classified into doublets based on the total angular momentum of the light quark, $s_l$.
We can categorize the heavy mesons into several doublets; for example, the S doublet is $(0^+, 1^+)$ with $s_l = \frac{1}{2}$, and the T doublet is $(1^+, 2^+)$ with $s_l = \frac{3}{2}$, so the $1^+$ states can be labeled as $P_1^{1/2}$ and $P_1^{3/2}$. In our method, we solve the Salpeter equation and obtain the wave functions of the $^3P_1$ and $^1P_1$ states, whose forms are given in Appendix B; the physical states are then mixtures of the $^3P_1$ and $^1P_1$: $$\label{mixing} \begin{aligned} &\left|\frac{3}{2} \right \rangle = {\cos{\theta}}\left|^1P_1\right\rangle+ {\sin {\theta}}\left |^3P_1\right\rangle ,\\ &\left|\frac{1}{2} \right \rangle = -{\sin{\theta}}\left|^1P_1\right\rangle +{\cos {\theta}} \left|^3P_1\right\rangle. \end{aligned}$$ In the heavy quark limit $m_Q\rightarrow \infty $, the mixing angle is $\theta\approx 35.3^{\circ}$ [@9]. $D_{sJ}(3040)$ is assumed in this paper to be the radial excitation of $D_{s1}(2460)$, i.e., a $P_1^{1/2}$ state. Its partner, corresponding to the $P_1^{3/2}$ state, has not been discovered yet.
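The mixing above is an orthogonal rotation of the $(^1P_1,\,^3P_1)$ basis. A minimal numerical sketch (with the heavy-quark-limit angle) just checks that the rotation preserves the orthonormality of the physical states:

```python
import numpy as np

theta = np.radians(35.3)   # heavy-quark-limit mixing angle

# |3/2> =  cosθ |1P1> + sinθ |3P1>
# |1/2> = -sinθ |1P1> + cosθ |3P1>
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# the rotation matrix is orthogonal, so the physical states stay orthonormal
assert np.allclose(R @ R.T, np.eye(2))
```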
By the B-S method with the instantaneous approach, the hadronic matrix element can be written as the overlapping integral over the initial and final B-S wave functions [@8]: $$\begin{aligned} \nonumber \left\langle D_{sJ}^+\left(P_f\right) \left(^1{ P}_1\right)|J_{\xi} | \overline{B}_s^0\left(P\right) \right\rangle&&=i \int\frac{{\rm d}^4q}{(2\pi )^4} {\rm{Tr}} \left[ \overline{\chi}_{D_{sJ}}(P_f, P, q_1)(\alpha_1 \slashed{P} + \slashed{q}-m_s ) \gamma_\xi (1-\gamma_5) \chi_{B_s^0}(P, q) \right]\\ \nonumber&&=\int\frac{{\rm d}\vec{q}}{(2\pi)^3}{\rm{Tr}}\left[ \overline{\varphi}_{1^{+}}^{++}\left(^1{ P}_1\right)\left(\vec{q}_1\right)\gamma_\xi \left(1-\gamma_5\right) {\varphi}_{0^{-}}^{++}\left(\vec{q}\right) \frac{ \slashed{P}}{M}\right]\\ &&=\epsilon_\mu \left(t_1P_\xi P^\mu +t_2P_{f\xi} P^\mu +t_3g^{\; \;\mu} _{ \xi} +t_4\epsilon ^{\; \; PP_1\mu }_\xi\right),\end{aligned}$$ $$\begin{aligned} \hspace{-0.7in} \nonumber \left\langle D_{sJ}^+(P_f) \left(^3{ P}_1\right)|J_{\xi}| \overline{B}_s^0(P) \right\rangle&&=\int\frac{{\rm d}\vec{q}}{(2\pi)^3}{\rm{Tr}}\left[\overline{\varphi}_{1^{+}}^{++}\left(^3{ P}_1\right)\left(\vec{q}_1\right)\gamma_\xi \left(1-\gamma_5\right) {\varphi}_{0^{-}}^{++}(\vec{q}) \frac{ \slashed{P}}{M}\right] \\ &&=\epsilon_\mu \left(t_5P_\xi P^\mu +t_6P_{f\xi} P^\mu +t_7g^{\; \;\mu} _\xi +t_8\epsilon ^{\; \; PP_1\mu }_\xi\right),\end{aligned}$$ where $\vec{q}$ and $\vec{q}_1$ are the relative three-momenta between the quark and anti-quark in the initial and final states, respectively. The form factors $t_1$ to $t_8$ are given in Appendix C. The wave functions adopted above are those of the $^1P_1$ and $^3P_1$ states.
Due to the mixing of the physical states, the form factors for the $P^{1/2}$ and $P^{3/2}$ states are given by: $$\begin{aligned} \begin{aligned} x_{i+4}= t_i\cos\theta +t_{i+4}\sin\theta ,\\ x_i=-t_i\sin\theta +t_{i+4}\cos\theta , \end{aligned}\end{aligned}$$ where $i=1, 2, 3, 4.$ Note also that the masses of the $^1P_1$ and $^3P_1$ states differ from those of the $P^{1/2}$ and $P^{3/2}$ states; they are related through the same mixing [@11]: $$\begin{aligned} \begin{aligned} &m^2_{^1P_1}=m^2_{\frac{1}{2}} \sin^2\theta +m^2_{\frac{3}{2}} \cos ^2 \theta,\\ &m^2_{^3P_1}=m^2_{\frac{1}{2}} \cos^2\theta +m^2_{\frac{3}{2}} \sin ^2 \theta. \end{aligned}\end{aligned}$$ Given the form factors, the semi-leptonic decay width is $$\begin{aligned} \begin{aligned} \varGamma =&\frac{G_F^2V_{cb}^2M^3}{32\pi ^3} \int \frac{p_l}{E_l} {\rm d} \vec{p}_l \int \frac{p_f}{E_f}{\rm d}\vec{p}_f \left \{ 2\alpha \left( \frac{y}{M^2}\right ) +\beta_{++} \left[ 4\left (2x \left (1-\frac{M_f^2}{M^2}+y\right )-4x^2-y\right ) \right. \right. \\ &\left. \left. +\frac{m_l^2}{M^2}\left (8x+4\frac{M_f^2}{M^2}-3y-\frac{m_l^2}{M^2}\right )\right]+\left(\beta_{\pm }+\beta_{\mp}\right)\frac{m_l^2}{M^2}\left(2-4x+y-2\frac{M_f^2}{M^2}\right)\right.\\ &\left.+ \beta_{--} \frac{m_l^2}{M^2}\left(y-\frac{m_l^2}{M^2}\right) + 2\gamma \left[y\left(1-4x+y-\frac{M_f^2}{M^2}\right)+ \frac{m_l^2}{M^2}\left(1+y-\frac{M_f^2}{M^2}\right) \right] \right \}, \end{aligned}\end{aligned}$$ where $M_f$ and $M$ are the masses of the final and initial mesons, respectively, and $m_l$ is the mass of the corresponding lepton.
$\alpha$, $\beta_{\pm \pm}$ and $\gamma$ are coefficients given as functions of the form factors: $$\begin{aligned} &x=\frac{E_l}{M}, y=\frac{\left(p-p_f\right)^2}{M^2},\\ &\alpha =x_3^2+x_4^2 M^2 p_f^2,\\ &\beta_{++} =p_f^2 \frac{\left(x_1+x_2\right)^2}{4 M_f^2}+\frac{\left(2ME_f-M^2-M_f^2\right) x_4^2}{4}+\frac{x_3^2}{4 M_f^2}+\left(\frac{ME}{M_f^2}-1\right)\frac{\left(x_1+x_2\right)x_3}{2M},\\ &\beta_{+-} =\beta_{-+}=p_f^2\frac{(x_1+x_2)(x_1-x_2)}{4M_f^2}+\frac{\left(M^2-M_f^2\right)}{4}-\left(x_1+\frac{x_2EM}{M_f^2}\right)\frac{x_3}{2M}-\frac{x_3^2}{4M_f^2},\\ &\beta_{--} =p_f^2\frac{\left(x_1-x_2\right)^2}{4M_f^2}-\frac{\left(2ME+M_f^2+M^2\right)x_4^2}{4}+\frac{x_3^2}{4M_f^2}+\left(1+\frac{ME}{M_f^2}\right)\frac{\left(x_2-x_1\right)x_3}{2M},\\ &\gamma=-x_3 x_4. \end{aligned}$$ NUMERICAL RESULTS AND ANALYSIS ============================== form factors ------------ In our model, the input parameters are chosen as follows: $\lambda$ =0.21 GeV$^2$, $\varLambda _{\rm QCD}$=0.27 GeV, a=e=2.71, $\alpha $=0.06 GeV, $m_b$=4.96 GeV, $m_s$=0.50 GeV, $m_c$=1.62 GeV, $m_d$=0.311 GeV, which best fit the mass spectra of the related mesons [@12]. For the semi-leptonic decays, we also need the CKM matrix element $V_{cb}$=0.0406 and the lifetime of the initial meson $\tau _{B_{s0}}=1.469\times 10^{-12}$ s; the masses $m_{B^0}$=5279.58 MeV and $m_{B^0_s}$=5366.77 MeV are taken from the PDG [@13]. Since the partners of $D_{sJ}(3040)$ and $D_J(3000)$ have not been discovered yet, the masses required in our calculation are taken as 3022.3 MeV and 2913.8 MeV for $D_{s}(2P^{3/2}_{1})$ and $D(2P^{3/2}_{1})$, respectively. Varying all the input parameters simultaneously within $\pm$ 5% of the central values, we obtain the uncertainties of the branching ratios. To show the numerical results of the wave functions explicitly, we plot the $^1P_1$ and $^3P_1$ states of the $D_s(2P)$ meson in Figure 2. We can see that the $^1P_1$ and $^3P_1$ states share the same shape.
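The mass-mixing relation quoted earlier can be evaluated directly with these inputs. A small sketch (masses in MeV from the text, taking $D_{sJ}(3040)^+$ as the $D_s(2P_1^{1/2})$ state and using the assumed $D_s(2P_1^{3/2})$ partner mass):

```python
import numpy as np

theta   = np.radians(35.3)   # heavy-quark-limit mixing angle
m_half  = 3044.0             # MeV, D_sJ(3040)+ taken as the Ds(2P_1^{1/2}) state
m_three = 3022.3             # MeV, assumed Ds(2P_1^{3/2}) partner mass

# m^2(1P1) = m^2_{1/2} sin^2θ + m^2_{3/2} cos^2θ, and conversely for 3P1
m1P1 = np.sqrt(m_half**2 * np.sin(theta)**2 + m_three**2 * np.cos(theta)**2)
m3P1 = np.sqrt(m_half**2 * np.cos(theta)**2 + m_three**2 * np.sin(theta)**2)

# the mixing leaves the sum of squared masses invariant
assert np.isclose(m1P1**2 + m3P1**2, m_half**2 + m_three**2)
```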
As an example, the form factors $x_1$ to $x_4$ are shown in Figure 3, where $t=(P-P_f)^2=M^2+M_f^2-2ME_f$ and $t_m$ is the maximum of $t$. branching ratios ---------------- for $D_{sJ}(3040)$ {#for-d_sj3040 .unnumbered} ------------------ In the table below, we show the branching ratios of the semi-leptonic production of $D_{s}(2P)^{+}$. Generally, the cases of $e$ and $\mu$ are 2 orders of magnitude larger than that of $\tau$ due to the phase space. We also notice that the branching ratios of $\overline{B}_s^0\rightarrow D{_{sJ}^+}(P{_{1}^{3/2}})l^-\overline{\nu}{_l}$ are 10 times larger than those of $\overline{B}_s^0\rightarrow D{_{sJ}^+}(P{_{1}^{1/2}})l^-\overline{\nu}{_l}$. Ref. [@14] calculates the same process via a covariant light-front quark model. The results in Ref. [@15] are obtained with a modified harmonic-oscillator light-front wave function and with a light-front quark model within HQET. Our results are well consistent with the light-front quark model within HQET but show some discrepancy with the other two sets of results. All these results indicate that more theoretical work is needed in the future.
Channel & **ours** & **[@14]** & **[@15]** (HO) & **[@15]** (HQET)\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(3040)e^-\overline{\nu}{_e}$ & $(5.79^{+2.1}_{-2.0})\times 10^{-4}$ & -- & $(2.49^{+0.4}_{-0.4})\times 10^{-4}$ & $5.6\times 10^{-4}$\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(P{_{1}^{3/2}})e^-\overline{\nu}{_e}$ & $(2.34^{+1.30}_{-1.04})\times 10^{-3}$ & -- & $(2.42^{+0.07}_{-0.14})\times 10^{-3}$ & $1.24\times 10^{-3}$\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(3040)\mu^- \overline{\nu}{_\mu }$ & $(5.77^{+2.15}_{-2.07})\times 10^{-4}$ & $(3.5^{+1.1}_{-1.0})\times 10^{-4}$ & $(2.46^{+0.4}_{-0.42})\times 10^{-4}$ & $5.6\times 10^{-4}$\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(P{_{1}^{3/2}})\mu^- \overline{\nu}{_\mu }$ & $(2.36^{+1.28}_{-1.06})\times 10^{-3}$ & $(4.0^{+0.4}_{-0.5})\times 10^{-3}$ & $(2.39^{+0.07}_{-0.13})\times 10^{-3}$ & $1.24\times 10^{-3}$\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(3040)\tau^- \overline{\nu}{_\tau }$ & $(4.07^{+1.95}_{-1.74})\times 10^{-6}$ & $(9.9^{+4.4}_{-3.5})\times 10^{-6}$ & $(5.2^{+0.4}_{-0.5})\times 10^{-6}$ & --\
$\overline{B}_s^0\rightarrow D{_{sJ}^+}(P{_{1}^{3/2}})\tau^- \overline{\nu}{_\tau }$ & $(3.49^{+2.39}_{-1.78})\times 10^{-5}$ & $(9.7^{+0.8}_{-0.8})\times 10^{-5}$ & $(0.43^{+0}_{-0.01})\times 10^{-6}$ & --\
Due to the lack of data on the $D_s(2P)$ states, we give, as a comparison, information on the $1P$ state with $J^P=1^+$.
The branching ratio of the cascaded decay is Br$(B_s^0\rightarrow D_{s1}(2536)^-\mu ^+\nu _{\mu })\times $Br$(D_{s1}(2536)^-\rightarrow D^{*-}K^0_s$ )=$(2.5\pm0.7)\times 10^{-3}$, and the branching ratio of the strong decay is $0.85\pm0.12$ [@13], so the branching ratio of the semi-leptonic decay into the $1P$ state is $2.94^{+1.44}_{-1.09}\times 10^{-3}$. The first radial excitation of $D_{s1}(2536)^-$ corresponds to $D_{s1}(P_1^{3/2})^-$, whose production rate via semi-leptonic decay is $2.34\times 10^{-3}$ in our method [@8]; this may imply that our results are reliable. Although the production ratio of $D_{sJ}(3040)$ is small in $\overline{B}^0_{s}$ semi-leptonic decay, considering that the LHCb experiment will produce more than $10^6$ $B_s$ mesons per running year [@15], branching ratios of $\overline{B}^0_s\rightarrow D_{sJ}(3040)^{+} e^{-}\overline\nu_{e}$ around $10^{-4}$ are considerable and should be accessible in the current $B_s$ decay data. The semi-leptonic approach therefore has a promising prospect for producing $D_{sJ}(3040)$.
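The $1P$ benchmark above is simple arithmetic: dividing the measured cascade branching ratio by the strong-decay fraction recovers the quoted semi-leptonic branching ratio, and the asymmetric error follows from the quoted uncertainties. A quick check with the central values from the text:

```python
# central values quoted in the text (PDG-based)
br_cascade = 2.5e-3   # Br(Bs0 -> Ds1(2536)- mu+ nu) x Br(Ds1(2536)- -> D*- K0s)
br_strong  = 0.85     # Br(Ds1(2536)- -> D*- K0s)

br_semileptonic = br_cascade / br_strong        # ≈ 2.94e-3, as quoted
assert abs(br_semileptonic - 2.94e-3) < 1e-5

# rough asymmetric range from the quoted errors
hi = (2.5e-3 + 0.7e-3) / (0.85 - 0.12)          # upper edge, ≈ +1.44e-3 above central
lo = (2.5e-3 - 0.7e-3) / (0.85 + 0.12)          # lower edge, ≈ -1.09e-3 below central
assert lo < br_semileptonic < hi
```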
for $D_J(3000)$ {#for-d_j3000 .unnumbered} ---------------
Channel & **ours** & **[@15]**\
$\overline{B}^0\rightarrow D_J(3000)^+ e^-\overline{\nu}{_e}$ & $(2.63^{+0.33}_{-0.68})\times 10^{-4}$ & $(2.57^{+0.39}_{-0.44})\times 10^{-4}$\
$\overline{B}^0\rightarrow D(2P{_{1}^{3/2}}){^+}e^-\overline{\nu}{_e}$ & $(2.62^{+0.64}_{-0.50})\times 10^{-4}$ & $(2.72^{+0.02}_{-0.11})\times 10^{-3}$\
$\overline{B}^0\rightarrow D_J(3000)^+\mu^- \overline{\nu}{_\mu }$ & $(2.38^{+0.60}_{-0.42})\times 10^{-4}$ & $(2.54^{+0.38}_{-0.44})\times 10^{-4}$\
$\overline{B}^0\rightarrow D(2P{_{1}^{3/2}}){^+}\mu^- \overline{\nu}{_\mu }$ & $(2.42^{+0.57}_{-0.46})\times 10^{-4}$ & $(2.69^{+0.02}_{-0.11})\times 10^{-3}$\
$\overline{B}^0\rightarrow D_J(3000)^+ \tau^- \overline{\nu}{_\tau }$ & $(1.81^{+0.54}_{-0.30})\times 10^{-6}$ & $(5.2^{+0.4}_{-0.5})\times 10^{-6}$\
$\overline{B}^0\rightarrow D(2P{_{1}^{3/2}}){^+}\tau^- \overline{\nu}{_\tau }$ & $(4.44^{+0.76}_{-0.59})\times 10^{-6}$ & $(0.603^{+0}_{-0.02})\times 10^{-4}$\
In the table above, the results for $\overline{B}^0\rightarrow D^{+}(2P) l^{-}_{}\overline\nu_{l}$ are presented. Our results show that the branching ratios into the two doublets are of the same order: $10^{-4}$ for $e$ and $\mu$, and $10^{-6}$ for $\tau$. The results from the light-front quark model [@15] are also of order $10^{-4}$ for the $2P^{1/2}_{1}$ state, but are one order of magnitude larger than ours for the $2P^{3/2}_{1}$ state.
To shed some light on this discrepancy, we list the results for $\overline{B}^0\rightarrow D^{+}(1P) l^{-}_{}\overline\nu_{l}$ as a comparison. In the table below, we give the cascaded decays of the $D(1P)$ states, in which $D_1(2430)$ and $D_1(2420)$ are the $D(1P_1^{1/2})$ and $D(1P_1^{3/2})$ states, respectively. Considering that the listed strong decays of the $D_1$ states are dominant channels, with fractions of around $67\%$ due to isospin symmetry, we note that for $D_1(1P^{3/2}_1)$ and $D_1(1P^{1/2}_1)$ the experimental branching ratios of semi-leptonic production are almost the same, about $4.5\times 10^{-3}$. Our results are consistent with these data. If the behavior of the $2P$ states is similar to that of the $1P$ states, our results seem the more reasonable.
& **ours** & **exp [@13]**\
${\rm Br}(\overline{B}^0\rightarrow D_1(2430)^- l^+\overline{\nu}{_l})\times {\rm Br}(D_1(2430)^- \rightarrow \overline{D}^{*0}\pi^{-})$ & $3.92^{+0.30}_{-0.39}\times 10^{-3}$ & $(3.1\pm 0.9)\times 10^{-3}$\
${\rm Br}(\overline{B}^0\rightarrow D_1(2420)^- l^+\overline{\nu}{_l})\times {\rm Br}( D_1(2420)^-\rightarrow \overline{D}^{*0}\pi^{-})$ & $5.51^{+0.07}_{-0.14}\times 10^{-3}$ & $(2.80\pm0.28)\times 10^{-3}$\
Similar to $B^0_s\rightarrow D^{+}_{s}(2P) l^{-}_{}\overline\nu_{l}$, the branching ratios are large enough to be observed in experiment, so we suggest that the LHCb and Belle Collaborations carry out studies of the semi-leptonic decays above. Possible sources of uncertainty in the results include the following factors: (1) The spin partners of $D_J(3000)$ and $D_{sJ}(3040)$ have not been detected experimentally yet; in our work, the masses of $D_{s}(2P^{3/2}_{1})$ and $D(2P^{3/2}_{1})$ are assumed to be around 3022 MeV and 2914 MeV.
This is an important source of uncertainty. (2) The $P^{1/2}$ and $P^{3/2}$ states are mixtures of the $^1P_1$ and $^3P_1$ states. The mixing relation we use in this paper is determined by the mixing angle, which we take from the heavy-quark limit; this can deviate from the realistic mixing angle, especially for the higher radial excitations [@16], and is another possible source of uncertainty. Much work remains to be done in the future to reduce these uncertainties and make the predictions more precise. for $3P$ states {#for-3p-states .unnumbered} --------------- Although no $3P$ state of the $D_s$ or $D$ meson has been observed in experiment yet, we give a very preliminary prediction with our method. The masses we use are 3421 MeV and 3427 MeV for the $D_s(3^1P_1)$ and $D_s(3^3P_1)$ states, and 3215 MeV and 3220 MeV for the $D(3^1P_1)$ and $D(3^3P_1)$ states, as predicted in our model. The mixing angle is again $\theta\approx 35.3^{\circ}$. The results are given in the table below.
Channel & **Br** & Channel & **Br**\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{1/2}})^+ e^-\overline{\nu}{_e}$ & $(7.24^{+2.65}_{-2.18})\times 10^{-6}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{1/2}})^+ e^-\overline{\nu}{_e}$ & $(2.35^{+0.29}_{-0.28})\times 10^{-6}$\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{3/2}}){^+}e^-\overline{\nu}{_e}$ & $(2.70^{+0.40}_{-0.31})\times 10^{-4}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{3/2}}){^+}e^-\overline{\nu}{_e}$ & $(3.48^{+0.15}_{-0.12})\times 10^{-4}$\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{1/2}})^+\mu^- \overline{\nu}{_\mu }$ & $(7.32^{+2.69}_{-2.21})\times 10^{-6}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{1/2}})^+\mu^- \overline{\nu}{_\mu }$ & $(2.36^{+0.29}_{-0.28})\times 10^{-6}$\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{3/2}}){^+}\mu^- \overline{\nu}{_\mu }$ & $(2.68^{+0.40}_{-0.31})\times 10^{-4}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{3/2}}){^+}\mu^- \overline{\nu}{_\mu }$ & $(3.47^{+0.14}_{-0.12})\times 10^{-4}$\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{1/2}})^+ \tau^- \overline{\nu}{_\tau }$ & $(7.36^{+2.33}_{-2.09})\times 10^{-10}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{1/2}})^+ \tau^- \overline{\nu}{_\tau }$ & $(7.35^{+0.85}_{-0.87})\times 10^{-9}$\
$\overline{B}_s^0\rightarrow D_s(3P{_{1}^{3/2}}){^+}\tau^- \overline{\nu}{_\tau }$ & $(1.62^{+0.18}_{-0.14})\times 10^{-7}$ & $\overline{B}^0\rightarrow D(3P{_{1}^{3/2}}){^+}\tau^- \overline{\nu}{_\tau }$ & $(1.17^{+0.06}_{-0.05})\times 10^{-6}$\
As the table shows, the branching ratios of the $3P$ states are much lower than those of the $2P$ states, which makes their observation challenging in current experiments. In addition, we find the interesting result that the two mixed $3P$ states of the $D$ meson show a discrepancy in the semi-leptonic decay of $\overline{B}^0$, which calls for more data and further study for a more precise result. SUMMARY ======= The accumulated data on charmed and charmed-strange mesons are becoming more and more abundant as colliders continue to run, and the study of higher radial excitations in the charmed and charmed-strange families is becoming an intriguing field. Two of the newly detected states are $D_{sJ}(3040)^+$ and $D_J(3000)^0$, which are very likely the $D_{s}(2P)$ and $D(2P)$ states. These states have so far been produced experimentally in inclusive $e^+e^-$ interactions and the $D\pi$ channel. Within the instantaneous Bethe-Salpeter framework, we have studied the branching ratios of semi-leptonic decays into $D_{sJ}(3040)$ and $D_J(3000)$. Our results indicate that semi-leptonic production from $B_s$ and $B$ can be a good platform to produce a considerable amount of $D_{sJ}(3040)$ and $D_J(3000)$, so we urge the relevant experimental groups to focus on these channels. These phenomenological investigations are important for further experimental study of the $2P$ states of the $D_s$ and $D$ mesons. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11505039, 11575048, 11405004 and 11405037, and in part by PIRS of HIT Nos. Q201504, B201506, A201409, and T201405. Appendix A.
Instantaneous Bethe-Salpeter equation {#appendix-a.-instantaneous-bethe-salpeter-equation .unnumbered} ================================================= We define the B-S wave function as: $$\renewcommand\theequation{A.1} \chi _P(q)=\int d^4 x \; \exp(iq\cdot x)\left\langle 0 | T[\psi_1(\alpha_2x) \overline{\psi}_2(-\alpha _1x) ] |P , \beta \right\rangle,$$ where $\chi_P(q)$ is the B-S wave function of the relevant bound state, $P$ is the momentum of the bound state, and $\beta$ is the quantum index identifying the state other than momentum; $\alpha_1=\frac{m_1}{m_1+m_2}$, $\alpha_2=\frac{m_2}{m_1+m_2}$, $q=\alpha_2 p_1 - \alpha_1 p_2$, where $p_1$, $p_2$ and $m_1$, $m_2$ are the momenta and constituent masses of the quark and anti-quark, respectively. $q_P$ denotes $\frac{q\cdot P}{\sqrt{P^2}}$ and $q_\perp=q_{P_\perp}=q-\frac{q\cdot P}{P^2}P$. The B-S equation in momentum space can be written as: $$\renewcommand\theequation{A.2} \left(\slashed{p}_1-m_1\right)\chi _P(q)\left(\slashed{p}_2+m_2\right)=i\int \frac{d^4k}{(2\pi)^4}V(P, k, q)\chi _P(k).$$ In the instantaneous approximation, the integral kernel takes the simple form: $$\renewcommand\theequation{A.3} V(P, k, q)=V(|k-q|).$$ The three-dimensional wave function can be written as: $$\renewcommand\theequation{A.4} \varphi(q^\mu _{p{_\perp}}) =i\int \frac{dq_p}{2\pi}\chi _P(q).$$ Thus, the B-S equation can be rewritten as: $$\renewcommand\theequation{A.5} \chi _P(q)=S_1(p_1)\eta (q_{P_{\perp}})S_2(p_2),$$ where $$\eta (q_{P^\mu _{\perp}})=\int\frac{d^3k_{P_\perp}}{(2\pi )^3}V(k^\mu _{P_\perp}, q^\mu _{P_\perp})\varphi (k^\mu _{P_\perp}).$$ The full Salpeter equation takes the form: $$\renewcommand\theequation{A.6} \begin{aligned} \left(M-\omega _{1p}-\omega _{2p}\right)\varphi ^{++}(q_{P_\perp})&=\Lambda^+_1(P_{1p_{\perp}}) \eta (q_{P_{\perp}}) \Lambda_2^+ (P_{2p_{\perp}}),\\ (M+\omega _{1p}+\omega _{2p})\varphi ^{--}(q_{P_\perp})&=-\Lambda^-_1(P_{1p_{\perp}}) \eta (q_{P_{\perp}})
\Lambda^-_2 (P_{2p_{\perp}}),\\ \varphi ^{+-}(q_{P_\perp})&=0,\\ \varphi ^{-+}(q_{P_\perp})&=0,\\ \varphi ^{\pm \pm}(q_{P_\perp})&=\Lambda^{\pm}_1(q_{P_\perp}) \frac{\slashed{P}}{M }\varphi(q_{P_\perp}) \frac{\slashed{P}}{M} \Lambda^{\pm}_2(q_{P_\perp}). \end{aligned}$$ In order to perform the numerical integration, we need the explicit form of the integral kernel. In this work, we choose the Cornell potential, which has been widely used for this interaction. The Cornell potential is the sum of a linear scalar interaction and a vector interaction: $$\renewcommand\theequation{A.7} \begin{aligned} V(q)&=V_s(q)+V_v(q)\gamma ^0\otimes \gamma _0,\\ V_s(q)&=-\left(\frac{\lambda }{\alpha }+V_0\right)\delta ^3(q)+\frac{\lambda }{\pi ^2}\frac{1}{(q^2+\alpha ^2)^2},\\ V_v(q)&=-\frac{2}{3\pi ^2}\frac{\alpha _s(q)}{q^2+\alpha ^2},\\ \alpha _s(q)&=\frac{12\pi}{33-2n_f}\frac{1}{\log(a+q^2/\Lambda^2_{QCD})},\\ \end{aligned}$$ where $\alpha _s(q)$ is the running coupling constant, $\lambda$ is the string constant, $a$ and $\alpha$ are phenomenological parameters introduced to avoid divergences when $q^2\sim\Lambda^2_{QCD}$ and $q^2\sim 0$, and $V_0$ is a constant in our model used to fit the data. Appendix B. Wavefunctions for different states {#appendix-b.-wavefunctions-for-different-states .unnumbered} ============================================== In this section, we introduce the wavefunctions for different states.
B.1 Wave function for $^1S_0$ {#b.1-wave-function-for-1s_0 .unnumbered} ----------------------------- The general form of the $^1S_0$ state is: $$\renewcommand\theequation{B.1.1} \varphi(0^-)(\vec{q})=M\left[\frac{\slashed{P}}{M}f_1(\vec{q})+f_2(\vec{q})+\frac{\slashed{q}_\perp}{M}f_3(\vec{q})+\frac{\slashed{P}\slashed{q}_\perp}{M^2}f_4(\vec{q})\right]\gamma _5.$$ Due to the constraint equations in the full Salpeter equation, we have the condition $\varphi^{+-}_{0^-}=\varphi^{-+}_{0^-}=0$. Thus $$\renewcommand\theequation{B.1.2} f_3(\vec{q})=\frac{f_2(\vec{q})M(\omega _2-\omega _1)}{m_1 \omega _2+m_2 \omega_1 }, \quad f_4(\vec{q})=-\frac{f_1(\vec{q})M(\omega _2+\omega _1)}{m_1 \omega _2+m_2 \omega_1 }.$$ Therefore, there are only two independent wavefunctions $f_1(\vec{q})$ and $f_2(\vec{q})$. The relativistic positive-energy wavefunction can be written as $$\renewcommand\theequation{B.1.3} \varphi^{++}(^1S_0)(\vec{q})=a_1\left[\frac{a_2\slashed{P}}{M}+\frac{a_3\slashed{q}_\perp}{M}+\frac{a_4\slashed{q}_\perp\slashed{P}}{M^2}+1\right]\gamma ^5,$$ where $$\begin{aligned} &a_1=\frac{M}{2}\left(f_1(\vec{q})+f_2(\vec{q})\frac{\omega_1 + \omega_2 }{m_1+m_2}\right), \quad a_2=\frac{m_1+m_2}{\omega_1 + \omega_2},\\ &a_3=-M\frac{\omega_1 - \omega_2}{m_1\omega_2 + m_2\omega_1}, \quad a_4=M\frac{m_1+m_2}{m_1\omega_2 + m_2\omega_1}.
\end{aligned}$$ B.2 Wave function for $^1P_1$ {#b.2-wave-function-for-1p_1 .unnumbered} ----------------------------- The general form of the $^1P_1$ state is: $$\renewcommand\theequation{B.2.1} \varphi (^1P_1)(\vec{q}_f)=q_{f\perp}\cdot \varepsilon \left[g_1(\vec{q}_f)+g_2(\vec{q}_f)\frac{\slashed{P}_f}{M_f}+g_3(\vec{q}_f)\slashed{q}_{f\perp}+\frac{\slashed{P}_f\slashed{q}_{f\perp}}{M_f^2}g_4(\vec{q}_f)\right]\gamma _5.$$ The constraint equations result in $$\renewcommand\theequation{B.2.2} g_3(\vec{q}_f)=-\frac{\omega'_1 - \omega'_2}{m'_1\omega'_2 + m'_2\omega'_1}g_1(\vec{q}_f), \quad g_4(\vec{q}_f)=-\frac{(\omega'_1 + \omega'_2)M_f}{m'_1\omega'_2 + m'_2\omega'_1}g_2(\vec{q}_f).$$ Thus the relativistic wavefunction is $$\renewcommand\theequation{B.2.3} \begin{aligned} \varphi^{++}(^1P_1)(\vec{q}_f)=&\frac{q_{f\perp} \cdot \varepsilon}{2} \left[g_1(\vec{q}_f)+\frac{\omega'_1 + \omega' _2}{m'_1+m'_2}g_2(\vec{q}_f)\right]\left[1+\frac{m'_1+m'_2}{\omega'_1 +\omega'_2}\frac{\slashed{P}_f}{M_f}-\frac{\omega'_1- \omega'_2 }{m'_1\omega'_2+m'_2\omega'_1}\slashed{q}_{f\perp} \right.\\ &\left. +\frac{m'_1+m'_2}{m'_1\omega '_2+m'_2\omega'_1}\frac{\slashed{q}_{f\perp} \slashed{P}_f}{M_f} \right]\gamma^5. \end{aligned}$$ The Dirac conjugate form is: $$\renewcommand\theequation{B.2.4} \overline{\varphi}^{++}(^1P_1)(\vec{q}_f)=-\frac{\varepsilon \cdot q_{f\perp}}{2} a_5 \gamma ^5 \left(1+a_7\frac{\slashed{P}_f}{M_f}+a_8\slashed{q}_{f\perp}+a_9\frac{\slashed{P}_f\slashed{q}_{f\perp}}{M_f}\right),$$ where $$\begin{aligned} &a_5=g_1(\vec{q}_f)+g_2(\vec{q}_f)\frac{w'_1+w'_2}{m'_1+m'_2}, \quad a_7=\frac{m'_1+m'_2}{w'_1+w'_2},\\ &a_8=-\frac{w'_1+w'_2}{m'_1w'_2+m'_2w'_1}, \qquad \quad a_9=\frac{m'_1+m'_2}{m'_1w'_2+m'_2w'_1}.
\end{aligned}$$ B.3 Wave function for $^3P_1$ {#b.3-wave-function-for-3p_1 .unnumbered} ----------------------------- In the same way, we have the wavefunction of the $^3P_1$ state: $$\renewcommand\theequation{B.3.1} \begin{aligned} \varphi^{++}(^3P_1)(\vec{q}_f)=&\frac{i}{2M_f}\left[h_1(\vec{q}_f)+\frac{\omega' _1+\omega' _2}{m'_1+m'_2}h_2(\vec{q}_f)\right]\left[1+\frac{m'_1+m'_2}{\omega '_1+\omega '_2}\frac{\slashed{P}_f}{M_f}-\frac{\omega' _1-\omega' _2}{m'_1\omega '_2+m'_2\omega '_1}\slashed{q}_{f\perp}\right.\\ &\left.+\frac{m'_1+m'_2}{m'_1\omega '_2+m'_2\omega'_1}\frac{\slashed{q}_{f\perp} \slashed{P}_f}{M_f}\right]i\epsilon _{\nu \lambda \rho \sigma }\gamma ^\nu P^\lambda_f q^\rho_{f\perp} \varepsilon ^\sigma, \end{aligned}$$ and its Dirac conjugate $$\renewcommand\theequation{B.3.2} \overline{\varphi}^{++}(^3P_1)(\vec{q}_f)=-\frac{i}{2M_f}a_6\epsilon _{\nu \lambda \rho \sigma} \gamma ^\nu P^\lambda_f q_{f\perp}^\rho \varepsilon ^\sigma \left(1+a_7\frac{\slashed{P}_f}{M_f}+a_8 \slashed{q}_{f\perp}+a_9\frac{\slashed{P}_f\slashed{q}_{f\perp}}{M_f}\right),$$ where $$\begin{aligned} &a_6=h_1(\vec{q}_f)+h_2(\vec{q}_f)\frac{w'_1+w'_2}{m'_1+m'_2}. \end{aligned}$$ Appendix C. The form factor {#appendix-c.-the-form-factor .unnumbered} =========================== In this section, we present the form factors for the semi-leptonic decay of $B^0_{s}$ into the $D_s(2P)$ states. For the process into $D_J(2P)$, the form factors take the same form.
$$\begin{aligned} t_1=&&\frac{a_1a_5}{2M^2M_{f1}^2}(2\alpha ^2E_{f1}(E_{f1}^2a_9M+a_2E_{f1}a_8MM_{f1}+a_4E_{f1}a_9 P_{f1}\cdot q+a_3a_8M_{f1}P_{f1}\cdot q)+2\alpha E_{f1}\\ &&\times (-MM_{f1}+2E_{f1}M(E_{f1}a_9+a_2a_8M_{f1})X+ a_4E_{f1}(M_{f1}+2a_9P_{f1}\cdot q)X+a_3(a_7(P_{f1}\cdot q+E_{f1}^2X)\\ &&+a_8M_{f1}(q_\perp^2 +P_{f1}\cdot q)))+ E_{f1}(2(-MM_{f1}+a_3a_7P_{f1}\cdot q+a_3a_8M_{f1}q_\perp^2)X- E_{f1}(a_3E_{f1}a_7\\ &&+E_{f1}a_9M+a_4M_{f1}+a_2a_8MM_{f1}+a_4a_9P_{f1}\cdot q)q_\perp^2 Y+ (a_3E_{f1}a_7+E_{f1}a_9M+a_4M_{f1}\\ &&+a_2a_8MM_{f1}+a_4a_9P_{f1}\cdot q)q_\perp^2 Z),\\ % t_2=&&\frac{a_1a_5}{2M^2M_{f1}^2}E_{f1}(-2\alpha M(\alpha E_{f1}a_9+a_2(a_7+\alpha a_8M_{f1}))+2\alpha a_4a_9q_\perp^2- 2(a_2a_7M+\alpha (a_3E_{f1}a_7\\ &&+2E_{f1}a_9M+a_4M_{f1}+2a_2a_8MM_{f1}+a_4a_9P_{f1}\cdot q)-a_4a_9 q_\perp^2)X+ (a_3E_{f1}a_7+E_{f1}a_9M+a_4M_{f1}\\ &&+a_2a_8MM_{f1}+a_4a_9P_{f1}\cdot q)q_\perp^2 Y),\\ % t_3=&&-\frac{a_1a_5(a_3E_{f1}a_7+E_{f1}a_9M+a_4M_{f1}+a_2a_8MM_{f1}+a_4a_9P_{f1}\cdot q)q_\perp^2 Z}{2MM_{f1}},\\ % t_4=&&\frac{ia_1a_5(a_9(\alpha a_4E_{f1}+M)+a_3(a_7+\alpha a_8M_{f1}))q_\perp^2Z}{2M^2M_{f1}},\\ % t_5=&&\frac{a_1a_6}{2M^2M_{f2}^2}(2\alpha ^2E_{f2}(a_2E_{f2}a_9MM_{f2}^2+a_8MM_{f2}^3+a_3E_{f2}^2a_9P_{f1}\cdot q+a_4E_{f2}a_8M_{f2}P_{f1}\cdot q +E_{f2}\\ &&(E_{f2}-M_{f2})(E_{f2}+M_{f2})(a_3E_{f2}a_9+a_4a_8M_{f2})X)+E_{f2}(2(a_7MM_{f2}^2+a_8MM_{f2}P_{f1}\cdot q\\ &&-a_3P_{f1}\cdot q(M_{f2}+a_9P_{f1}\cdot q)+a_3a_9M_{f2}^2 q_\perp^2)X +E_{f2}(M_{f2}(a_3E_{f2}+a_4a_7M_{f2}-M(E_{f2}a_8+a_2a_9M_{f2}))\\ &&+a_3E_{f2}a_9P_{f1}\cdot q) q_\perp^2 Y)-(M_{f2}(a_3E_{f2}+a_4a_7M_{f2}-M(E_{f2}a_8+a_2a_9M_{f2}))+a_3E_{f2}a_9 P_{f1} \cdot q)\\ &&q_\perp^2 Z+ \alpha (E_{f2}(2(-a_3M_{f2}P_{f1}\cdot q+a_8MM_{f2}P_{f1}\cdot q-a_3a_9(P_{f1}\cdot q)^2+a_3a_9M_{f2}^2q_\perp^2\\ &&+M_{f2}(-a_3E_{f2}^2+E_{f2}^2a_8M+2a_2E_{f2}a_9MM_{f2}+a_8MM_{f2}^2+a_4E_{f2}a_8P_{f1}\cdot q)X\\ &&+a_7M_{f2}^2(M-a_4E_{f2}X)) +E_{f2}(-E_{f2}+M_{f2})(E_{f2}+M_{f2})(a_3E_{f2}a_9+a_4a_8M_{f2})q_\perp^2
Y)\\ &&+(E_{f2}-M_{f2})(E_{f2}+M_{f2})(a_3E_{f2}a_9+a_4a_8M_{f2})q_\perp^2 Z)),\\ % t_6=&&\frac{a_1a_6}{2M^2M_{f2}^2}(-2\alpha ^2E_{f2}^2M(a_2E_{f2}a_9+a_8M_{f2})-2a_3M_{f2}q_\perp^2+2M(a_7P_{f2}\cdot q+a_8M_{f2}q_\perp^2)\\ &&-2(a_7(MM_{f2}^2+a_4E_{f2}P_{f2} \cdot q)-P_{f2} \cdot q(a_2E_{f2}a_9M+a_3M_{f2}-a_8MM_{f2}+a_3a_9P_{f2}\cdot q)\\ &&+a_3a_9M_{f2}^2 q_\perp^2)X+2\alpha (E_{f2}^3(a_4a_7-a_2a_9M)X+a_8MM_{f2}(P_{f2} \cdot q-M_{f2}^2X)-E_{f2}^2(a_7M+(-a_3M_{f2}\\ &&+a_8MM_{f2}+a_3a_9P_{f2} \cdot q)X)+E_{f2}(a_2a_9M(P_{f2} \cdot q-M_{f2}^2X)+a_4a_8M_{f2}(q_\perp^2 - 2 P_{f2} \cdot qX)))\\ &&+\alpha E_{f2}(E_{f2}-M_{f2})(E_{f2}+M_{f2})(a_3E_{f2}a_9+a_4a_8M_{f2})q_\perp^2 Y- E_{f2}(M_{f2}(a_3E_{f2}+a_4a_7M_{f2}\\ &&-M(E_{f2}a_8+a_2a_9M_{f2}))+a_3E_{f2}a_9P_{f2} \cdot q)q_\perp^2Y),\\ % t_7=&&\frac{a_1a_6}{2MM_{f2}^2}(2\alpha ^2E_{f2}M(E_{f2}-M_{f2})(E_{f2}+M_{f2})(a_2E_{f2}a_9+a_8M_{f2})-(a_4a_7-a_2a_9M)(2(P_{f2} \cdot q)^2\\ &&-M_{f2}^2 q_\perp^2(2+Z))+E_{f2}(-2a_7MP_{f2} \cdot q+ q_\perp^2 (-a_8MM_{f2}(2+Z)+a_3(a_9P_{f2} \cdot q Z\\ &&+M_{f2}(2+Z))))+\alpha(E_{f2}^2(2a_7M-a_3a_9 q_\perp^2 Z)+E_{f2} M_{f2}(-2a_7MM_{f2}+2a_3 P_{f2} \cdot q- 4a_8MP_{f2} \cdot q\end{aligned}$$ $$\begin{aligned} &&+a_3a_9M_{f2} q_\perp^2 Z)+a_4a_8M_{f2}(-2(P_{f2} \cdot q)^2+M_{f2}^2 q_\perp^2(2+Z))+E_{f2}^2(-4a_2a_9MP_{f2} \cdot q\\ &&+a_4(2a_7P_{f2} \cdot q-a_8M_{f2} q_\perp^2(2+Z))))), \\ t_8=&&-\frac{a_1a_6}{2M^2M_{f2}^2}i(2E_{f2}(a_4a_7P_{f2} \cdot q+M_{f2}(a_2M+a_4a_8(\alpha P_{f2} \cdot q + q_\perp^2))+E_{f2}(a_7M+a_3a_9(\alpha P_{f2} \cdot q \\ &&+ q_\perp^2)))(\alpha +X)+(a_8MM_{f2}+\alpha E_{f2}(a_3E_{f2}a_9+a_4a_8M_{f2})-a_3(M_{f2}+a_9 P_{f2} \cdot q)) q_\perp^2 Z),\end{aligned}$$ where $E_{f1}$ and $E_{f2}$ are the energies of the $^1P_1$ and $^3P_1$ states, and $M_{f1}$ and $M_{f2}$ are the masses of the $^1P_1$ and $^3P_1$ states. $X=\frac{q\cos \theta}{|\vec{P}_f|}$, $Y=\frac{-1+3\cos^2\theta}{|\vec{P}_f|}$, and $Z=-1+\cos^2\theta$.
[^1]: jiangure@hit.edu.cn
--- abstract: | The canonical trace on the reduced $C^*$-algebra of a discrete group gives rise to a homomorphism from the K-theory of this $C^*$-algebra to the real numbers. This paper studies the range of this homomorphism. For torsion free groups, the Baum-Connes conjecture together with Atiyah’s $L^2$-index theorem implies that the range consists of the integers. We give a direct and elementary proof that if $G$ acts on a tree and admits a homomorphism $\alpha$ to another group $H$ whose restriction $\alpha|_{G_v}$ to every stabilizer group of a vertex is injective, then $$\operatorname{tr}_G(K(C_r^*G))\subset \operatorname{tr}_H(K(C_r^*H)).$$ This follows from a general relative Fredholm module technique. Examples are in particular HNN-extensions of $H$ where the stable letter acts by conjugation with an element of $H$, or amalgamated free products $G=H*_U H$ of two copies of the same group along a subgroup $U$. MSC: 19K (primary); 19K14, 19K35, 19K56 (secondary) author: - | Thomas Schick[^1]\ FB Mathematik — Uni Münster\ Einsteinstr. 62 — 48149 Münster, Germany\ date: 'Last edited: Nov 17, 2000 or later' title: 'The trace on the K-theory of group $C^*$-algebras' --- Introduction ============ Let $G$ be a discrete group. All discrete groups considered in this paper are assumed to be countable. The trace $\operatorname{tr}_G\colon {\mathbb{C}}G\to{\mathbb{C}}\colon \sum_{g\in G} \lambda_g g\mapsto \lambda_1$ (where $1$ is the neutral element of $G$) extends to a trace on the reduced $C^*$-algebra of $G$ and therefore gives rise to a homomorphism $$\operatorname{tr}_G\colon K_0(C^*_{r}G)\to{\mathbb{R}}.$$ If $G$ is torsion free, we have the commutative diagram $$\begin{CD} K_0(BG) @>A>> K_0(C^*_{r}G)\\ @VV{\operatorname{ind}_G}V @VV{\operatorname{tr}_G}V\\ {\mathbb{Z}}@>>> {\mathbb{R}}, \end{CD}$$ where $A$ is the Baum-Connes assembly map. The Baum-Connes conjecture says that $A$ is an isomorphism.
We denote by $\operatorname{ind}_G$ Atiyah’s $L^2$-index, which coincides with the ordinary index [@Atiyah(1976)] and therefore takes values in the integers. Surjectivity of $A$ of course implies that $\operatorname{tr}_G$ is also integer valued. We will denote this consequence of the Baum-Connes conjecture as the *trace conjecture*. It implies by a standard argument that there are no nontrivial projections in $C^*_{r}G$. The trace conjecture was verified directly for free groups using a special Fredholm module which can be assigned to such groups, cf. e.g. [@Effros(1989)]. Based on the ideas of this proof we get the following result. \[Ktraceprop\] Let $H, G$ be discrete countable groups and assume $$\operatorname{tr}_H(K(C^*_rH))\subset A\subset{\mathbb{R}}.$$ Let $\Omega$ and $\Delta$ be sets with commuting $G$-action from the left and $H$-action from the right such that $\Omega$ and $\Delta$ are free $H$-sets. Let $\Omega=\Omega'\cup X$ and assume $\Delta$ and $\Omega'$ are free $G$-sets and $X$ consists of $1\le r<\infty$ $H$-orbits. Assume there is a bijective right $H$-map $\phi\colon \Delta\to\Omega$. Suppose that for every $g\in G$ the set $$R_g:=\{ x\in \Delta\mid \phi(gx)\ne g\phi(x)\}$$ is contained in the union of finitely many $H$-orbits of $\Delta$. Then $$\operatorname{tr}_G(K(C^*_rG))\subset \frac{1}{r} A.$$ It remains now of course to give interesting examples of Theorem \[Ktraceprop\]. For this, we use results of Dicks-Schick [@Dicks-Schick(2000)] and the following definition: Let $G$ be a group which acts on a tree. We say $G$ is *subdued* by a group $H$ if there is a homomorphism $\alpha\colon G\to H$ whose restriction $\alpha|_{G_v}$ to the stabilizer group $G_v:=\{g\in G\mid gv=v\}$ is injective for each vertex $v$ of the tree. Remember that one can translate between groups acting on trees and fundamental groups of graphs of groups in such a way that conjugacy classes of vertex stabilizers correspond to vertex groups. \[ex:subdue\] 1.
\[item:amalg\] If $G=H*_U H$, then $G$ is the fundamental group of a graph of groups consisting of two vertices joined by an edge. Hence there is an action of $G$ on a tree such that the stabilizer group of every vertex is conjugate to one of the two copies of $H$. Using the obvious projection $G\to H$ which is the identity on both factors we see that $G$ is subdued by $H$. 2. \[item:HNN\] Assume $U{<}H$ and $g\in H$. Let $\phi\colon U\to U^g$ be given by conjugation with $g$. Let $G$ be the HNN-extension of $H$ along $\phi$. Then $G$ is the fundamental group of a graph of groups with one vertex (the vertex group being $H$) and therefore acts on a tree such that each vertex stabilizer is conjugate to $H$. We define a homomorphism $\alpha\colon G\to H$ such that the restriction of $\alpha$ to $H$ is the identity and which maps the stable letter $t$ to $g$. This implies that $G$ is subdued by $H$. \[heredtrace\] Assume $G$ is the fundamental group of a graph of groups and is subdued by another group $H$. Then $$\operatorname{tr}_{G}\left( K_0(C^*_{r}[G])\right) \subset \operatorname{tr}_{H} \left(K_0(C^*_{r}[H])\right) .$$ In particular, this applies to the situations of Example \[ex:subdue\]. If $\operatorname{tr}_H(K_0(C^*_r[H]))\subset {\mathbb{Z}}$ then this implies the trace conjecture for $G$. Baum and Connes conjecture [@Baum-Connes(1982) p.32] that for groups with torsion the range should be contained in ${\mathbb{Q}}$. Our results also support this assertion (Baum and Connes make in fact a more precise and stronger conjecture, which however was disproved by Roy [@Roy(1998)]). \[remark:otherway\] In some cases, it is possible to derive the conclusions of Theorem \[heredtrace\] from elaborate K-theory calculations. One can use the exact sequence for the fundamental group of a graph of groups [@Pimsner(1986) Theorem 18]. In some cases elementary properties of the trace then imply Theorem \[heredtrace\].
One might hope to give a general treatment of the range of the traces in these cases as is done for certain HNN-extensions in [@Pimsner(1985)] and [@Exel(1987)]. However, even in the case of HNN-extensions, in general those results are difficult to interpret and it is not clear that [@Pimsner(1985)] or [@Exel(1987)] implies Example \[ex:subdue\] \[item:HNN\]. Moreover, observe that Pimsner uses deep KK-theoretic methods to derive the exact sequence for a graph of groups. In contrast, our derivation is elementary. To apply Theorem \[heredtrace\], essentially we have to know the trace conjecture for $H$. The obvious sufficient condition is that $H$ fulfills the Baum-Connes conjecture, but it is also enough that $H$ is a subgroup of such a group. If $H$ satisfies the Baum-Connes conjecture with coefficients, then the same is true for each group $G(v)$ (since they are subgroups of $H$), hence by [@Oyono-Oyono(1998)] $G$ fulfills the Baum-Connes conjecture, and the statement of Theorem \[heredtrace\] follows also immediately from this fact. However, these arguments do not apply to the Baum-Connes conjecture without coefficients. Lafforgue proves that cocompact discrete subgroups e.g. of $Sl_3({\mathbb{R}})$ or $Sp(n,1)$ satisfy the Baum-Connes conjecture without coefficients [@Lafforgue(1998)]. However, it is unknown whether the Baum-Connes conjecture with coefficients is true for these groups. Therefore, the consequences of Theorem \[heredtrace\] are not included in the knowledge about the Baum-Connes conjecture for a non-cocompact subgroup $H$ of $Sl_3({\mathbb{R}})$ or $Sp(n,1)$ contained in a cocompact torsion-free subgroup. In particular, for such an $H$ every $H*_U H$ fulfills the trace conjecture, but it is not clear whether it fulfills the Baum-Connes conjecture. 
The method described in [@Effros(1989)] was used by Linnell [@Linnel(1993); @Linnell(1998)] to prove the Atiyah conjecture about the integrality of $L^2$-Betti numbers for free groups, and, starting from this, for many other groups. We investigate the Atiyah conjecture and obtain generalizations of Linnell’s results in [@Schick(1999); @Dicks-Schick(2000)]. The trace conjecture for the K-theory of group $C^*$-algebras {#sec:trace} ============================================================= In this section we prove Theorem \[Ktraceprop\] and Theorem \[heredtrace\]. We first show how Theorem \[heredtrace\] follows from Theorem \[Ktraceprop\]. The method for this was developed by Dicks and Schick in [@Dicks-Schick(2000)]. For the convenience of the reader we repeat the easy proof of the special case we are concerned with here. Assume $G$ acts on the tree $T$ with set of vertices $V$ and set of edges $E$. We choose an arbitrary $v_0 \in V$. Let $\{\ast\}$ be a trivial $G$-set. Let $\tilde\phi\colon V \to E\,\vee\,\{\ast\}$ denote the map which assigns to each $v \in V$ the last edge in the $T$-geodesic from $v_0$ to $v$, where this is taken to be $\ast$ if $v = v_0$. By Julg-Valette [@Julg-Valette(1984)], $\tilde\phi$ is bijective, and, for all $v \in V$, $g \in G$, we have $\tilde\phi(gv) = g\tilde\phi(v)$ if and only if $v$ is not in the $T$-geodesic from $v_0$ to $g^{-1}v_0$. Define now $\Delta:=V\times H$, $\Omega':=E\times H$ and $\Omega:=(E{\amalg}\{\ast\})\times H$. We define the $G$- and the $H$-action on $\Delta$ and $\Omega'$ by setting $$g(x,u)h:= (gx,\alpha(g)uh)\qquad\forall g\in G,\;x\in T,\; u,h\in H.$$ Since the restriction of $\alpha$ to each stabilizer group is injective, this is a free $G$- and of course also a free $H$-action, and they commute. Extend the action to $\Omega$ by $g(\ast,u)h=(\ast,uh)$ for $g\in G$ and $u,h\in H$.
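As an illustrative aside (not part of the paper's argument), the defining property of the Julg-Valette map $\tilde\phi$ can be checked on a small finite tree: a breadth-first search from $v_0$ assigns to every other vertex its parent edge, which is exactly the last edge of the $T$-geodesic from $v_0$, and the resulting map $V\to E\amalg\{\ast\}$ is a bijection. A minimal Python sketch (all names are hypothetical):

```python
from collections import deque

def julg_valette(adj, v0):
    """Assign to each vertex the last edge of the geodesic from v0
    (its BFS parent edge); the base vertex v0 itself is sent to '*'."""
    phi = {v0: "*"}
    queue = deque([v0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in phi:
                phi[w] = frozenset({v, w})  # parent edge of w
                queue.append(w)
    return phi

# A small tree: the path 0-1-2 with an extra branch 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
phi = julg_valette(adj, 0)
edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({1, 3})}
# phi is a bijection from the vertex set onto the edge set together with '*':
assert set(phi.values()) == edges | {"*"} and len(phi) == len(adj)
```

In a tree the BFS parent edge of $w$ coincides with the last edge of the unique geodesic from $v_0$ to $w$, which is why this simple traversal reproduces $\tilde\phi$.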
We define $\phi\colon \Delta=V\times H\to\Omega=(E{\amalg}\{\ast\})\times H$ by $$\phi(v,u)= (\tilde\phi(v),u).$$ Since $\tilde\phi$ is bijective, the same is true for $\phi$. Moreover, for fixed $g\in G$ we have $\phi(g(v,u))=(\phi(gv),\alpha(g)u) = (g\phi(v),\alpha(g)u)= g\phi(v,u)$ exactly if $\phi(gv)= g\phi(v)$. Therefore, the set $R_g$ of Theorem \[Ktraceprop\] is contained in the union of the finitely many $H$-orbits of $\Delta$ determined by the $T$-geodesic from $v_0$ to $g^{-1}v_0$. Theorem \[heredtrace\] now follows from Theorem \[Ktraceprop\]. We are now going to prove Theorem \[Ktraceprop\]. We use the language of Hilbert $A$-modules (for a $C^*$-algebra $A$), compare e.g. [@Blackadar(1986) Section 13]. \[defofE\] Let $T$ be a free set of generators of the free $H$-set $\Delta$. Then we can form the Hilbert $C^*_rH$-module $E:=l^2(T){\otimes}_{\mathbb{C}}C^*_r H$. Of course, $E$ is nothing but the Hilbert sum of copies of $C^*_rH$ indexed by $T$ (in the sense of Hilbert $C^*_rH$-modules). Moreover, $E{\otimes}_{C^*_r H} l^2 H\cong l^2(\Delta)$, and $E$ is in a natural way a subset of $l^2(\Delta)$ because $C^*_rH\subset l^2 H$. The module $E$ as a subset of $l^2(\Delta)$ does not depend on the choice of the basis $T$. Let ${{\mathcal{B}}}(E)$ be the set of bounded adjointable Hilbert $C^*_rH$-module homomorphisms. The map $${{\mathcal{B}}}(E)\to {{\mathcal{B}}}(l^2(\Delta))\colon A\mapsto A_\Delta:=A{\otimes}1$$ is an injective algebra homomorphism by [@Blackadar(1986) p. 111], therefore an isometric injection of $C^*$-algebras [@Kadison-Ringrose(1983) 4.1.9]. Observe that the image of ${{\mathcal{B}}}(E)$ commutes with the right action of $H$ on $l^2(\Delta)$ and therefore is contained in the corresponding von Neumann algebra with its canonical trace. We will prove in Lemma \[inclusion\] that ${\mathbb{C}}G\subset {{\mathcal{B}}}(E)$. It follows that the closure of ${\mathbb{C}}G$ in ${{\mathcal{B}}}(E)$ and ${{\mathcal{B}}}(l^2(\Delta))$ coincides. 
Since $\Delta$ is a free $G$-set, this closure is isomorphic to $C^*_rG$. \[inclusion\] We have a natural inclusion ${\mathbb{C}}G\subset{{\mathcal{B}}}(E)$ which extends the action of $G$ on ${\mathbb{C}}\Delta\subset E$, in particular it is compatible with the injection ${\mathbb{C}}G\subset{{\mathcal{B}}}(l^2\Delta)$. First observe that $E$ is a closure of the algebraic tensor product of $l^2(T)$ and $C^*_rH$. Moreover, ${\mathbb{C}}[T]$ is dense in $l^2(T)$ and ${\mathbb{C}}H$ is dense in $C^*_rH$. Therefore ${\mathbb{C}}\Delta={\mathbb{C}}[T]{\otimes}_{\mathbb{C}}{\mathbb{C}}H$ is a dense subset of $E$. Now fix $g\in G$. Since $\Delta=\bigcup_{t\in T} tH$, where the union is disjoint, for each $t\in T$ we get unique elements $t_{g,t}\in T$, $h_{g,t}\in H$ such that $gt=t_{g,t}h_{g,t}$. Since the actions of $G$ and $H$ commute, $gt=gt'h$ implies $t=t'$. The map $\alpha_g\colon T\to T\colon t\mapsto t_{g,t}$ therefore is a bijection. Pick $$x=\sum_{t\in T}tv_t, \;x'=\sum_{t\in T}t v'_t\in {\mathbb{C}}[T]{\otimes}_{{\mathbb{C}}} {\mathbb{C}}H, \quad\text{with }v_t,v'_t\in {\mathbb{C}}H.$$ By linearity we get $$gx=\sum_{t\in T} t_{g,t}h_{g,t}v_t\quad\text{and}\quad gx'=\sum_{t\in T} t_{g,t}h_{g,t}v'_t.$$ As is the convention in the theory of Hilbert $A$-modules, all of our inner products are linear in the second variable. Taking now the $C^*_rH$-valued inner product we get (adjoint and products of elements in ${\mathbb{C}}H\subset C^*_r H$ are taken in the sense of the $C^*$-algebra) $$\begin{split} {\langle x,x' \rangle}_{C^*_rH} & = \sum_{t\in T}v_t^*v_t' = \sum_{t\in T} (h_{g,t}v_t)^* (h_{g,t}v_t')\\ &= \sum_{t\in T} (h_{g,\alpha_g^{-1}(t)}v_{\alpha_g^{-1}(t)})^* (h_{g,\alpha_g^{-1}(t)}v'_{\alpha_g^{-1}(t)}) = {\langle gx,gx' \rangle}_{C^*_rH} \end{split}$$ (here we used that $H$ acts unitarily, i.e. $h_{g,t}^*=h_{g,t}^{-1}$). Hence $G$ acts $C^*_rH$-isometrically and this action extends to $E$. 
By linearity we get a $*$-algebra homomorphism ${\mathbb{C}}G\to{{\mathcal{B}}}(E)\to {{\mathcal{B}}}(l^2\Delta)$. The composition is injective, therefore the same is true for the first map. The above reasoning implies in the same way: For $n\in{\mathbb{N}}$ we have canonical injections of $*$-algebras $$M_n({\mathbb{C}}G)\subset M_n(C^*_r G)\subset {{\mathcal{B}}}(E^n)\subset {{\mathcal{B}}}(l^2\Delta^n)^H,$$ where ${{\mathcal{B}}}(l^2\Delta^n)^H$ denotes the operators which commute with the right action of $H$. We have to compute the $G$-trace of operators in $M_n(C^*_r G)$, and we want to express this in terms of the $H$-trace of a suitable other operator. To this end, we recall the following definition: \[deftrH\] An operator $A\in{{\mathcal{B}}}(E)$ is of $H$-trace class if its image $A_{l^2\Delta}\in{{\mathcal{B}}}(l^2(\Delta))^H$ is of $H$-trace class in the sense of the von Neumann algebra (compare e.g. [@Schick(1998c) 2.1]), i.e. if (with ${\left\lvert A\right\rvert}=\sqrt{A^*A}$) $$\sum_{t\in T} {\langle t,{\left\lvert A_{l^2\Delta}\right\rvert}t \rangle}_{l^2(\Delta)}< \infty.$$ Then also $ \sum_{t\in T} {\langle t,A_{l^2\Delta}t \rangle}_{l^2(\Delta)}$ converges and we set $$\operatorname{tr}_H(A):= \sum_{t\in T} {\langle t,A_{l^2\Delta}t \rangle}_{l^2(\Delta)}.$$ For $A=(A_{ij})\in{{\mathcal{B}}}(E^n)=M_n({{\mathcal{B}}}(E))$ we set $$\operatorname{tr}_H(A)= \sum_{i=1}^n \operatorname{tr}_H(A_{ii}),$$ if ${\left\lvert A_{l^2\Delta}\right\rvert}$ is of $H$-trace class (with the obvious definition for this). \[computetrH\] Let $A,B,C\in{{\mathcal{B}}}(E^n)$ and $A$ be of $H$-trace class. The trace class operators form an ideal inside ${{\mathcal{B}}}(E^n)$ and we have $\operatorname{tr}_H(AB)=\operatorname{tr}_H(BA)$.
Moreover $$\begin{aligned} {\left\lvert\operatorname{tr}_H(A+B)\right\rvert} \le & \operatorname{tr}_H({\left\lvert A\right\rvert}) + \operatorname{tr}_H({\left\lvert B\right\rvert})\\ \operatorname{tr}_H({{\left\lvert CAB\right\rvert}})\le & {\left\lVert C\right\rVert}\cdot{\left\lVert B\right\rVert}\cdot\operatorname{tr}_H({\left\lvert A\right\rvert}). \end{aligned}$$ These are standard properties of the (von Neumann) trace, compare [@Schick(1998c) 2.3], [@Dixmier(1969) Théorème 8 and Corollaire 2 on p. 106]. Set $S:=\phi(T)$. This is an $H$-basis for the free $H$-set $\Omega$. Similarly to $E$ we can build $F:=l^2(S){\otimes}_{\mathbb{C}}C^*_rH\subset l^2(\Omega)$. If $S':=S\cap\Omega'$ and $S'':=S\cap X$ (i.e. $S'$ is an $H$-basis for $\Omega'$ and $S''$ is an $H$-basis for $X$) then with $F':=l^2(S'){\otimes}_{\mathbb{C}}C^*_rH$ and $F'':=l^2(S''){\otimes}_{\mathbb{C}}C^*_rH$ we get a direct sum decomposition of Hilbert $C^*_rH$-modules $$F= F'\oplus F''.$$ As in the case of $E$ we get a canonical inclusion $${\mathbb{C}}G\subset C^*_rG\subset {{\mathcal{B}}}(F')\subset {{\mathcal{B}}}(F)\subset {{\mathcal{B}}}( l^2\Omega)$$ (we extend the action of $C^*_rG$ to all of $F$ by setting it to zero on $F''$). This composition is a non-unital $*$-algebra homomorphism. Corresponding statements hold for matrices. Denote the image of $A\in M_n(C^*_rG)$ in ${{\mathcal{B}}}(E^n)$ with $A_\Delta$ and in ${{\mathcal{B}}}(F^n)$ with $A_\Omega$. We therefore have $A_\Omega=A_{\Omega'}\oplus 0$ with the obvious notation. The bijection $\phi\colon \Delta\to\Omega$ induces a unitary map of Hilbert spaces $\phi\colon l^2(\Delta)^n\to l^2(\Omega)^n$. Since $\phi\colon \Delta\to \Omega$ is $H$-equivariant, the same is true for the unitary map. Moreover, we get a Hilbert $C^*_rH$-module unitary map $\phi\colon E^n\to F^n$.
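The statements of Lemma \[computetrH\] have familiar finite-dimensional analogues, with $\operatorname{tr}_H$ replaced by the matrix trace and $\operatorname{tr}{\left\lvert A\right\rvert}$ computed as the sum of singular values. A numerical sketch (illustrative only, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
           for _ in range(3))

def tr_abs(M):
    """tr|M| = sum of the singular values of M."""
    return np.linalg.svd(M, compute_uv=False).sum()

def op_norm(M):
    """Operator norm = largest singular value of M."""
    return np.linalg.svd(M, compute_uv=False).max()

# tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# |tr(A+B)| <= tr|A| + tr|B|
assert abs(np.trace(A + B)) <= tr_abs(A) + tr_abs(B)
# tr|CAB| <= ||C|| ||B|| tr|A|
assert tr_abs(C @ A @ B) <= op_norm(C) * op_norm(B) * tr_abs(A)
```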
One key observation is now (this is an extension of the corresponding observation in the classical proof for the free group): \[GtoHlemma\] Suppose $A\in M_n(C^*_r G)\subset {{\mathcal{B}}}(E^n)$ is such that the Hilbert $C^*_r H$-module morphism $A_\Delta - \phi^*A_\Omega \phi\colon E^n\to E^n$ is of $H$-trace class. Then $$\operatorname{tr}_{G}(A)= \frac{1}{r} \operatorname{tr}_{H}( A_\Delta - \phi^*A_\Omega \phi) .$$ Observe that $\phi$ is diagonal and traces are the sum over the diagonal entries. Therefore we may assume that $n=1$. Since $\Delta$ is a free $G$-set, for every $x\in \Delta$ (which we identify with the element of $l^2(\Delta)$ which is $1$ at $x$ and zero everywhere else) we have $${\langle x,A_\Delta x \rangle}_{l^2(\Delta)} = \operatorname{tr}_{G}(A)$$ (simply identify $Gx$ with $G$ and the left and right hand sides become identical). Similarly, since $A_\Omega= A_{\Omega'}\oplus 0$ on $l^2(\Omega')^n\oplus l^2(X)^n$ $${\langle \phi x,A_\Omega \phi x \rangle}_{l^2(\Omega)}=\begin{cases} \operatorname{tr}_{G}(A); & \text{if }\phi(x)\in\Omega'\\ 0; & \text{if } \phi(x)\in X .\end{cases}$$ Moreover $$\begin{split} \operatorname{tr}_{H}( A_\Delta - \phi^*A_\Omega \phi) & = \sum_{t\in T} {\langle t,A_\Delta t \rangle}_{l^2(\Delta)} - {\langle t,\phi^* A_\Omega \phi(t) \rangle}_{l^2(\Delta)} \\ & = \sum_{\phi(t)\in X\cap S''} \operatorname{tr}_{G}(A) = r\operatorname{tr}_{G}(A) \end{split}$$ since ${\left\lvert X\cap S''\right\rvert}$ is the number of $H$-orbits in $X$, i.e. $r$. All other summands cancel each other out. Because $K(C^*_rG)$ is generated by projections $P\in M_n(C^*_rG)\subset M_n({{\mathcal{N}}}G)$ and the trace we have to compute is exactly $\operatorname{tr}_G(P)$, we are tempted to apply Lemma \[GtoHlemma\] to such a $P$. A problem is that it is hard to check whether the trace class condition is fulfilled in general. To circumvent these difficulties recall the following fact (compare e.g.
[@Connes(1994) III.3, Proposition 3]): \[holomclosed\] Let $B$ be a $C^*$-algebra and $U\subset B$ a dense $*$-subalgebra that is closed under holomorphic functional calculus. Then the inclusion induces an isomorphism $$K(U)\cong K(B).$$ In particular, if $B=C^*_rG$ and $U$ is closed under holomorphic functional calculus and contains ${\mathbb{C}}G$, then the ranges of the canonical trace applied to $K(C^*_rG)$ and $K(U)$ coincide. As algebra $U$ we will use the closure under holomorphic functional calculus of ${\mathbb{C}}G\subset{{\mathcal{B}}}(E)$. This of course fulfills the conditions of Proposition \[holomclosed\]. It remains to check: \[Utrclass\] Let $x\in M_n(U)\subset M_n(C^*_r G)\subset {{\mathcal{B}}}(E^n)$, where $U$ is the closure under holomorphic functional calculus of ${\mathbb{C}}G$ in $C^*_rG$. Then $x-\phi^*x\phi\colon E\to E$ is of $H$-trace class and $C^*_rH$-compact. Start with $g\in G\subset{\mathbb{C}}G$. Since $R_g$ as defined in Theorem \[Ktraceprop\] is contained in finitely many $H$-orbits, $gt=\phi^*g\phi t$ for all but finitely many $t\in T$. In particular $g-\phi^*g\phi$ is zero outside the $C^*_rH$-submodule of $E$ spanned by this finite number of elements of $T$, i.e. it can be nonzero only on this submodule, which is isomorphic to $(C^*_rH)^N$ for some $N\in{\mathbb{N}}$. Since $\operatorname{id}\colon (C^*_rH)^N\to (C^*_rH)^N$ is of finite rank in the sense of Hilbert $C^*_rH$-module morphisms, the same is true for $g-\phi^*g\phi$. Finite rank operators form a subspace, therefore the same is true if we replace $g$ by $v\in{\mathbb{C}}G\subset {{\mathcal{B}}}(E)$. Passage to finite matrices preserves the finite rank property. Finite rank implies $H$-trace class and $C^*_rH$-compactness. In particular, all operators in $M_n({\mathbb{C}}G)$ give rise to $C^*_r H$-compact operators, which also are of $H$-trace class.
The map $x\mapsto x-\phi^*x\phi$ is norm continuous and the compact operators form a closed ideal, therefore $x-\phi^*x\phi$ is compact even for arbitrary $x\in M_n(C^*_r G)$. Assume $A\in M_n(C^*_r G)$ and $0\ne\xi\notin\operatorname{Spec}(A)$. Since the homomorphism $A\mapsto A_\Delta$ is unital, we get $$\left((\xi-A)^{-1}\right)_\Delta= (\xi-A_\Delta)^{-1}.$$ Similarly $\left((\xi-A)^{-1}\right)_{\Omega'} = (\xi-A_{\Omega'})^{-1}$. Consequently $$\left( (\xi-A)^{-1}\right)_\Omega = \left( (\xi-A)^{-1}\right)_{\Omega'}\oplus 0 = (\xi-A_\Omega)^{-1} - \xi^{-1}P$$ where $P\colon F\to F$ is the projection onto $F''$. Note that $(\xi-A_\Omega)^{-1}$ acts by multiplication with $\xi^{-1}$ on $F''$; here we need the assumption $\xi\ne 0$. Since $T''$ is finite, $P$ is of finite rank as Hilbert $C^*_rH$-module morphism. Suppose now $f$ is a function that is holomorphic in a neighborhood of $\operatorname{Spec}(A)$. Let $\Gamma$ be a loop around $\operatorname{Spec}(A)$, chosen so that it does not meet $0\in{\mathbb{C}}$. Then $$\begin{split} f(A)_\Delta = & \left(\int_\Gamma f(\xi)(\xi-A)^{-1}\;d\xi\right)_\Delta = \int_\Gamma f(\xi)(\xi-A_\Delta)^{-1}\;d\xi\\ f(A)_\Omega = & \int_\Gamma f(\xi)\left((\xi-A_\Omega)^{-1}-\xi^{-1}P\right)\;d\xi. \end{split}$$ For $u,v\in{{\mathcal{B}}}(E)$ and $\xi\notin\operatorname{Spec}(u)\cup\operatorname{Spec}(v)$ we have $$\begin{split} (\xi-u)^{-1}-(\xi-v)^{-1} = &(\xi-u)^{-1}(\xi-v-(\xi-u))(\xi-v)^{-1}\\ = & (\xi-u)^{-1}(u-v)(\xi-v)^{-1}. \end{split}$$ Therefore $$\begin{gathered} f(A)_\Delta-\phi^*f(A)_\Omega\phi =\phi^*P\phi\underbrace{\int_{\Gamma} f(\xi)\xi^{-1}\;d\xi}_{\in{\mathbb{C}}}\\ + \int_\Gamma \underbrace{f(\xi)(\xi-A_\Delta)^{-1}}_{=:f_\Delta(\xi)} \underbrace{(A_\Delta-\phi^*A_\Omega\phi)}_{=:A_0} \underbrace{(\xi-\phi^*A_\Omega\phi)^{-1}}_{=:f_\Omega(\xi)}\;d\xi.
\end{gathered}$$ As a consequence of Lemma \[computetrH\] we have $$\begin{split} \operatorname{tr}_H {\left\lvert\int_\Gamma f_\Delta(\xi) A_0 f_\Omega(\xi)\;d\xi\right\rvert} \le & \int_\Gamma \operatorname{tr}_H({\left\lvert f_\Delta(\xi) A_0 f_\Omega(\xi)\right\rvert})\; d\xi \\ \le & \operatorname{tr}_H({\left\lvertA_0\right\rvert}) \int_\Gamma {\left\lVertf_\Delta(\xi)\right\rVert}\cdot {\left\lVertf_\Omega(\xi)\right\rVert}\;d\xi. \end{split}$$ Since the operator valued functions $f_\Delta$ and $f_\Omega$ are norm-continuous, $f(A)_\Delta-\phi^*f(A)_\Omega\phi$ is of $H$-trace class if $A_0$ is of $H$-trace class, in particular if $A\in M_n({\mathbb{C}}G)$. This concludes the proof. We determine now the range of the trace on the dense and holomorphically closed subalgebra $U$ of $C^*_r G$. Since the trace class condition is fulfilled, it only remains to calculate $\operatorname{tr}_H(P_\Delta-\phi^*P_\Omega\phi)$ for a projection over $U$, and Theorem \[Ktraceprop\] follows from Lemma \[GtoHlemma\]. \[traceindex\] Let $E$ be the Hilbert $C^*_rH$-module introduced in Definition \[defofE\] and $P,Q\in {{\mathcal{B}}}(E)$ be projections such that $P-Q$ is of $H$-trace class and compact in the sense of Hilbert $C^*_rH$-module morphisms. Then $$w:=\left( (PE\oplus QE), 1,\left(\begin{smallmatrix}0 & PQ\\ QP & 0\end{smallmatrix}\right)\right)$$ is a Kasparov triple (in the sense of [@Blackadar(1986) 17.1.1]) representing an element in $KK({\mathbb{C}},C^*_rH)\cong K_0(C^*_rH)$ and $$\operatorname{tr}_H(P-Q) = \operatorname{ind}_H(w) = \operatorname{tr}_H([w]),$$ where $\operatorname{tr}_H(P-Q)$ is to be understood in the sense of Definition \[deftrH\], whereas $\operatorname{tr}_H([w])$ is the canonical trace defined on $K_0(C^*_rH)$.
Using Lemma \[computetrH\] and the fact that $P^2=P$, $Q^2=Q$, $(1-P)P=0$, and $\operatorname{tr}_H(XY)=\operatorname{tr}_H(YX)$, we conclude $$\begin{split} \operatorname{tr}_H(P-Q) & = \operatorname{tr}_H(P^2(P-Q)) + \operatorname{tr}_H((1-P)(P-Q^2))\\ &= \operatorname{tr}_H(P(P-Q)P) - \operatorname{tr}_H(Q(1-P)Q)\\ & = \operatorname{tr}_H(P- PQP) - \operatorname{tr}_H(Q-QPQ). \end{split}$$ Let $\alpha_P\colon PE\to PE$ be the orthogonal projection with image $PE\cap\ker(QP)$, and $\alpha_Q\colon QE\to QE$ the orthogonal projection with image $QE\cap \ker(PQ)$. Observe that $\alpha_P=(P-PQP)\alpha_P$. Therefore $\alpha_P$ is of $H$-trace class, since the same is true for $P-PQP$. In the same way we see that $\alpha_Q$ is of $H$-trace class. Set $$\begin{split} T_0 &:= \operatorname{id}_{PE}-PQP -\alpha_P\colon PE\to PE\\ T_1 &:= \operatorname{id}_{QE}-QPQ -\alpha_Q\colon QE\to QE. \end{split}$$ Then $$\label{eq:single_out_kernel} \begin{split} \operatorname{tr}_H(P-Q) &= \operatorname{tr}_H(P-PQP)-\operatorname{tr}_H(Q-QPQ)\\ &= \operatorname{tr}_H(T_0)-\operatorname{tr}_H(T_1) +\operatorname{tr}_H(\alpha_P)-\operatorname{tr}_H(\alpha_Q). \end{split}$$ Now $QP\colon PE\to QE$ is a bounded operator with adjoint $PQ\colon QE\to PE$, and $$\label{eq:ker_formula} \begin{split} \operatorname{tr}_H(\alpha_P) &= \dim_H(\ker(QP\colon PE\to QE))\\ \operatorname{tr}_H(\alpha_Q) &= \dim_H(\ker(PQ\colon QE\to PE))\\ &= \dim_H(\operatorname{coker}(QP\colon PE\to QE)). \end{split}$$ For a complemented submodule $X$, one defines $\dim_H(X):=\operatorname{tr}_H(\operatorname{pr}_X)$, where $\operatorname{pr}_X$ is the orthogonal projection onto $X$. 
Since $QP\alpha_P=0$ and $PQ\alpha_Q=0$, and the latter implies $0=(PQ\alpha_Q)^*=\alpha_Q QP$, we have $$\begin{gathered} QPT_0= QP(\operatorname{id}_{PE}-PQP-\alpha_P)\\ =\operatorname{id}_{QE} QP-QPQ^2P-\alpha_Q QP = T_1QP.\end{gathered}$$ Moreover, $\ker(QP\colon PE\to QE)\subset \ker(T_0)$, since $QP (Px)=0$ implies $\alpha_P(Px)=Px$, and in the same way we conclude $\ker(PQ)=\ker((QP)^*)\subset \ker(T_1)=\ker(T_1^*)$. It follows that $QP$ “conjugates” $T_0$ and $T_1$, hence by [@Schick(1998c) Proposition 2.6] (which goes back to a corresponding result in [@Atiyah(1976) p. 67]) $$\operatorname{tr}_H(T_0)=\operatorname{tr}_H(T_1).$$ Using Equations \[eq:single\_out\_kernel\] and \[eq:ker\_formula\] we arrive at $$\operatorname{tr}_H(P-Q) = \operatorname{ind}_H(QP\colon PE\to QE)$$ with the obvious definition of $\operatorname{ind}_H$. This is exactly the $H$-index in the graded sense of the operator $F:=\left(\begin{smallmatrix}0 & PQ\\ QP & 0\end{smallmatrix}\right)\colon PE\oplus QE\to PE\oplus QE$ (where $PE$ is the positive and $QE$ the negative part of the graded Hilbert $C^*_rH$-module $PE\oplus QE$). It only remains to check that $w$ fulfills all the axioms of Kasparov triples. Since the action of ${\mathbb{C}}$ is unital and the operator is self-adjoint, this amounts to checking that $1-F^*F$ and $1-FF^*$ are compact in the sense of Hilbert $C^*_rH$-module morphisms. Now $F^*F=F^2=FF^*= \left( \begin{smallmatrix}PQP & 0\\ 0 & QPQ \end{smallmatrix}\right)$. Since $P-Q$ is compact, the same is true for $P(P-Q)P=P-PQP\colon E\to E$. Then also the composition with the inclusion of $PE$ into $E$ and the projection $P\colon E\to PE$ is compact. This operator coincides with $1-PQP\colon PE\to PE$. Similarly $1-QPQ\colon QE\to QE$ is compact. This concludes the proof.
To finish the proof of Theorem \[Ktraceprop\] observe that by Proposition \[holomclosed\] it suffices to compute $\operatorname{tr}_G(P)$ if $P\in M_n(U)$ is a projection, where $U$ is the holomorphic closure of ${\mathbb{C}}G\subset C^*_rG$. Since $A\to A_\Delta$ and $A\to A_\Omega$ are $*$-algebra homomorphisms, $P_\Delta$ and $P_\Omega$ are projections. Now Lemma \[Utrclass\] implies that we can apply Lemma \[traceindex\] to $P_\Delta-\phi^*P_\Omega\phi$. By assumption $\operatorname{tr}_H(K_0(C^*_rH))\subset A$, therefore $\operatorname{tr}_H(P_\Delta - \phi^*P_\Omega\phi)\in A$. By Lemma \[GtoHlemma\] then $\operatorname{tr}_G(P)\in\frac{1}{r} A$, and this concludes the proof of Theorem \[Ktraceprop\]. We use the language of Hilbert modules and Kasparov triples only for convenience. Observe that we don’t use much more than the definition: the single theorem we use is that our Kasparov triples indeed give rise to K-theory elements, and this is not very deep. By [@Blackadar(1986) 17.5.5] $KK({\mathbb{C}},C^*_rG)$ and $K_0(C_r^*G)$ are isomorphic, but to construct the map much less is needed. (Essentially we only have to perturb $QP$ such that kernel and cokernel are finitely generated projective modules over $C^*_rG{\otimes}\mathbb{K}$.) Final remarks ============= We hope that Theorem \[Ktraceprop\] can be applied to more situations than the one described in Theorem \[heredtrace\]. However, in [@Schick(1999)] the situation where $H$ is trivial (and consequently $X$ is finite) is classified. It turns out that in this setting the assumptions of Theorem \[Ktraceprop\] can be fulfilled exactly if $G$ is a finite extension of a free group. But then one has a transfer homomorphism for the K-theory of the reduced $C^*$-algebras relating the trace for $G$ to the trace of the free subgroup of finite index. One easily computes the range of the trace using this (and the known trace conjecture for the free group).
The range is $\frac{1}{d}{\mathbb{Z}}$ where $d$ is the smallest index of a free subgroup. Therefore it is not necessary to give details of the approach using Theorem \[Ktraceprop\], which gives the same result. *Acknowledgments*. I am very much indebted to Warren Dicks. Without his help, I would have been able to apply Theorem \[Ktraceprop\] only in very basic situations. Moreover, I thank Nigel Higson and John Roe, who pointed out that the trace of a difference of two projections on a Hilbert space is an index and therefore an integer, and suggested that the same should work in a more general setting, inspiring the proof of Lemma \[traceindex\]. I also thank the referee for useful comments, in particular for pointing out Remark \[remark:otherway\]. [10]{} : “[*Elliptic operators, discrete groups and von [N]{}eumann algebras*]{}”, Astérisque 32, 43–72 (1976) : “[*Geometric K-theory for [L]{}ie groups and foliations*]{}”, Preprint IHES and Brown University (1982), to appear in L’enseignement mathématique : “[*K-theory for operator algebras*]{}”, vol. 5 of [*M.S.R.I. Monographs*]{}, Springer (1986) : “[*Noncommutative geometry*]{}”, Academic Press (1994) : “[*The Atiyah conjecture for $1$-relator groups and graphs of groups*]{}”, in preparation : “[*Les algèbres d’opérateurs dans l’espace Hilbertien (algèbres de von [N]{}eumann)*]{}”, Gauthier-Villars (1969) : “[*Why the circle is connected: an introduction to quantized topology*]{}”, Math. Intelligencer 11, 27–34 (1989) : “[*Rotation numbers for automorphisms of $C^*$-algebras*]{}”, Pac. J. Math. 128, 31–89 (1987) : “[*$K$-theoretic amenability for $SL_2(\mathbb{Q}_p)$ and the action on the associated tree*]{}”, J.
of Functional Analysis 58, 194–215 (1984) : “[*Fundamentals of the theory of operator algebras, volume I: Elementary theory*]{}”, Pure and Applied Mathematics, Academic Press (1983) : “[*Une démonstration de la conjecture de Baum-Connes pour les groupes réductifs sur un corps $p$-adique et pour certains groupes discrets possédant la propriété (T)*]{}”, Preprint (1998) : “[*Division rings and group von [N]{}eumann algebras*]{}”, Forum Math. 5, 561–576 (1993) : “[*Analytic versions of the zero divisor conjecture*]{}”, in: [*Geometry and Cohomology in Group Theory*]{}, vol. 252 of [*London Math. Soc. Lecture Note Series*]{}, 209–248, Cambridge University Press (1998) : “[*La conjecture de [B]{}aum-[C]{}onnes pour les groupes agissant sur les arbres*]{}”, C.R. Acad. Sci. Paris, Série I 326, 799–804 (1998) : “[*Ranges of traces on ${K}\sb 0$ of reduced crossed products by free groups*]{}”, in: [*Operator algebras and their connections with topology and ergodic theory (Buşteni, 1983)*]{}, 374–408, Springer (1985) : “[*${K}{K}$-groups of crossed products by groups acting on trees*]{}”, Invent. Math. 86, 603–634 (1986) : “[*The trace conjecture—a counterexample*]{}”, $K$-theory 17, 209–213 (1999) : “[*Integrality of $L^2$-Betti numbers*]{}”, Preprintreihe SFB 478 Münster, No. 73 (1999), Math. Ann. 317, 727–750 (2000) : “[*$L^2$-index theorem for boundary manifolds*]{}”, preprint 1998, to appear in Pacific J. of Math. [^1]: e-mail: thomas.schick@math.uni-muenster.de\ www: http://math.uni-muenster.de/u/schickt/\ Fax: ++49 251/83 38370\ Author’s work funded by Deutscher Akademischer Austauschdienst
--- abstract: 'We study how, in momentum-conserving systems, nonintegrable dynamics may affect thermal transport properties. As illustrative examples, two one-dimensional (1D) diatomic chains, representing 1D fluids and lattices, respectively, are numerically investigated. In both models, the two species of atoms are assigned two different masses and are arranged alternately. The systems are nonintegrable unless the mass ratio is one. We find that when the mass ratio is slightly different from one, the heat conductivity may remain essentially unchanged over a certain range of the system size, and that as the mass ratio tends to one, this range may expand rapidly. These results establish a new connection between the macroscopic thermal transport properties and the underlying dynamics.' author: - Shunda Chen - Jiao Wang - Giulio Casati - Giuliano Benenti title: 'Nonintegrability and the Fourier heat conduction law' --- Introduction ============ The Fourier heat conduction law is an empirical law that describes how the heat current is sustained by the temperature gradient, i.e., $$j=-\kappa\nabla T, \label{Four}$$ where $j$ is the heat current, $\nabla T$ is the temperature gradient, and $\kappa$ is known as the thermal conductivity, a finite constant independent of the system size. However, not all systems obey the Fourier law. It is known that the transport properties are strongly affected by conservation laws [@Mazur69; @Zotos; @Ilievski; @Benenti]. In the extreme case that a system is integrable, the heat conductivity is a linear function of the system size. Even in the particular case in which the total momentum is the only conserved quantity, the heat conductivity may diverge as well.
In particular, in one-dimensional (1D) and two-dimensional (2D) cases, since 1970 when Alder and Wainwright reported their findings [@Alder70], it has been realized that momentum conservation may lead to slow decay of time correlations, so that transport is not diffusive and is characterized by diverging transport coefficients. For 1D momentum-conserving systems, the heat conductivity generally depends on the system size $N$ in a power-law manner: $\kappa\sim N^\alpha$. There is no general consensus on the numerical value of $\alpha$, and different theoretical models predict that $\alpha$ is $1/2$ if the interparticle interaction is symmetric and $1/3$ otherwise [@Lee0508; @DLLP0607; @Beijeren]. It is worth noting that these theoretical predictions apply equally to both fluids and lattices. On the other hand, a recent numerical study [@Zhong12] suggested that when the interparticle interactions are asymmetric, there is a significant difference between fluids and lattices. To summarize, for 1D systems, the heat conduction properties are believed to depend on integrability, momentum conservation, interaction symmetry, and the nature of fluids or lattices. For the particular case of 1D momentum-conserving systems, which is the subject of the present paper, all analytical and numerical results so far available do not allow one to draw definite conclusions yet. This problem was analyzed with various 1D models in a recent study [@savin], where it was shown that the Fermi-Pasta-Ulam (FPU) chain with symmetric or asymmetric potential exhibits anomalous heat transport, which is consistent with other recent investigations [@dhar; @wang]. The plateau in the system size dependence of the heat conductivity found in [@chen] for the FPU model with a certain set of parameters turns out to be a finite size effect and, at larger $N$, the heat conductivity starts increasing again.
In particular, in [@wang] it was surmised that the value $1/3$ should be found asymptotically for very large system size, even though, in fact, a value of the exponent $\alpha= 0.15$ was numerically found (up to $N = 65536$). The results of [@savin] also led to an exponent $\alpha<1/3$ for the asymmetric FPU chain. In [@dhar], the value 1/3 was found for the same FPU model but in a different parameter range and for high temperatures. In the same paper, the possibility of a finite temperature phase transition was not ruled out. Finally, in [@savin] normal heat conductivity was reported for 1D momentum-conserving systems with the Lennard-Jones, Morse, and Coulomb potentials. The overall picture is therefore far from being clear. *Rebus sic stantibus*, in order to gain a better understanding of such a complex situation, it might be convenient to consider the 1D diatomic hard-point gas. Indeed, this is a clean and simple system of billiard type and, as such, it should reflect general properties, since billiards have been found fundamental in understanding both classical and quantum dynamical systems. Moreover, an important feature of billiard-type systems is that their dynamical properties do not depend on the temperature, which simplifies their analysis further. By analyzing the hard-point gas, we show that close to the integrable, equal-mass limit, the system exhibits normal heat conduction over longer and longer sizes as the integrable limit is approached. Asymptotically, however, the power-law divergence of the thermal conductivity sets in with the power 1/3. To be more precise, we cannot exclude the possibility of a phase transition as the mass ratio is increased; however, our numerical evidence suggests that this possibility should be quite unlikely. The analysis of the diatomic Toda lattice confirms these conclusions.
These results lead us to speculate that as one approaches the integrable limit, anomalous behavior is perhaps more general than so far expected [@Zhong12; @chen; @savin], even though it might be hard to detect in numerical simulations. 1D diatomic gas model ===================== After being initially proposed in 1986 [@Casati86], the 1D diatomic gas model has attracted increasing interest for investigating various aspects of 1D transport. The model consists of $N$ hard-core point particles in one dimension with alternating masses $M$ and $m$ (for odd- and even-numbered particles, respectively). We fix the averaged particle number density to be unity, so that $N$ refers to the length of the system as well. In order to measure the heat conductivity, two statistical thermal baths with different temperatures $T_L$ and $T_R$ are put into contact with the left and the right end of the system. When the first (last) particle collides with the left (right) side of the system, it is injected back with a new speed $|v|$ determined by the distribution [@heatbath] $$P_{L,R}(v) = \frac{|v|\mu_{1,N}}{k_B T_{L,R}}\exp\left( - \frac{v^{2} \mu_{1,N}}{2 k_B T_{L,R}}\right).$$ Here $\mu_{1}$ and $\mu_{N}$ are the masses of the first and the last particle and $k_B$ is the Boltzmann constant, which is set to unity throughout. In our simulations, each particle is initially given a uniformly distributed random position and a random velocity drawn from the Boltzmann distribution with temperature $T(x_i)=T_L+x_i (T_R-T_L)/N$ ($x_i$ is the position of the $i$th particle). Then the system is evolved by using an efficient event-driven algorithm [@Casati03]. After the system reaches the steady state, we compute the steady heat flux $j$ that crosses the system, i.e., the averaged energy exchanged per unit time between a boundary particle and the heat bath, or that between any two neighboring particles.
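The injection distribution of Eq. (2) is a Rayleigh distribution in the speed, so it can be drawn exactly by inverse-transform sampling: its cumulative distribution is $1-\exp(-v^2\mu/2k_BT)$. A minimal sketch of such a bath update (function and parameter names are ours, not from the paper):

```python
import math
import random

def sample_bath_speed(T_bath, mu, k_B=1.0, rng=random):
    """Draw |v| from P(v) = (|v| mu / (k_B T)) exp(-v^2 mu / (2 k_B T)),
    the flux-weighted Maxwellian of Eq. (2), by inverse transform:
    the CDF is 1 - exp(-v^2 mu / (2 k_B T))."""
    u = rng.random()
    return math.sqrt(-2.0 * k_B * T_bath / mu * math.log(1.0 - u))

# Left bath at T_L = 6 re-injecting a unit-mass boundary particle:
rng = random.Random(0)
speeds = [sample_bath_speed(6.0, 1.0, rng=rng) for _ in range(200000)]
mean_v2 = sum(v * v for v in speeds) / len(speeds)
# For this distribution <v^2> = 2 k_B T / mu, i.e. 12 here.
print(mean_v2)
```

The flux weighting (the extra factor $|v|$ relative to a Maxwellian) is what makes the bath statistically consistent: faster particles cross the boundary more often.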
The heat conductivity is then measured, by assuming the Fourier law, as $\kappa \approx jN/(T_L-T_R)$. We set $T_L=6$ and $T_R=4$, so that the nominal temperature of the system is $T=5$. The heat conductivity at any other temperature $T'$ can be obtained through the scaling relation $\kappa (T') = \kappa(T) \sqrt{T'/T}$. We will focus on how the heat conductivity $\kappa$ depends on the system size $N$ and on the mass ratio $M/m$ (hereafter we set $m\equiv 1$). We emphasize that in our simulations, long enough integration times ($>10^8$) have been taken so that the relative errors of all the measured values of $\kappa$ are less than $1\%$. Now let us turn to the simulation results. First of all, if the mass ratio is unity then the system is integrable and, with the heat bath given by Eq. (2), the heat conductivity reads $$\kappa_{\text {int}}=N\sqrt{\frac{2k^3_B}{m\pi}}/\left(\frac{1} {\sqrt{T_L}}+\frac{1}{\sqrt{T_R}}\right).$$ In Fig. 1(a) this result is compared with our simulations and the agreement is perfect; this can be considered a numerical test. Now, we change the mass ratio to make it slightly larger than one \[see Fig. 1(a)\]; it can be seen that for small $N$ ($<10^2$), $\kappa$ follows its integrable-limit behavior, but as $N$ is increased further, $\kappa$ tends to saturate and becomes constant for $N>10^4$. This could be taken as an empirical demonstration that, at least for these mass ratios and for large enough system size, heat conduction is governed by the Fourier law, which is in clear contrast with existing theoretical and numerical predictions (see, for example, Refs. [@DLLP0607; @Beijeren]). ![(Color online) (a) The heat conductivity $\kappa$ as a function of the system size $N$ in the 1D diatomic gas model. The two horizontal lines denote the saturation value of $\kappa_{GK}(N)$ \[Eq. (\[eq:GK\])\] at large $N$, for mass ratios $M=1.07$ and $1.1$.
(b) Comparison between the numerically computed temperature profile and the analytic expression \[see Eq. (4)\] for $M=1.07$ at different system sizes.](fig1.eps "fig:") The validity of the Fourier law also determines the internal temperature profile of the steady state. Indeed, by assuming the Fourier law and equating the averaged local heat flux along the system, one obtains [@ADhar01] $$T(x)=\left[T_L^{3/2}\left(1-\frac{x}{N}\right) +T_R^{3/2}\frac{x}{N}\right]^{2/3}.$$ In Fig. 1(b), this prediction is compared with our simulation results for $M=1.07$. Numerically, the temperature of the $i$th particle is measured through the time average of its kinetic energy, i.e., $T(x_i)=\langle \mu_i v_i^2/ k_B \rangle$, where $\mu_i\in\{M,m\}$ and $v_i$ are its mass and velocity, respectively. It is seen that the numerical results are in very good agreement, for $N>10^4$, with this theoretical prediction. ![(Color online) (a) Correlation functions of the total heat current for the 1D diatomic gas model. The dotted line indicates the scaling $\sim t^{-1}$: A faster decay of the correlation function implies convergence of the heat conductivity in the thermodynamic limit. (b) The comparison of the heat conductivity obtained by using the Green-Kubo formula \[Eq. (5)\] and by using the nonequilibrium setting. In both panels $M=1.07$.](fig2.eps "fig:") We now turn to linear response theory to check whether this approach leads to consistent results, thus confirming the validity of the Fourier law for large $N$. Based on the Green-Kubo formula, which relates transport coefficients to the current time-correlation functions, the heat conductivity of a 1D finite system can be expressed as [@Lepri03; @Prosen05] $$\kappa_{GK}(N)=\frac{1}{k_B T^2 N}\int_0^{\tau_{tr}}dt\langle J(0)J(t) \rangle.
\label{eq:GK}$$ In this formula, $J\equiv\sum_i\mu_i v_i^3/2$ represents the total heat current and $\langle J(0)J(t)\rangle$ is its correlation function measured in the equilibrium state with periodic boundary conditions. The integration is truncated at time $\tau_{tr}$, which is suggested to take the value $\tau_{tr}=N/(2v_s)$ ($v_s$ is the sound speed of the system) [@Chen14]. To numerically compute $\kappa_{GK}(N)$, we consider isolated systems with periodic boundary conditions. The initial condition is randomly assigned with the constraints that the total momentum is zero and the total energy corresponds to $T=5$. The system is then evolved and, after the equilibrium state is reached, we compute $\langle J(0)J(t)\rangle$ and the integral in Eq. (5). ![(Color online) (a) The heat conductivity $\kappa$ versus the system size $N$ for the 1D diatomic gas model. From top to bottom, the mass ratio $M$ is respectively $1.07$, $1.10$, $1.14$, $1.22$, $1.30$, $1.40$, the golden mean ($\approx1.618$), and $3$. The corresponding tangent $\alpha$ of the $\kappa$-$N$ curve is given in (b) with the same symbols. In the inset we plot the turning point $N^\ast$, after which $\alpha$ starts growing with $N$, as a function of $M-1$. The best fit (the dotted line) suggests $N^\ast = 54/(M-1)^{3.2}$.](fig3.eps "fig:"){width="1.01\columnwidth"} The results for $M=1.07$ are presented in Fig. 2. It can be seen from Fig. 2(a) that for a large system ($N>10^4$), the correlation function changes slowly at short times ($t<10^2$), which reflects the fact that the system still mimics its integrable limit; however, from $t\sim 10^2$ to $10^3$, the correlation function undergoes a rapid decay and eventually, when $t> 10^3$, it begins to oscillate around zero. (The negative values of $\langle J(0)J(t)\rangle$ are not shown in this log-log scale.) In Fig. 2(b), the dependence of $\kappa_{GK}$ on the system size is shown.
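Discretized, Eq. (5) amounts to integrating the estimated current autocorrelation up to the truncation time $\tau_{tr}=N/(2v_s)$. A schematic estimator, assuming a sampled current time series `J` with time step `dt` (conventions and names are ours, not the production code behind the figures):

```python
import numpy as np

def autocorr(J, t):
    """<J(0)J(t)> estimated from a stationary time series (lag t in steps)."""
    L = len(J)
    return float(np.dot(J[:L - t], J[t:]) / (L - t))

def kappa_GK(J, dt, N, v_s, T, k_B=1.0):
    """kappa_GK(N) = (1 / (k_B T^2 N)) * int_0^{tau_tr} <J(0)J(t)> dt,
    with the truncation tau_tr = N / (2 v_s) suggested in the text."""
    tau_tr = N / (2.0 * v_s)
    n_steps = min(int(tau_tr / dt), len(J) - 1)
    integral = sum(autocorr(J, t) * dt for t in range(n_steps))
    return integral / (k_B * T**2 * N)

# Sanity check with a constant current J = c: <J(0)J(t)> = c^2, so the
# estimator must return c^2 * tau_tr / (k_B T^2 N) = c^2 / (2 v_s k_B T^2).
J = np.full(10000, 3.0)
print(kappa_GK(J, dt=0.1, N=100, v_s=1.0, T=5.0))  # 9 / (2 * 25) = 0.18
```

In practice the autocorrelation of a real current decays, and the quality of the estimate hinges on the run being much longer than $\tau_{tr}$ so that the lag averages are well converged.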
It can be seen that $\kappa_{GK}$ agrees with $\kappa$ despite some deviations at small $N$. Next we consider the dependence on the mass ratio. By using the same nonequilibrium setting we have extensively investigated the system size dependence of $\kappa$ for mass ratios ranging from 1.07 to 64. The results for $1.07\le M\le 3$ are shown in Fig. 3(a). A three-stage process can be recognized: For small system sizes, $\kappa \sim N$, similar to the integrable case. For large system sizes, $\kappa$ tends toward $\sim N^{1/3}$. In between these two regimes, there appears an intermediate, bridging regime, where $\kappa$ changes at a lower rate (see particularly the cases of $M=1.22$ and $1.30$). In fact, in this intermediate regime, as $M$ is decreased, the conductivity $\kappa$ tends to be constant over a larger and larger interval. For $M\ge3$ instead (data not shown here) the dependence $\kappa\sim N^{1/3}$ appears more and more clearly, in agreement with the existing theories [@DLLP0607; @Beijeren]. In order to better understand the dependence of $\kappa$ on $N$, along each curve provided in Fig. 3(a) we computed its tangent $\alpha(N)$ and plotted the results in Fig. 3(b). Note that $\alpha(N)$ exhibits a non-monotonic behavior and reaches a minimum at a certain system size $N^\ast$. Interestingly enough, the value of $N^\ast$ appears to grow very fast with decreasing $M$ \[see the inset in Fig. 3(b)\]. This result shows that a very small tangent $\alpha$, i.e., a Fourier-like behavior of thermal conduction, can be observed over an increasingly large system size when the integrable limit is approached. At the same time, for $N>N^\ast$, anomalous behavior emerges gradually. The conclusion is that for any mass ratio different from unity the behavior $\kappa \sim N^{1/3}$ seems always to set in, even though it cannot be detected numerically when the mass ratio approaches unity, since in this limit $N^\ast$ becomes exceedingly large.
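The inset fit $N^\ast = 54/(M-1)^{3.2}$ is a power law, conventionally obtained as a straight-line least-squares fit in log-log coordinates. A sketch of that procedure; since the measured $(M, N^\ast)$ points are not tabulated here, the data below are hypothetical points generated from the quoted law itself, merely to illustrate the fitting step:

```python
import numpy as np

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression in log-log
    coordinates: log y = log a + b * log x."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Hypothetical points lying exactly on the quoted fit N* = 54 / (M - 1)^3.2:
M = np.array([1.07, 1.10, 1.14, 1.22, 1.30, 1.40])
N_star = 54.0 / (M - 1.0) ** 3.2
a, b = fit_power_law(M - 1.0, N_star)
print(a, b)  # recovers a ~ 54 and b ~ -3.2
```

With real, noisy $N^\ast$ measurements the same regression gives the quoted prefactor and exponent together with their uncertainties.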
On the other hand, based on our available data, the possibility that there is a phase transition around $M\approx 1.3$ cannot be ruled out with certainty. ![(Color online) (a) The heat conductivity measured in the nonequilibrium setting for the 1D diatomic Toda lattice with mass ratio $M=1$ (the integrable case), $1.07$, $1.10$, $1.14$, $1.22$, $1.30$, $1.50$, and $2$. The dotted lines indicate, respectively, the ballistic behavior $\kappa\sim N$ and the best power-law fit for the case of $M=2$, $\kappa\sim N^\alpha$, with $\alpha=0.25$. The horizontal lines denote the saturated values of $\kappa_{GK}(N)$ for $M=1.07$ and $1.10$. (b) The corresponding tangent $\alpha$ of the $\kappa$-$N$ curve with the same symbols. (c) The heat current correlation function for $M=1.10$ with $N=25600$, showing a decay faster than $\sim 1/t$.](fig4.eps "fig:") 1D diatomic Toda chain ====================== The above-described scenario, in which the Fourier law appears in the “vicinity” of the integrable limit, is not exclusive to the gas model. In the following we show that it is also the case for lattices. The model we consider here is a diatomic variant of the Toda lattice [@Lepri03; @Hatano98] with the Hamiltonian $$H=\sum_{i}\left[\frac{p_{i}^2}{2\mu_i}+U(x_{i}-x_{i-1})\right],$$ where the potential is $U(x)=\exp(-x)+x$, and the particles take masses $M$ and $m\equiv 1$ alternately. As for the gas model, this system is integrable when the mass ratio is one. We measure the heat conductivity in both the nonequilibrium and equilibrium settings again, and find that the results agree with each other. In the nonequilibrium simulations, we couple the system to two Langevin heat baths [@Dharrev] with temperatures $T_L=1.2$ and $T_R=0.8$. The heat current is defined as $j\equiv\langle j_i\rangle$ with $j_i\equiv v_i{\partial}U(x_{i+1}-x_i) /{\partial x_i}$ [@TMai07]. In Fig. 4(a) the measured $\kappa$ for different values of $M$ is given.
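For the Hamiltonian of Eq. (6), with $U(x)=e^{-x}+x$, the bond force derives from $U'(x)=1-e^{-x}$ and the local current is $j_i = v_i\,\partial U(x_{i+1}-x_i)/\partial x_i = v_i\,(e^{-(x_{i+1}-x_i)}-1)$. A minimal velocity-Verlet sketch of the bulk dynamics (periodic boundaries, no baths; all conventions are ours, not the authors' production code):

```python
import numpy as np

def U_prime(r):
    # U(r) = exp(-r) + r  =>  U'(r) = 1 - exp(-r)
    return 1.0 - np.exp(-r)

def forces(x):
    """F_i = U'(x_{i+1} - x_i) - U'(x_i - x_{i-1}), periodic chain."""
    r = x - np.roll(x, 1)                 # bond stretches x_i - x_{i-1}
    return U_prime(np.roll(r, -1)) - U_prime(r)

def step(x, v, mu, dt):
    """One velocity-Verlet step for the diatomic Toda chain."""
    v_half = v + 0.5 * dt * forces(x) / mu
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * forces(x_new) / mu
    return x_new, v_new

def energy(x, v, mu):
    r = x - np.roll(x, 1)
    return float(np.sum(0.5 * mu * v**2) + np.sum(np.exp(-r) + r))

def local_current(x, v):
    """j_i = v_i * dU(x_{i+1} - x_i)/dx_i = v_i * (exp(-(x_{i+1}-x_i)) - 1)."""
    r_right = np.roll(x, -1) - x
    return v * (np.exp(-r_right) - 1.0)

# Diatomic chain, M = 1.1, m = 1, small random initial displacements:
rng = np.random.default_rng(1)
n = 64
mu = np.where(np.arange(n) % 2 == 0, 1.1, 1.0)
x, v = 0.1 * rng.standard_normal(n), np.zeros(n)
E0 = energy(x, v, mu)
for _ in range(2000):
    x, v = step(x, v, mu, dt=0.01)
print(abs(energy(x, v, mu) - E0))  # small: the symplectic integrator conserves energy
```

The total force telescopes to zero, so total momentum is conserved exactly by this update, which is the conservation law at the heart of the anomalous-transport discussion above.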
Again, for mass ratios close to unity, $\kappa$ is close to the integrable case when the system is small ($N<10^2$) but tends to a value which agrees with that obtained by using the Green-Kubo formula for large system sizes ($N>10^4$). For a larger mass ratio (see the case of $M=2$) the heat conductivity is anomalous. Similarly to the hard-point gas model, the tangent $\alpha(N)$ exhibits a nonmonotonic behavior, with the minimum reached at a system size $N^*$ that grows rapidly when the integrable limit $M=1$ is approached \[see Fig. 4(b)\]. With regard to the equilibrium simulations, we assume periodic boundary conditions, null total momentum, and total energy corresponding to $T=1$. The total heat current is $J=\sum_i j_i$ and its correlation function for $M=1.1$ is shown in Fig. 4(c), where it exhibits a faster than $\sim 1/t$ decay, as expected in the case of normal heat conduction. The overall emerging picture is the same as presented above for the gas model. Given the sharp difference in the dynamics of the two systems, this similarity is unlikely to be a coincidence; rather, it strongly suggests some general mechanism in the heat conduction properties as one departs from the integrable limit. Summary and discussions ======================= We have shown that in two paradigmatic 1D momentum-conserving systems, the heat conductivity can be independent of the system size over a considerably wide range. Such a Fourier-like behavior appears as a quite general feature for lattice or gas models close to the integrable limit. Apart from theoretical implications in transport theory, our finding may have experimental relevance as well, because the system size over which the heat conductivity remains constant grows very fast as the system approaches its integrable limit. Our present understanding of the heat conduction problem is mainly based on empirical numerical evidence, while rigorous analytical results are hard to obtain.
Numerical analysis consists of steady-state, nonequilibrium simulations or of equilibrium simulations based on linear response theory and the Green-Kubo formula. If both methods give reasonable evidence for the Fourier law and if, moreover, they lead to the same numerical value of the heat conductivity $\kappa$, then this has generally been considered conclusive evidence that the Fourier law is valid. This conclusion, however, may not be correct. As we have shown in this paper, the agreement between equilibrium and nonequilibrium simulations does not, *per se*, allow one to draw any definite conclusion. Indeed, this agreement might be a finite size effect, and the Fourier law may appear to hold up to some system size $N$ after which anomalous behavior sets in. The main point is that we have no indications at all about the critical value of $N$ after which conductivity becomes anomalous. What we know from the numerical analysis of this paper is that this critical value seems to diverge rapidly as one approaches the integrable limit. This result is quite surprising to us and it is a feature which we do not understand yet. While it is natural to expect an initial ballistic behavior over larger and larger system sizes as one approaches the integrable limit, it is absolutely not clear why the value of $\kappa$ appears to saturate to a constant value and why this Fourier-like behavior may persist over an increasingly wide range of the system size before entering the anomalous regime. Acknowledgements {#acknowledgements .unnumbered} ================ Useful discussions with Stefano Lepri are gratefully acknowledged. This work is supported by NSFC (Grants No. 11275159 and No. 11335006) and by MIUR-PRIN. [99]{} P. Mazur, Physica (Amsterdam) [**43**]{}, 533 (1969); M. Suzuki, [*ibid*]{}, [**51**]{}, 277 (1971). X. Zotos, F. Naef, and P. Prelovšek, Phys. Rev. B [**55**]{}, 11029 (1997). E. Ilievski and T. Prosen, Commun. Math. Phys. [**318**]{}, 809 (2013). G. Benenti, G.
Casati, and J. Wang, Phys. Rev. Lett. **110**, 070604 (2013). B. J. Alder and T. E. Wainwright, Phys. Rev. A **1**, 18 (1970). G. R. Lee-Dadswell, B. G. Nickel, and C. G. Gray, Phys. Rev. E [**72**]{}, 031202 (2005); J. Stat. Phys. [**132**]{}, 1 (2008) L. Delfini, S. Lepri, R. Livi, and A. Politi, Phys. Rev. E [**73**]{}, 060201(R) (2006); J. Stat. Mech. P02007 (2007). H. van Beijeren, Phys. Rev. Lett. **108**, 180601 (2012). Y. Zhong, Y. Zhang, J. Wang, and H. Zhao, Phys. Rev. E **85**, 060102(R) (2012). A. V. Savin and Y. A. Kosevich, Phys. Rev. E [**89**]{}, 032102 (2014). S. G. Das, A. Dhar, and O. Narayan, J. Stat. Phys. [**154**]{}, 204 (2013). L. Wang, B. Hu, and B. Li, Phys. Rev. E [**88**]{}, 052112 (2013). S. Chen, Y. Zhang, J. Wang, and H. Zhao, arXiv: 1204.5933. G. Casati, Found. Phys. **16**, 51 (1986). J. L. Lebowitz and H. Spohn, J. Stat. Phys. **19**, 633 (1978); R. Tehver, F. Toigo, J. Koplik, and J. R. Banavar, Phys. Rev. E **57**, R17 (1998). G. Casati and T. Prosen, Phys. Rev. E **67**, 015203 (2003). A. Dhar, Phys. Rev. Lett. **86**, 3554 (2001). S. Lepri, R. Livi, and A. Politi, Phys. Rep. **377**, 1 (2003). T. Prosen and D. K. Campbell, Chaos **15**, 015117 (2005). S. Chen, Y. Zhang, J. Wang, and H. Zhao, Phys. Rev. E **89**, 022111 (2014). T. Hatano, Phys. Rev. E **59**, 1(R) (1999). A. Dhar, Adv. Phys. **57**, 457 (2008). T. Mai, A. Dhar, and O. Narayan, Phys. Rev. Lett. **98**, 184301 (2007).
--- abstract: 'Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel **Convolutional-De-Convolutional** (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at <https://bitbucket.org/columbiadvmm/cdc>.' 
author: - Zheng Shou - Jonathan Chan - Alireza Zareian - Kazuyuki Miyazawa - 'Shih-Fu Chang' bibliography: - 'egbib.bib' title: 'CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos' ---

Introduction
============

Recently, temporal action localization has drawn considerable interest in the computer vision community [@THUMOS14; @THUMOS15; @th1; @th2; @th3; @AN1; @AN2; @scnn_shou_wang_chang_cvpr16; @Richard_2016_CVPR; @stanford_cvpr16; @victor_eccv16; @fast_temporal_activity_cvpr16; @spoton_eccv16]. This task involves two components: (1) determining whether a video contains specific actions (such as diving, jumping, etc.) and (2) identifying the temporal boundaries (start time and end time) of each action instance.

![ Our framework for precise temporal action localization. Given an input raw video, it is fed into our CDC localization network, which consists of 3D ConvNets for semantic abstraction and a novel CDC network for dense score prediction at the frame-level. Such fine-granular score sequences are combined with segment proposals to detect action instances with precise boundaries. []{data-label="framework"}](framework-9-crop.pdf){width="50.00000%"}

A typical framework used by many state-of-the-art systems [@AN1; @AN2; @th1; @th2; @th3] fuses a large set of features and trains classifiers that operate on sliding windows or segment proposals. Recently, an end-to-end deep learning framework called Segment-CNN (S-CNN) [@scnn_shou_wang_chang_cvpr16] based on 3D ConvNets [@3dcnn] demonstrated superior performance in both efficiency and accuracy on standard benchmarks such as THUMOS’14 [@THUMOS14]. S-CNN consists of a proposal network for generating candidate video segments and a localization network for predicting segment-level scores of action classes.
Although the localization network can be optimized to select segments with high overlaps with ground truth action instances, the detected action boundaries are still restricted to the pre-determined boundaries of a fixed set of proposal segments. As illustrated in Figure \[framework\], our goal is to refine the temporal boundaries of proposal segments to precisely localize the boundaries of action instances. This motivates us to move beyond existing practices based on segment-level predictions, and to explicitly focus on the issue of fine-grained, dense predictions in time. To achieve this goal, some existing techniques can be adapted: (1) single-frame classifiers operate on each frame individually; (2) Recurrent Neural Networks (RNN) further take into account temporal dependencies across frames. But both fail to explicitly model the spatio-temporal information in raw videos. 3D CNN [@3dcnn; @scnn_shou_wang_chang_cvpr16] has been shown to learn spatio-temporal abstractions of high-level semantics directly from raw videos, but it loses granularity in time, which is important for precise localization, as mentioned above. For example, the layers from $\tt conv1a$ to $\tt conv5b$ in the well-known C3D architecture [@3dcnn] reduce the temporal length of an input video by a factor of 8. In pixel-level semantic segmentation, de-convolution has proven to be an effective upsampling method in both the image [@Long_2015_CVPR; @Long_2016_PAMI] and video [@V2V] domains for producing output of the same resolution as the input. In our temporal localization problem, the temporal length of the output should be the same as that of the input video, but the spatial size should be reduced to 1x1. Therefore, we not only need to upsample in time but also need to downsample in space.
To this end, we propose a novel **Convolutional-De-Convolutional** (CDC) filter, which performs convolution in space (for semantic abstraction) and de-convolution in time (for frame-level resolution) simultaneously. It is unique in jointly modeling the spatio-temporal interactions between summarizing high-level semantics in space and inferring fine-grained action dynamics in time. On top of 3D ConvNets, we stack multiple CDC layers to form our CDC network, which achieves the aforementioned goal of temporal upsampling and spatial downsampling, and can thereby determine action categories and refine the boundaries of proposal segments to precisely localize action instances. In summary, this paper makes three novel contributions: \(1) To the best of our knowledge, this is the first work to combine two reverse operations (convolution and de-convolution) into a joint CDC filter, which simultaneously conducts downsampling in space and upsampling in time to infer both high-level action semantics and temporal dynamics at a fine granularity in time. \(2) We build a CDC network using the proposed CDC filter to specifically address precise temporal action localization. The CDC network can be efficiently trained end-to-end from raw videos to produce dense scores that are used to predict action instances with precise boundaries. \(3) Our model outperforms state-of-the-art methods in per-frame action labeling and significantly boosts the precision of temporal action localization over a wide range of detection thresholds.

Related work
============

**Action recognition and detection.** Early works mainly focus on simple actions in well-controlled environments and can be found in recent surveys [@survey1; @survey2; @survey3]. Recently, researchers have started investigating untrimmed videos in the wild and have designed various features and techniques.
We briefly review the following, which are also useful in temporal action localization: frame-level Convolutional Neural Networks (CNN) trained on ImageNet [@ILSVRC15], such as AlexNet [@alex], VGG [@Simonyan15], ResNet [@He_2016_CVPR], etc.; the 3D CNN architecture called C3D [@3dcnn] trained on a large-scale sports video dataset [@sports1m]; the improved Dense Trajectory Feature (iDTF) [@dtf; @idtf], consisting of HOG, HOF, and MBH features extracted along dense trajectories with camera motion influences eliminated; key frame selection [@Gan_2015_CVPR]; ConvNets adapted to use motion flow as input [@Simonyan14b; @Feichtenhofer_2016_CVPR; @TSN]; and feature encoding with Fisher Vectors (FV) [@FV1; @Oneata2] and VLAD [@VLAD1; @xu2015discriminative]. There are also studies on spatio-temporal action detection, which aim to detect action regions with bounding boxes over consecutive frames. Various methods have been developed, from the perspectives of supervoxel merging [@tube; @walk; @Soomro_2016_CVPR], tracking [@learntrack; @tubedt; @apt; @Singh_2016_CVPR], object detection and linking [@humanfocus; @actiontubes; @gangyu; @tubedt; @apt], spatio-temporal segmentation [@ST-CNN; @Xu_2016_CVPR], and leveraging still images [@objectaction; @Sultani_2016_CVPR; @15000]. **Temporal action localization.** Gaidon et al. [@actoms2; @actoms] introduced the problem of temporally localizing actions in untrimmed videos, focusing on limited actions such as “drinking and smoking” [@Laptev_2007] and “open door and sit down” [@Laptev_2009]. Later, researchers worked on building large-scale datasets consisting of complex action categories, such as THUMOS [@THUMOS14; @THUMOS15] and MEXaction2 [@mex1; @mex2; @mex3], and datasets focusing on fine-grained actions [@MPII; @Charades1; @Charades2] or activities of high-level semantics [@caba2015activitynet].
The typical approach used in most systems [@AN1; @AN2; @th1; @th2; @th3] is to extract a pool of features, feed them to train SVM classifiers, and then apply these classifiers on sliding windows or segment proposals for prediction. In order to design a model specific to temporal localization, Richard and Gall [@Richard_2016_CVPR] proposed using statistical length and language modeling to represent temporal and contextual structures. Heilbron et al. [@fast_temporal_activity_cvpr16] introduced a sparse learning framework for generating segment proposals with high recall. Recently, deep learning methods have shown improved performance in localizing action instances. RNN has been widely used to model temporal state transitions over frames: Escorcia et al. [@victor_eccv16] built a temporal action proposal system based on Long Short-Term Memory (LSTM); Yeung et al. [@stanford_cvpr16] used REINFORCE to learn decision policies for an RNN-based agent; Yeung et al. [@yeung2015every] introduced the MultiTHUMOS dataset of multi-label annotations for every frame in THUMOS videos and defined an LSTM network to model multiple input and output connections; Yuan et al. [@yuan_cvpr16] proposed a pyramid of score distribution features at the center of each sliding window to capture motion information over multiple resolutions, and utilized an RNN to improve inter-frame consistency; Sun et al. [@sssn_mm15] leveraged web images to train an LSTM model when only video-level annotations are available. In addition, Lea et al. [@ST-CNN] used temporal 1D convolution to capture scene changes while actions were being performed. Although RNNs and temporal 1D convolution can model temporal dependencies among frames and make frame-level predictions, they are usually placed on top of deep ConvNets that take a single frame as input, rather than directly modeling spatio-temporal characteristics in raw videos.
Shou et al. [@scnn_shou_wang_chang_cvpr16] proposed an end-to-end Segment-based 3D CNN framework (S-CNN), which outperformed other RNN-based methods by capturing spatio-temporal information simultaneously. However, S-CNN lacks the capability to predict at a fine time resolution and to localize precise temporal boundaries of action instances. \[soa\] **De-convolution and semantic segmentation.** Zeiler et al. [@deconv_cvpr10] originally proposed de-convolutional networks for image decomposition, and later Zeiler and Fergus [@deconv_eccv14] re-purposed the de-convolutional filter to map CNN activations back to the input in order to visualize where the activations come from. Long et al. [@Long_2015_CVPR; @Long_2016_PAMI] showed that deep learning based approaches can significantly boost performance in image semantic segmentation. They proposed Fully Convolutional Networks (FCN) to output feature maps of reduced dimensions, and then employed de-convolution for upsampling to make dense, pixel-level predictions. The fully convolutional architecture and learnable upsampling method are efficient and effective, and thus inspired many extensions [@Noh_2015_ICCV; @hong2015decoupled; @Liu_2016_CVPR; @SegNet_2016_PAMI; @Lin_2016_CVPR; @Zheng_2015_ICCV; @deeplab_2015; @deeplab_2016; @YuKoltun2016]. Recently, Tran et al. [@V2V] extended de-convolution from 2D to 3D and achieved competitive results on various voxel-level prediction tasks such as video semantic segmentation. This shows that de-convolution is also effective in the video domain and has the potential to be adapted for making dense predictions in time for our temporal action localization task. However, unlike the problem of semantic segmentation, we need to upsample in time but maintain downsampling in space.
Instead of stacking a convolutional layer and a de-convolutional layer to conduct upsampling and downsampling separately, our proposed CDC filter learns a joint model that performs these two operations simultaneously, and proves to be more powerful and easier to train.

Convolutional-De-Convolutional networks {#cdc}
=======================================

The need of downsampling and upsampling {#build}
---------------------------------------

The C3D architecture, consisting of 3D ConvNets followed by three Fully Connected (FC) layers, has achieved promising results in video analysis tasks such as recognition [@3dcnn] and localization [@scnn_shou_wang_chang_cvpr16]. Further, Tran et al. [@V2V] experimentally demonstrated that the 3D ConvNets, from $\tt conv1a $ to $\tt conv5b $, are effective in summarizing spatio-temporal patterns from raw videos into high-level semantics. Therefore, we build our CDC network upon C3D. We adopt the layers from $\tt conv1a $ to $\tt conv5b $ as the first part of our CDC network. Of the remaining layers in C3D, we keep $\tt pool5$ to perform max pooling in height and width by a factor of 2 but retain the temporal length. Following conventional settings [@3dcnn; @scnn_shou_wang_chang_cvpr16; @V2V], we set the height and width of the CDC network input to 112x112. Given an input video segment of temporal length $L$, the output data shape of $\tt pool5 $ is (512, $L/8$, 4, 4). Now, in order to predict action class scores at the original temporal resolution (frame-level), we need to upsample in time (from $L/8$ back to $L$) and downsample in space (from 4x4 to 1x1). To this end, we propose the CDC filter and design a CDC network that adapts the FC layers from C3D to perform the required upsampling and downsampling operations. Details are described in Sections \[3.2\] and \[3.3\].
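The shape bookkeeping above can be checked with a small sketch. The helper below is ours (hypothetical, not part of the released code); the ceil-division spatial pooling mirrors the padded pooling in C3D that maps a 7x7 map to 4x4 at $\tt pool5$.

```python
import math

def c3d_pool5_shape(L, H=112, W=112, C=512):
    """Trace the (channels, time, height, width) shape through the five
    pooling layers of C3D as modified in the CDC network: pool1 pools only
    in space, pool2-pool4 pool in space and time, and the modified pool5
    pools only in space (temporal length retained). Assumes L divisible
    by 8, as for the 32-frame windows used later."""
    t, h, w = L, H, W
    # pool1: kernel (1, 2, 2) -- spatial only
    h, w = math.ceil(h / 2), math.ceil(w / 2)
    # pool2..pool4: kernel (2, 2, 2) -- each halves the temporal length
    for _ in range(3):
        t, h, w = t // 2, math.ceil(h / 2), math.ceil(w / 2)
    # pool5 (modified): kernel (1, 2, 2) -- spatial only
    h, w = math.ceil(h / 2), math.ceil(w / 2)
    return (C, t, h, w)
```

For a 32-frame 112x112 window this gives (512, 4, 4, 4), matching the stated (512, $L/8$, 4, 4) shape at $\tt pool5$.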
CDC filter {#3.2}
----------

In this section, we walk through a concrete example of adapting the $\tt FC6$ layer in C3D to perform spatial downsampling by a factor of 4x4 and temporal upsampling by a factor of 2. For the sake of clarity, we focus on how a filter operates within one input channel and one output channel.

![Illustration of how a filter in $\tt conv6$, $\tt deconv6$, $\tt CDC6$ operates on $\tt pool5$ output feature maps (grey rectangles) stacked in time. In each panel, dashed lines with the same color indicate the same filter sliding over time. Nodes stand for outputs.[]{data-label="filter"}](filter-8-crop.pdf){width="46.00000%"}

As explained in [@Long_2015_CVPR; @Long_2016_PAMI], the FC layer is a special case of a convolutional layer (when the input data and the kernel have the same size and there is no striding and no padding). So we can transform $\tt FC6$ into $\tt conv6$, as shown in Figure \[filter\] (a). Previously, a filter in $\tt FC6$ takes a 4x4 feature map from $\tt pool5 $ as input and outputs a single value. Now, a filter in $\tt conv6$ can slide over $L/8$ feature maps of size 4x4 stacked in time and respectively output $L/8$ values in time. The kernel size of $\tt conv6$ is 4x4=16. Although $\tt conv6$ performs spatial downsampling, the temporal length remains unchanged. To upsample in time, as shown in Figure \[filter\] (b), a straightforward solution adds a de-convolutional layer $\tt deconv6$ after $\tt conv6$ to double the temporal length while maintaining the spatial size. The kernel size of $\tt deconv6$ is 2. Therefore, the total number of parameters for this solution (separate $\tt conv6$ and $\tt deconv6$) is 4x4+2=18. However, this solution conducts temporal upsampling and spatial downsampling in a separate manner. Instead, we propose the CDC filter $\tt CDC6$ to jointly perform these two operations.
As illustrated in Figure \[filter\] (c), a $\tt CDC6$ filter consists of two independent convolutional filters (the red one and the green one) operating on the same input 4x4 feature map. Each of these convolutional filters has the same kernel size as the filter in $\tt conv6$ and separately outputs one single value. So each 4x4 feature map results in 2 outputs in time. As the CDC filter slides on $L/8$ feature maps of size 4x4 stacked in time, this input feature volume of temporal length $L/8$ is upsampled in time to $L/4$, and its spatial size is reduced to 1x1. Consequently, in space this CDC filter is equivalent to a 2D convolutional filter of kernel size 4x4; in time it has the same effect as a 1D de-convolutional filter of kernel size 2, stride 2, padding 0. The kernel size of such a joint filter in $\tt CDC6$ is 2x4x4=32, which is larger than the separate convolution and de-convolution solution (18). Therefore, a CDC filter is more powerful for jointly modeling high-level semantics and temporal dynamics: each output in time comes from an independent convolutional kernel dedicated to this output (the red/green node corresponds to the red/green kernel); however, in the separate convolution and de-convolution solution, different outputs in time share the same high-level semantics (the blue node) outputted by one single convolutional kernel (the blue one). Having more parameters makes the CDC filter harder to learn. To remedy this issue, we propose a method to adapt the pre-trained $\tt FC6$ layer in C3D to initialize $\tt CDC6$. After we convert $\tt FC6$ to $\tt conv6$, $\tt conv6$ and $\tt CDC6$ have the same number of channels (4,096) and thus the same number of filters. 
Each filter in $\tt conv6$ can be used to initialize its corresponding filter in $\tt CDC6$: the filter in $\tt conv6$ (the blue one) has the same kernel size as each of the two convolutional filters (the red one and the green one) in the $\tt CDC6$ filter and thus can serve as the initialization for them both.\[init\] Generally, assume that a CDC filter $F$ of kernel size ($k_l$, $k_h$, $k_w$) takes the input receptive field $X$ of height $k_h$ and width $k_w$, and produces $Y$, which consists of $k_l$ successive outputs in time. For the example given in Figure \[filter\] (c), we have $k_l=2$, $k_h=4$, $k_w=4$. Given the indices $a \in \left\{ {1,...,{k_h}} \right\} $ and $b \in \left\{ {1,...,{k_w}} \right\}$ in height and width respectively for $X$, and the index $c \in \left\{ {1,...,{k_l}} \right\}$ in time for $Y$: during the forward pass, we can compute $Y$ by $$Y\left[ c \right] = \sum\limits_{a = 1}^{{k_h}} {\sum\limits_{b = 1}^{{k_w}} {F\left[ {c,a,b} \right] \cdot X\left[ {a,b} \right]} };$$ during back-propagation, our CDC filter follows the chain rule and propagates gradients from $Y$ to $X$ via $$\frac{{\partial {\cal L}}}{{\partial X\left[ {a,b} \right]}} = \sum\limits_{c = 1}^{{k_l}} {F\left[ {c,a,b} \right] \cdot \frac{{\partial {\cal L}}}{{\partial Y\left[ c \right]}}},$$ where ${\cal L}$ denotes the training loss. A CDC filter $F$ can be regarded as coupling a series of convolutional filters (each of kernel size $k_h$ in height and $k_w$ in width) in time with a shared input receptive field $X$, while at the same time $F$ performs 1D de-convolution with kernel size $k_l$ in time. In addition, the cross-channel mechanisms within a CDC layer and the way of adding biases to the outputs of the CDC filters follow the conventional strategies used in convolutional and de-convolutional layers.

![image](network-3-crop.pdf){width="\textwidth"}

Design of CDC network architecture {#3.3}
----------------------------------

In Figure \[network\], we illustrate our CDC network for labeling every frame of a video.
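A single-channel version of the forward and backward passes defined above can be sketched as follows. This is a minimal sketch for the non-overlapping case of Section \[3.2\] (temporal kernel $k_l$, stride $k_l$, no padding); the function names are ours.

```python
import numpy as np

def cdc_forward(F, X):
    """Forward pass of a single-channel CDC filter.
    F: (k_l, k_h, k_w) filter; X: (T, k_h, k_w) feature maps stacked in
    time. Each 4x4 receptive field yields k_l outputs, so Y has length
    k_l * T (temporal upsampling, spatial collapse to 1x1)."""
    k_l = F.shape[0]
    Y = np.empty(k_l * X.shape[0])
    for i in range(X.shape[0]):
        for c in range(k_l):
            # Y[c] = sum_{a,b} F[c,a,b] * X[a,b], per receptive field
            Y[k_l * i + c] = np.sum(F[c] * X[i])
    return Y

def cdc_backward(F, dY):
    """Backward pass: propagate gradients dL/dY back to dL/dX via the
    chain rule, dX[a,b] += F[c,a,b] * dY[c] for each receptive field."""
    k_l, k_h, k_w = F.shape
    T = dY.shape[0] // k_l
    dX = np.zeros((T, k_h, k_w))
    for i in range(T):
        for c in range(k_l):
            dX[i] += F[c] * dY[k_l * i + c]
    return dX
```

Because the mapping is linear, the backward pass is exactly the transpose of the forward pass, which is what makes a finite-difference gradient check exact up to floating-point error.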
The final output shape of the CDC network is ($K$+1, $L$, 1, 1), where $K$+1 stands for $K$ action categories plus the background class. As described in Section \[build\], from $\tt conv1a $ to $\tt pool5 $, the temporal length of an input segment has been reduced from $L$ to $L/8$. On top of $\tt pool5 $, in order to make per-frame predictions, we adapt the FC layers in C3D as CDC layers to perform temporal upsampling and spatial downsampling operations. Following previous de-convolution works [@V2V; @Long_2015_CVPR; @Long_2016_PAMI], we upsample in time by a factor of 2 in each CDC layer, gradually increasing the temporal length from $L/8$ back to $L$. In Section \[init\], we provided an example of how to adapt $\tt FC6$ as $\tt CDC6 $, performing temporal 1D de-convolution of kernel size 2, stride 2, padding 0. For $\tt CDC6 $ in the CDC network, we construct a CDC filter with 4 convolutional filters instead of 2, and thus its temporal kernel size increases from 2 to 4. We set the corresponding stride to 2 and padding to 1. Now each 4x4 feature map produces 4 output nodes, and every two consecutive feature maps have 2 nodes overlapping in time. Consequently, the temporal length of the input is still upsampled by $\tt CDC6 $ from $L/8$ to $L/4$, but each output node sums contributions from two consecutive input feature maps, allowing temporal dynamics in the input to be taken into account. Likewise, we adapt $\tt FC7$ as $\tt CDC7$, as indicated in Figure \[network\]. Additionally, we retain the ReLU layers and the Dropout layers (with 0.5 dropout ratio) from C3D and attach them to both $\tt CDC6 $ and $\tt CDC7 $. $\tt CDC8 $ corresponds to $\tt FC8$ but cannot be directly adapted from $\tt FC8$ because the classes in $\tt FC8$ and $\tt CDC8 $ are different. Since each channel stands for one class, $\tt CDC8 $ has $K$+1 channels. Finally, the $\tt CDC8 $ output is fed into a frame-wise softmax layer $\tt Softmax$ to produce per-frame scores.
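The layer-by-layer temporal upsampling can be verified with the standard output-length formula for 1D de-convolution, $L_{\rm out} = {\rm stride}\cdot(L_{\rm in}-1) + {\rm kernel} - 2\cdot{\rm padding}$. A sketch, with helper names of our own choosing:

```python
def deconv1d_len(L_in, kernel=4, stride=2, padding=1):
    """Output length of a 1D de-convolution (transposed convolution).
    With kernel 4, stride 2, padding 1: L_out = 2(L_in - 1) + 4 - 2 = 2*L_in."""
    return stride * (L_in - 1) + kernel - 2 * padding

def cdc_output_shape(L, K):
    """Trace the temporal length through CDC6, CDC7, CDC8, each doubling
    time; CDC6 also collapses the 4x4 spatial maps to 1x1, and CDC8 has
    K+1 channels (K actions plus background)."""
    t = L // 8  # temporal length at pool5
    for _ in range(3):  # CDC6 -> CDC7 -> CDC8
        t = deconv1d_len(t)
    return (K + 1, t, 1, 1)
```

For a 32-frame window and the 20 THUMOS action classes this gives (21, 32, 1, 1), matching the stated ($K$+1, $L$, 1, 1) output shape.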
During each mini-batch with $N$ training segments, for the $n$-th segment, the $\tt CDC8 $ output $O_n$ has the shape ($K$+1, $L$, 1, 1). For each frame, performing the conventional softmax operation and computing the softmax loss and gradient are independent of other frames. Corresponding to the $t$-th frame, the $\tt CDC8 $ output ${O_n}\left[ t \right]$ and the $\tt Softmax$ output ${P_n}\left[ t \right]$ are both vectors of $K$+1 values. Note that for the $i$-th class, $P_n^{\left( i \right)}\left[ t \right] = \frac{{{e^{O_n^{\left( i \right)}\left[ t \right]}}}}{{\sum\nolimits_{j = 1}^{K+1} {{e^{O_n^{\left( j \right)}\left[ t \right]}}} }}$. The total loss ${\cal L}$ is defined as: $${\cal L} = \frac{1}{N}\sum\limits_{n = 1}^N {\sum\limits_{t = 1}^L {\left( { - \log \left( {P_n^{\left( {{z_n}} \right)}\left[ t \right]} \right)} \right)} },$$ where $z_n$ stands for the ground truth class label of the $n$-th segment. The total gradient w.r.t. the output of the $i$-th channel/class at the $t$-th frame in $\tt CDC8$ is the summation over all $N$ training segments of: $$\frac{{\partial {\cal L}}}{{\partial O_n^{\left( i \right)}\left[ t \right]}} = \left\{ {\begin{array}{*{20}{c}} {\frac{1}{N} \cdot \left( {P_n^{\left( {{z_n}} \right)}\left[ t \right] - 1} \right)}&{{\rm{if}}\:i = {z_n}}\\ {\frac{1}{N} \cdot P_n^{\left( i \right)}\left[ t \right]}&{{\rm{if}}\:i \ne {z_n}} \end{array}} \right. .$$

Training and prediction
-----------------------

**Training data construction.** In theory, because both the convolutional filter and the CDC filter slide over the input, they can be applied to inputs of arbitrary size. Therefore, our CDC network can operate on videos of variable length. Due to GPU memory limitations, in practice we slide a temporal window of 32 frames without overlap over the video and feed each window individually into the CDC network to obtain dense predictions in time. From the temporal boundary annotations, we know the label of every frame.
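The frame-wise softmax loss and its gradient given above can be implemented and checked numerically as follows. This is a sketch for a single segment (i.e. $N=1$); the max-subtraction for numerical stability is an implementation detail, not part of the equations.

```python
import numpy as np

def frame_softmax_loss_grad(O, z):
    """Frame-wise softmax loss and gradient for one segment (N = 1).
    O: (K+1, L) CDC8 outputs; z: ground-truth class index for the segment.
    Returns (loss, dL/dO), implementing the loss and gradient above."""
    O = O - O.max(axis=0, keepdims=True)              # numerical stability
    P = np.exp(O) / np.exp(O).sum(axis=0, keepdims=True)
    loss = -np.log(P[z]).sum()                        # sum over the L frames
    dO = P.copy()
    dO[z] -= 1.0                                      # P - 1 on the true class
    return loss, dO
```

Since the per-frame terms are independent, the segment gradient is just the per-frame softmax gradient stacked along the time axis, which is what makes the CDC network trainable end-to-end with standard back-propagation.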
Frames in the same window can have different labels. To avoid including too many background frames during training, we only keep windows that have at least one frame belonging to an action. Therefore, given a set of training videos, we obtain a training collection of windows with frame-level labels. **Optimization.** We use stochastic gradient descent to train the CDC network with the aforementioned frame-wise softmax loss. Our implementation is based on Caffe [@caffe] and C3D [@3dcnn]. The learning rate is set to 0.00001 for all layers except the $\tt CDC8 $ layer, where the learning rate is 0.0001, since $\tt CDC8 $ is randomly initialized. Following conventional settings [@3dcnn; @scnn_shou_wang_chang_cvpr16], we set momentum to 0.9 and weight decay to 0.005. C3D [@3dcnn] is trained on Sports-1M [@sports1m] and can be used to directly initialize $\tt conv1a $ to $\tt conv5b $. $\tt CDC6 $ and $\tt CDC7 $ are initialized from $\tt FC6 $ and $\tt FC7 $, respectively, using the strategy described in Section \[init\]. In addition, since $\tt FC8 $ in C3D and $\tt CDC8 $ in the CDC network have different numbers of channels, we randomly initialize $\tt CDC8 $. With such initialization, our CDC network turns out to be very easy to train and converges quickly: 4 training epochs (within half a day) on THUMOS’14. **Fine-grained prediction and precise localization.** \[post\] During testing, after applying the CDC network to the whole video, we can make predictions for every frame of the video. Through thresholding on confidence scores and grouping adjacent frames of the same label, it is possible to cut the video into segments and produce localization results. But this method is not robust to noise, and designing temporal smoothing strategies turns out to be ad hoc and non-trivial. Recently, researchers have developed efficient segment proposal methods [@scnn_shou_wang_chang_cvpr16; @victor_eccv16] to generate a small set of candidate segments with high recall.
Utilizing these proposals for our localization model not only bypasses the challenge of grouping adjacent frames, but also achieves considerable speedup during testing, because we only need to apply the CDC network to the proposal segments instead of the whole video. Since these proposal segments only have coarse boundaries, we propose using the fine-grained predictions from the CDC network to localize precise boundaries. First, to look at a wider interval, we extend each proposal segment’s boundaries on both sides by a percentage $\alpha$ of the original segment length. We set $\alpha$ to 1/8 for all experiments. Then, similar to preparing training segments, we slide temporal windows without overlap over the test videos. We only need to keep test windows that overlap with at least one extended proposal segment. We feed these windows into our CDC network and generate per-frame action class scores. The category of each proposal segment is set to the class with the maximum average confidence score over all frames in the segment. If a proposal segment does not belong to the background class, we keep it and further refine its boundaries. Given the score sequence of the predicted class in the segment, we perform Gaussian kernel density estimation and obtain its mean $\mu$ and standard deviation $\sigma$. Starting from the boundary frame at each side of the extended segment and moving towards its middle, we shrink the temporal boundaries until we reach a frame whose confidence score is no lower than $\mu - \sigma$. Finally, we set the prediction score of the segment to the average confidence score of the predicted class over the frames in the refined segment.
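The refinement procedure above can be sketched as follows. This is our own illustrative code; for simplicity, a Gaussian fit via the sample mean and standard deviation stands in for full kernel density estimation.

```python
import numpy as np

def refine_segment(scores, start, end, alpha=1/8):
    """Refine one proposal's boundaries from per-frame confidence scores of
    its predicted class. `scores`: frame-level confidences over the whole
    video; (start, end): frame indices of the proposal. The segment is
    extended by `alpha` of its length on each side; mu and sigma are
    estimated over the extended segment; each boundary is then shrunk
    inward until a frame scores at least mu - sigma."""
    ext = int(round(alpha * (end - start)))
    s = max(0, start - ext)
    e = min(len(scores), end + ext)
    seg = scores[s:e]
    thresh = seg.mean() - seg.std()
    while s < e - 1 and scores[s] < thresh:
        s += 1
    while e > s + 1 and scores[e - 1] < thresh:
        e -= 1
    # refined boundaries and the segment's prediction score
    return s, e, float(scores[s:e].mean())
```

On a score sequence with low-confidence tails around a high-confidence core, the refined boundaries snap to the core while the extended interval lets the segment grow past an overly tight proposal.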
  methods                          mAP
  -------------------------------- ----------
  Single-frame CNN [@Simonyan15]   34.7
  Two-stream CNN [@Simonyan14b]    36.2
  LSTM [@lrcn2014]                 39.3
  MultiLSTM [@yeung2015every]      41.3
  C3D + LinearInterp               37.0
  Conv & De-conv                   41.7
  CDC (fix 3D ConvNets)            37.4
  **CDC**                          **44.4**

  : Per-frame labeling mAP on THUMOS’14.[]{data-label="map1"}

Experiments {#exp}
===========

Per-frame labeling
------------------

We first demonstrate the effectiveness of our model in predicting accurate labels for every frame. Note that this task can accept an input of multiple frames to take into account temporal information. We denote our model as **CDC**. **THUMOS’14 [@THUMOS14].** The temporal action localization task in THUMOS Challenge 2014 involves 20 actions. We use 2,755 trimmed training videos and 1,010 untrimmed validation videos (3,007 action instances) to train our model. For testing, we use all 213 test videos (3,358 action instances) which are not entirely background videos. **Evaluation metrics.** \[eval1\] Following conventional metrics [@yeung2015every], we treat the per-frame labeling task as a retrieval problem. For each action class, we rank all frames in the test set by their confidence scores for that class and compute Average Precision (AP). Then we average over all classes to obtain mean AP (mAP).
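The retrieval-style per-frame evaluation just described can be sketched as follows (hypothetical helper functions of our own, not the official THUMOS evaluation script):

```python
import numpy as np

def average_precision(conf, labels):
    """AP for one action class in the frame-retrieval setting: rank all
    test frames by confidence for that class, then average the precision
    values measured at each positive frame."""
    conf = np.asarray(conf, dtype=float)
    labels = np.asarray(labels)[np.argsort(-conf)]   # sort labels by rank
    hits = np.cumsum(labels)
    ranks = np.arange(1, len(labels) + 1)
    prec_at_pos = hits[labels == 1] / ranks[labels == 1]
    return prec_at_pos.mean() if labels.sum() else 0.0

def frame_map(conf_per_class, labels_per_class):
    """mAP: the mean of per-class APs over all action classes."""
    return float(np.mean([average_precision(c, l)
                          for c, l in zip(conf_per_class, labels_per_class)]))
```

For example, a ranking that places a negative frame between two positives yields AP $=(1 + 2/3)/2 = 5/6$ for that class.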
**Comparisons.** In Table \[map1\], we first compare our CDC network (denoted by CDC) with several state-of-the-art models (results are quoted from [@yeung2015every]): (1) Single-frame CNN: the frame-level 16-layer VGG CNN model [@Simonyan15]; (2) Two-stream CNN: the frame-level two-stream CNN model proposed in [@Simonyan14b], which has one stream for pixels and one stream for optical flow; (3) LSTM: the basic per-frame labeling LSTM model with 512 hidden units [@lrcn2014] on top of the VGG CNN $ \tt FC7$ layer; (4) MultiLSTM: an LSTM model developed by Yeung et al. [@yeung2015every] that processes multiple input frames together with a temporal attention mechanism and outputs predictions for multiple frames. Single-frame CNN only takes appearance information into account. Two-stream CNN models appearance and motion information separately. LSTM-based models can capture temporal dependencies across frames but do not model motion explicitly. Our **CDC** model is based on 3D convolutional layers and CDC layers, which operate on the spatial and temporal dimensions simultaneously, and achieves the best performance. In addition, we compare CDC with other C3D based approaches that use different upsampling methods. (1) C3D + LinearInterp: we train a segment-level C3D using the same set of training segments, whose segment-level labels are determined by majority vote. During testing we perform linear interpolation to upsample segment-level predictions to frame-level predictions. (2) Conv & De-conv: $\tt CDC7$ and $\tt CDC8$ in our CDC network keep the spatial data shape unchanged and can therefore also be regarded as de-convolutional layers. For $\tt CDC6$, we replace it with a convolutional layer $\tt conv6$ and a separate de-convolutional layer $\tt deconv6$ as shown in Figure \[filter\] (b). The CDC model outperforms these baselines because the CDC filter can simultaneously model high-level semantics and temporal action dynamics.
We also evaluate the CDC network with fixed weights in 3D ConvNets and only fine-tune CDC layers, resulting in a minor performance drop. This implies that it is helpful to train CDC networks in an end-to-end manner so that the 3D ConvNets part can be trained to summarize more discriminative information for CDC layers to infer more accurate temporal dynamics.

  IoU threshold                               0.3        0.4        0.5        0.6        0.7
  ------------------------------------------- ---------- ---------- ---------- ---------- ---------
  Karaman [@th3]                              0.5        0.3        0.2        0.2        0.1
  Wang [@th2]                                 14.6       12.1       8.5        4.7        1.5
  Heilbron [@fast_temporal_activity_cvpr16]   -          -          13.5       -          -
  Escorcia [@victor_eccv16]                   -          -          13.9       -          -
  Oneata [@th1]                               28.8       21.8       15.0       8.5        3.2
  Richard and Gall [@Richard_2016_CVPR]       30.0       23.2       15.2       -          -
  Yeung [@stanford_cvpr16]                    36.0       26.4       17.1       -          -
  Yuan [@yuan_cvpr16]                         33.6       26.1       18.8       -          -
  S-CNN [@scnn_shou_wang_chang_cvpr16]        36.3       28.7       19.0       10.3       5.3
  C3D + LinearInterp                          36.0       26.4       19.6       11.1       6.6
  Conv & De-conv                              38.6       28.2       22.4       12.0       7.5
  CDC (fix 3D ConvNets)                       36.9       26.2       20.4       11.3       6.8
  **CDC**                                     **40.1**   **29.4**   **23.3**   **13.1**   **7.9**

  : Temporal action localization mAP on THUMOS’14 as the overlap IoU threshold used in evaluation varies from 0.3 to 0.7. - indicates that results are unavailable in the corresponding papers.[]{data-label="map2"}

Temporal action localization {#loc}
----------------------------

Given per-frame labeling results from the CDC network, we generate proposals, determine class category, and predict precise boundaries following Section \[post\]. Our approach is applicable to any segment proposal method. Here we conduct experiments on THUMOS’14, and thus employ the publicly available proposals generated by the S-CNN proposal network [@scnn_shou_wang_chang_cvpr16], which achieves high recall on THUMOS’14. Finally, we follow [@yeung2015every; @scnn_shou_wang_chang_cvpr16] to perform standard post-processing steps such as non-maximum suppression.
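The non-maximum suppression step mentioned above can be sketched as follows (standard greedy temporal NMS; not necessarily the exact implementation used in our experiments):

```python
def temporal_iou(s1, e1, s2, e2):
    """Intersection-over-union of two temporal intervals [s, e]."""
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0

def temporal_nms(segments, iou_threshold=0.5):
    """Greedy NMS on (start, end, score) tuples; returns kept indices.

    Segments are visited in decreasing score order; a segment is kept
    only if it overlaps every already-kept segment by less than the
    IoU threshold.
    """
    order = sorted(range(len(segments)), key=lambda i: -segments[i][2])
    keep = []
    for i in order:
        s, e, _ = segments[i]
        if all(temporal_iou(s, e, *segments[j][:2]) < iou_threshold
               for j in keep):
            keep.append(i)
    return keep
```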
**Evaluation metrics.** \[eval2\] Localization performance is also evaluated by mAP. Each item in the rank list is a predicted segment. The prediction is correct when it has the correct category and its temporal overlap IoU with the ground truth is larger than the threshold. Redundant detections for the same ground truth instance are not allowed.

**Comparisons.** As shown in Table \[map2\], **CDC** achieves much better results than all the other state-of-the-art methods, which have been reviewed in Section \[soa\]. Compared to the proposed CDC model, the typical approach of extracting a set of features to train SVM classifiers and then applying the trained classifiers on sliding windows or segment proposals (Karaman [@th3], Wang [@th2], Oneata [@th1], Escorcia [@victor_eccv16]) does not directly address the temporal localization problem. Systems encoding iDTF with FV (Heilbron [@fast_temporal_activity_cvpr16], Richard and Gall [@Richard_2016_CVPR]) cannot learn spatio-temporal patterns directly from raw videos to make predictions. RNN/LSTM based methods (Yeung [@stanford_cvpr16], Yuan [@yuan_cvpr16]) are unable to explicitly capture motion information beyond temporal dependencies. S-CNN can effectively capture spatio-temporal patterns from raw videos but lacks the ability to adjust boundaries of proposal candidates. With the proposed CDC filter, the CDC network can determine confidence scores at a fine granularity, beyond segment-level prediction, and hence precisely localize temporal boundaries. In addition, we employ per-frame predictions of other methods listed in Table \[map1\] (C3D + LinearInterp, Conv & De-conv, CDC with fixed 3D ConvNets) to perform temporal localization based on S-CNN proposal segments.
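A minimal sketch of this evaluation protocol for a single class (the greedy one-to-one matching shown here is a common convention and may differ in details from the official toolkit):

```python
def localization_ap(predictions, ground_truths, iou_thresh):
    """AP for one class. predictions: (start, end, score) tuples;
    ground_truths: (start, end) tuples. A prediction is a true positive
    if it overlaps a still-unmatched ground truth with IoU above the
    threshold; redundant detections of the same instance count as
    false positives."""
    preds = sorted(predictions, key=lambda p: -p[2])
    matched = [False] * len(ground_truths)
    tp, precisions = 0, []
    for rank, (s, e, _) in enumerate(preds, start=1):
        best, best_iou = -1, iou_thresh
        for g, (gs, ge) in enumerate(ground_truths):
            inter = max(0.0, min(e, ge) - max(s, gs))
            union = (e - s) + (ge - gs) - inter
            iou = inter / union if union > 0 else 0.0
            if not matched[g] and iou > best_iou:
                best, best_iou = g, iou
        if best >= 0:
            matched[best] = True
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(ground_truths) if ground_truths else 0.0
```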
As shown in Table \[map2\], the performance of the CDC network is still better, because more accurate predictions at the same temporal granularity can be used to predict a more accurate label and more precise boundaries for the same input proposal segment. In Figure \[refinement\], we illustrate how our model refines boundaries from a segment proposal to precisely localize action instances in time.

![image](refinement-10-crop.pdf){width="96.00000%"}

Discussions
-----------

**The necessity of predicting at a fine granularity in time.** In Figure \[granularity\], we compare CDC networks predicting action scores at different temporal granularities. When the temporal granularity increases, mAP increases accordingly. This demonstrates the importance of predicting at a fine granularity for achieving precise localization.

![mAP gradually increases when the temporal granularity of CDC network prediction increases from x1 (one label for every 8 frames) to x8 (one label per frame). Each point corresponds to **x total upscaling factor (x $\tt CDC6$ upscaling factor x $\tt CDC7$ upscaling factor x $\tt CDC8$ upscaling factor)** in time. We conduct the evaluation on THUMOS’14 with IoU 0.5.[]{data-label="granularity"}](2-crop-2.pdf){width="43.00000%"}

**Efficiency analysis.** The CDC network is compact and demands little storage, because it can be trained from raw videos directly to make fine-grained predictions in an end-to-end manner without the need to cache intermediate features. A typical CDC network such as the example in Figure \[network\] only requires around 1GB of storage. Our approach is also fast. Compared with segment-level prediction methods such as the S-CNN localization network [@scnn_shou_wang_chang_cvpr16], CDC has to perform more operations because it makes predictions at every frame. Therefore, when the proposal segment is long, CDC is less efficient, trading some speed for more accurate boundaries.
But when proposal segments are short, they are usually densely overlapped, and segment-level methods have to process a large number of segments one by one. A CDC network, in contrast, only needs to process each frame once, and thus avoids redundant computations. On an NVIDIA Titan X GPU with 12GB memory, the speed of a CDC network is around 500 Frames Per Second (FPS), which means it can process a 20s long video clip of 25 FPS within one second.

  IoU threshold   0.5    0.75   0.95   Average
  --------------- ------ ------ ------ ---------
  before          45.1   4.1    0.0    16.4
  after           45.3   26.0   0.2    23.8

  : Temporal localization mAP on ActivityNet Challenge 2016 [@activitynet] of Wang and Tao [@AN1] before and after the refinement step using our CDC network. We follow the official metrics used in [@activitynet]: mAP at IoU thresholds 0.5, 0.75 and 0.95, and the average mAP.[]{data-label="map3"}

**Temporal activity localization.** Furthermore, we found that our approach is also useful for localizing activities of high-level semantics and complex components. We conduct experiments on the ActivityNet Challenge 2016 dataset [@caba2015activitynet; @activitynet], which involves 200 activities, and contains around 10K training videos (15K instances) and 5K validation videos (7.6K instances). Each video has an average of 1.65 instances with temporal annotations. We train on the training videos and test on the validation videos. Since no high-quality activity proposal results exist, we apply the trained CDC network to the results of the first-place winner [@AN1] in this Challenge to localize more precise boundaries. As shown in Table \[map3\], they achieve high mAP when the IoU in evaluation is set to 0.5, but mAP drops rapidly when the evaluation IoU increases. After using the per-frame predictions of our CDC network to refine the temporal boundaries of their predicted segments, we gain significant improvements, particularly when the evaluation IoU is high (0.75).
This means that after the refinement, these segments have more precise boundaries and larger overlap with ground truth instances.

Conclusion and future works
===========================

In this paper, we propose a novel CDC filter to simultaneously perform spatial downsampling (for spatio-temporal semantic abstraction) and temporal upsampling (for precise temporal localization), and design a CDC network to predict actions at the frame level. Our model significantly outperforms all other methods both in the per-frame labeling task and the temporal action localization task. Supplementary descriptions of the implementation details and additional experimental results are available in [@cdc_zheng_arxiv].

Acknowledgment
==============

The project was supported by Mitsubishi Electric, and also by Award No. 2015-R2-CX-K025, awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect those of the Department of Justice. The Tesla K40 used for this research was donated by the NVIDIA Corporation. We thank the Wei Family Private Foundation for their support for Zheng Shou, and anonymous reviewers for their valuable comments.

![image](RefGradient3-crop.pdf){width="85.00000%"}

Appendix
========

Additional justification of the motivation
------------------------------------------

As mentioned in the paper, traditional approaches use segment-level detection, in which segment proposals are analyzed to predict the action class in each segment. Such approaches are limited by the fixed segment lengths and boundary locations, and thus are inadequate for finding precise action boundaries. Here we propose a novel model to first predict actions at a fine granularity and then use such fine-grained score sequences to accurately detect the action boundaries.
The fine-grained score sequence also offers a natural way to determine the score threshold needed for refining boundaries at the frame level. Also, though not emphasized in the paper, the fine-grained score sequence can be used to select precise keyframes or discover sub-actions within an action. Following the reviewer’s suggestion, we also computed the frame-to-frame score gradient using the frame-level detection results. As shown in Figure \[gradient\], the frame-level gradient peaks correlate well with the action boundaries, confirming the intuition of using fine-grained detection results. Also, as shown in Figure \[granularity\] in the paper, when the temporal granularity increases, localization performance increases accordingly. Finally, our motivation is quantitatively justified by the good results on two standard benchmarks as shown in Section \[exp\].

Additional implementation details
---------------------------------

**Temporal boundary refinement.** Here, we provide details and pseudo-code for the temporal boundary refinement presented in Section \[post\]. Algorithm \[alg-refine\] is used to refine the boundaries of each proposal segment. Also, our source code can be found at <https://bitbucket.org/columbiadvmm/cdc>.

**Input**: A proposal segment of starting frame index $t_s$ and ending frame index $t_e$, the percentage parameter of segment length expansion $\alpha$, the first frame index $v_s$ and the last frame index $v_e$ of the video containing the proposal segment, the total number of categories $K$

**Output**: the refined starting frame index ${t_s}'$ and ending frame index ${t_e}'$, the predicted category $c$, the predicted confidence score $s$

1\. // Extend boundaries on both sides by the percentage of the original segment length

2\. ${t_s}' = \max \left( {{v_s},{t_s} - \alpha \cdot \left( {{t_e} - {t_s} + 1} \right)} \right)$

3\. ${t_e}'= \min \left( {{v_e},{t_e} + \alpha \cdot \left( {{t_e} - {t_s} + 1} \right)} \right)$

4\.
// Feed frames into the CDC network to produce the confidence score matrix ${\bf{P}} \in {\Re^{\left( {{t_e}' - {t_s}' + 1} \right) \times K}}$

5\. $\bf{P} =$ **CDC**(frames from ${t_s}'$ to ${t_e}'$)

6\. assign $c$ as the category with the maximum average confidence score over all frames from ${t_s}'$ to ${t_e}'$

7\. // Estimate the mean $\mu$ and the standard deviation $\sigma$

8\. $\mu ,\sigma =$ Gaussian Kernel Density Estimation$\left( {{\bf{P}}\left[ {:,c} \right]} \right)$

9\. $\beta = \mu - \sigma $ // Compute the score threshold

10\. // Refine the starting time

11\. **for** $i_s = 1,2, \ldots , \left( {t_e}'-{t_s}'+1 \right) $ **do**

12\. **if** ${\bf{P}}\left[ {i_s,c} \right] \ge \beta$ **then**

13\. **break**

14\. **end if**

15\. **end for**

16\. // Refine the ending time

17\. **for** $i_e = \left( {t_e}'-{t_s}'+1 \right) ,\ldots , 2,1 $ **do**

18\. **if** ${\bf{P}}\left[ {i_e,c} \right] \ge \beta$ **then**

19\. **break**

20\. **end if**

21\. **end for**

22\. ${t_e}' = {t_s}' + {i_e} - 1$

23\. ${t_s}' = {t_s}' + {i_s} - 1$

24\. $s = \frac{{\sum\limits_{i = {i_s}}^{{i_e}} {{\bf{P}}\left[ {i,c} \right]} }}{{{t_e} - {t_s} + 1}}$ // Compute the average score

25\. **return** ${t_s}'$, ${t_e}'$, $c$, $s$

**Discussion about the window length used when creating mini-batches.** During mini-batch construction, ideally we would like to set the window length as long as possible, so that when the CDC network processes each window, it can take into account more temporal contextual information. However, due to the limited GPU memory, if the window length is too long, we have to make the number of training samples in each mini-batch very small, which makes the optimization unstable and prevents the training procedure from converging well. Also, a long window usually contains many more background frames than action frames, so we would need to further handle the data imbalance issue.
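The core of Algorithm \[alg-refine\] can be sketched in Python as follows. As a simple stand-in for the Gaussian kernel density estimate of line 8, this sketch uses the empirical mean and standard deviation of the scores (an assumption on our part), and it returns the average score over the refined span:

```python
import numpy as np

def refine_segment(frame_scores):
    """Refine the boundaries of one (already extended) proposal.

    frame_scores: per-frame confidences P[:, c] for the chosen class c,
    covering the extended proposal. Returns (start, end) frame offsets
    within the extended span and the average score over the kept frames.
    The threshold beta = mu - sigma uses the empirical mean and standard
    deviation as a simple stand-in for the KDE estimate in the paper.
    """
    p = np.asarray(frame_scores, dtype=float)
    beta = p.mean() - p.std()                  # score threshold
    above = np.nonzero(p >= beta)[0]
    start, end = int(above[0]), int(above[-1])  # shrink from both ends
    return start, end, float(p[start:end + 1].mean())
```

For a score profile such as `[0.1, 0.7, 0.8, 0.9, 0.8, 0.1]`, the low-confidence frames at both ends fall below the threshold and are trimmed off.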
During experiments, we conducted a grid search over window lengths in {16, 32, 64, 128, 256, 512} and empirically found that setting the window length to 32 frames is a good trade-off on a single NVIDIA Titan X GPU with 12GB memory: (1) we can include sufficient temporal contextual information to achieve good accuracy and (2) we can set the batch size to 8 to guarantee stable optimization.

![image](TH-3-crop-1.pdf){width="85.00000%"}

![image](TH-3-crop-2.pdf){width="85.00000%"}

Additional experiments
----------------------

**Sensitivity analysis.** When we extend the segment proposal, the percentage $\alpha$ of the original proposal length should not be too small, so that our model can consider a wider interval, and not too large, so as not to include too many irrelevant frames. As shown in Table \[map4\], the system performs stably when $\alpha$ varies within a reasonable range.

  $\alpha$   1/8    1/7    1/6    1/5    1/4
  ---------- ------ ------ ------ ------ ------
  mAP        23.3   23.2   23.1   23.1   23.6

  : mAP on THUMOS’14 with the evaluation IoU set to 0.5 when we vary the extension percentage $\alpha$ of the original proposal length from 1/8 to 1/4.[]{data-label="map4"}

**Additional results on ActivityNet.** We expand the comparisons on the ActivityNet validation set to include results provided by additional top performers \[51, 52\] in ActivityNet Challenge 2016. As shown in Table \[map3\], our method CDC outperforms all other methods. As shown in Table \[map3test\], CDC also performs the best on the ActivityNet test set.

  IoU threshold               0.5    0.75   0.95   Average
  --------------------------- ------ ------ ------ ---------
  Singh and Cuzzolin [@AN2]   22.7   10.8   0.3    11.3
  Singh [@AN3]                26.0   15.2   2.6    14.6
  Wang and Tao [@AN1]         45.1   4.1    0.0    16.4
  CDC                         45.3   26.0   0.2    23.8

  : Additional baseline results of temporal localization mAP on ActivityNet Challenge 2016 [@activitynet] validation set.
The baseline results are kindly provided by the authors of [@AN2; @AN3; @AN1].[]{data-label="map3"}

  IoU threshold               0.5    0.75   0.95   Average
  --------------------------- ------ ------ ------ ---------
  Singh and Cuzzolin [@AN2]   36.4   11.1   0.1    17.8
  Singh [@AN3]                28.7   17.8   2.9    17.7
  Wang and Tao [@AN1]         42.5   2.9    0.1    14.6
  CDC (train)                 43.1   25.6   0.2    22.9
  CDC (train+val)             43.0   25.7   0.2    22.9

  : Comparisons of temporal localization mAP on the ActivityNet Challenge 2016 [@activitynet] test set. The baseline results are quoted from the ActivityNet Challenge 2016 leaderboard [@activitynet]. CDC (train) trains the CDC model on the training set only and CDC (train+val) uses the training set and the validation set together to train the CDC model.[]{data-label="map3test"}

**Discussions about other proposal methods.** As shown in Table \[map2s\], we evaluate the temporal localization performance of CDC based on other proposals on THUMOS’14.

  IoU threshold                                         0.3    0.4    0.5    0.6    0.7
  ----------------------------------------------------- ------ ------ ------ ------ -----
  S-CNN [@scnn_shou_wang_chang_cvpr16] w/o CDC          36.3   28.7   19.0   10.3   5.3
  ResC3D+S-CNN [@scnn_shou_wang_chang_cvpr16] w/o CDC   40.6   32.6   22.5   12.3   6.4
  S-CNN [@scnn_shou_wang_chang_cvpr16]                  40.1   29.4   23.3   13.1   7.9
  ResC3D+S-CNN [@scnn_shou_wang_chang_cvpr16]           41.3   30.7   24.7   14.3   8.8

  : Temporal localization mAP on THUMOS’14 when applying CDC on top of different proposal methods.[]{data-label="map2s"}

On ActivityNet, the proposals currently used in Section \[exp\], taken from [@AN1], are a reasonable choice - their recall is 0.681 with 56K proposals when evaluated at IoU=0.5 on the validation set. We have also considered using other state-of-the-art proposal methods: (1) The ActivityNet challenge provides proposals computed by [@fast_temporal_activity_cvpr16], but they have a low recall of 0.527 on the validation set with 441K proposals, which contain a lot of false alarms. (2) DAPs [@victor_eccv16] advocates training the proposal model on THUMOS and then generalizing it to ActivityNet.
Due to the lack of training data from ActivityNet, DAPs has quite a low recall of around 0.23 and is not a reasonable proposal candidate. (3) S-CNN [@scnn_shou_wang_chang_cvpr16] is designed for instance-level detection. However, ground truth annotations in ActivityNet do not distinguish consecutive instances - one ground truth interval can contain multiple activity instances. Also, for activities of high-level semantics, it is ambiguous to define what constitutes an individual activity instance. Therefore, S-CNN does not suit ActivityNet.

**Additional discussions about speed.** To avoid confusion, we would like to emphasize that the CDC network is end-to-end, while the task of temporal localization is not end-to-end, due to the need of combining with proposals and performing post-processing. Throughout the paper, the reported speed is computed for the CDC network itself. Following C3D [@3dcnn], each input frame has spatial resolution $128 \times 171$ and is cropped to $112 \times 112$ as network input (random cropping during training and center cropping during testing). As indicated in Figure \[network\], each input video of $L$ frames has the shape of (3, $L$, 112, 112). As mentioned above, on a single NVIDIA Titan X GPU with 12GB memory, the speed of a CDC network is around 500 Frames Per Second (FPS), which means it can process a 20s long video clip of 25 FPS within one second.

**Additional visualization examples.** As supplementary material to Figure \[refinement\], we provide additional examples showing how the Convolutional-De-Convolutional (CDC) model refines the boundaries of proposal segments to achieve precise temporal action localization on THUMOS’14 [@THUMOS14]. As shown in Figure \[refinement1\] and Figure \[refinement2\], the combination of the segment proposal and the CDC frame-level score prediction is powerful.
The segment proposal provides candidates with coarse boundaries, which helps handle noisy outliers in intervals where the scores dip, such as shown in Figure \[refinement2\]. The proposed CDC model provides fine-grained predictions at the frame level, which help refine the segment boundaries for precise localization.
---
abstract: 'For the retrieval dynamics of sparsely coded attractor associative memory models with synaptic noise the inclusion of a macroscopic time-dependent threshold is studied. It is shown that if the threshold is chosen appropriately as a function of the cross-talk noise and of the activity of the memorized patterns, adapting itself automatically in the course of the time evolution, an autonomous functioning of the model is guaranteed. This self-control mechanism considerably improves the quality of the fixed-point retrieval dynamics, in particular the storage capacity, the basins of attraction and the mutual information content.'
author:
-
title: 'Self-control Dynamics for Sparsely Coded Networks with Synaptic Noise'
---

Introduction
============

Efficient neural network modelling requires an autonomous functioning independent from external constraints or control mechanisms. For fixed-point retrieval by an attractor associative memory model this requirement is mainly expressed by the robustness of its learning and retrieval capabilities against external noise, against malfunctioning of some of the connections and so on. Indeed, a model which embodies this robustness is able to perform as a content-addressable memory having large basins of attraction for the memorized patterns. Intuitively, one can imagine that these basins of attraction become smaller when the storage capacity gets larger. This might occur, e.g., in sparsely coded models (Okada, 1996 and references cited therein). Therefore, the necessity of controlling the activity of the neurons has been emphasized, such that it stays equal to the activity of the memorized patterns during the recall process. This has led to several discussions imposing external constraints on the dynamics. However, the enforcement of such a constraint at every time step destroys part of the autonomous functioning of the network.
To solve this problem, quite recently, a self-control mechanism has been introduced in the dynamics through the introduction of a time-dependent threshold in the transfer function (Dominguez & Bollé, 1998; Bollé, Dominguez & Amari 2000). This threshold is determined as a function of both the cross-talk noise and the activity of the memorized patterns in the network and adapts itself in the course of the time evolution. Up to now only neural network models without synaptic noise have been considered in this context. The purpose of the present work is precisely to generalise this self-control mechanism when synaptic noise is allowed.

The model
=========

Let us consider a network of $N$ binary neurons. At a discrete time step $t$ the neurons $\sigma_{i,t} \in \{0,1\}, \,\,\, i=1, \ldots, N$ are updated synchronously according to the rule $$\sigma_{i,t+1}= F_{\theta_{t}, \beta}(h_{i,t}), \,\,\, h_{i,t}= \sum_{j(\neq i)}^{N}J_{ij}(\sigma_{j,t}-a)\, , \label{2.si}$$ where $J_{ij}$ are the synaptic couplings, $a$ is the activity of the memorized patterns and $h_{i,t}$ is usually called the “local field” of neuron $i$ at time $t$. In general, the transfer function $F_{\theta_{t}, \beta}$ can be a monotonic function with $\theta_{t}$ a time-dependent threshold. Later on it will be chosen as $$F_{\theta_{t},\beta}(x)=\frac{1}{2}[1+ \tanh(\beta(x-\theta_t))]\, . \label{transfer}$$ The inverse “temperature” $\beta=1/T$ controls the thermal fluctuations, which are a measure for the synaptic noise (Hertz et al., 1991). In the sequel, for theoretical simplicity in the methods used, the number of neurons $N$ will be taken to be sufficiently large. The synaptic couplings $J_{ij}$ themselves are determined by the covariance rule $$J_{ij}= \frac{C_{ij}}{C{\tilde a}} \sum_{\mu=1}^{p} (\xi^{\mu}_{i}-a)(\xi^{\mu}_{j}-a), \quad {\tilde a}\equiv a(1-a)\,.
\label{2.Ji}$$ The memorized patterns $\xi^{\mu}_{i} \in\{0,1\}, \,\,\, \mu=1, \ldots, p$ are independent identically distributed random variables (iidrv) with respect to $i$ and $\mu$ chosen according to the probability distribution $$p(\xi^{\mu}_{i})= a\delta(\xi^{\mu}_{i}-1) +(1-a)\delta(\xi^{\mu}_{i}). \label{2.px}$$ The coefficients $C_{ij}\in\{0,1\}$ are iidrv with probability $$\begin{aligned} Pr\{C_{ij}=d\}=[1-({C}/{N})] \delta_{d,0} + ({C}/{N}) \delta_{d,1} \nonumber \\ Pr\{C_{ij}=C_{ji}\}=(C/N)^2,\quad ({C}/{N})\ll 1, \quad C>0. \end{aligned}$$ This introduces the so-called extremely diluted asymmetric architecture with $C$ measuring the average connectivity of the network (Derrida et al., 1987). At this point we remark that the couplings (\[2.Ji\]) are of infinite range (each neuron interacts with infinitely many others) such that our model allows a so-called mean-field theory approximation. This essentially means that we focus on the dynamics of a single neuron while replacing all the other neurons by an average background local field. In other words, no fluctuations of the other neurons are taken into account, not even in response to changing the state of the chosen neuron. In our case this approximation becomes exact because, crudely speaking, $h_{i,t}$ is the sum of very many terms and a central limit theorem can be applied (Hertz et al., 1991). It is standard knowledge by now that synchronous mean-field theory dynamics can be solved exactly for these diluted architectures (e.g., Bollé, 2004). Hence, the big advantage is that this will allow us to determine the precise effects from self-control in an exact way. 
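The pattern statistics of Eq. (\[2.px\]) and the diluted covariance rule of Eq. (\[2.Ji\]) can be sketched numerically as follows (the sizes $N$, $p$, $a$ and $C$ are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

N, p, a, C = 500, 10, 0.1, 50          # illustrative sizes

# Memorized patterns: xi = 1 with probability a (Eq. 2.px)
xi = (rng.random((p, N)) < a).astype(float)

# Extremely diluted asymmetric connectivity: C_ij = 1 with probability C/N
mask = (rng.random((N, N)) < C / N).astype(float)
np.fill_diagonal(mask, 0.0)            # no self-couplings

# Covariance rule (Eq. 2.Ji):
# J_ij = C_ij / (C a(1-a)) * sum_mu (xi_i - a)(xi_j - a)
centered = xi - a
J = mask * (centered.T @ centered) / (C * a * (1 - a))
```

Note that `mask` is sampled independently for $(i,j)$ and $(j,i)$, so the resulting couplings are asymmetric, as required for the exact solvability of the synchronous dynamics.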
We recall that the relevant parameters describing the solution of this dynamics are the retrieval overlap, $m^{\mu}_t$, between the memorized pattern, $\xi^{\mu}_i$, and the microscopic network state, $\sigma_{i,t}$, and the neural activity, $q_t$, given by, respectively $$m^{\mu}_{t}\equiv \frac{1}{Na} \sum_{i}\xi^{\mu}_{i}\sigma_{i,t}\, , \quad q_{t}\equiv \frac{1}{N}\sum_{i}\sigma_{i,t}\, . \label{parmq}$$ We remark that the $m^{\mu}_t$ are normalized parameters within the interval $[\,0,1]$ which attain the maximal value $1$ whenever the model succeeds in a perfect recall, i.e., $\sigma_{i,t}= \xi^{\mu}_{i}$ for all $i$.

In order to measure the retrieval quality of the recall process, we use the mutual information function (Bollé, Dominguez & Amari, 2000; Nadal, Brunel & Parga, 1998; Schultz & Treves, 1998 and references therein). In general, it measures the average amount of information that can be received by the user by observing the signal at the output of a channel (Blahut, 1990; Shannon, 1948). For the recall process of memorized patterns that we are discussing here, at each time step the process can be regarded as a channel with input $\xi_i^\mu$ and output $\sigma_{i,t}$ such that this mutual information function can be defined as (forgetting about the pattern index $\mu$ and the time index $t$) $$\begin{aligned} &&I(\sigma_i;\xi_i)=S(\sigma_i)- \langle S(\sigma_i|\xi_i)\rangle_{\xi_i}; \label{3.Is} \\ &&S(\sigma_i) \equiv -\sum_{\sigma_i}p(\sigma_i)\ln[p(\sigma_i)], \label{3.Ss} \\ &&S(\sigma_i|\xi_i) \equiv -\sum_{\sigma_i}p(\sigma_i|\xi_i)\ln[p(\sigma_i|\xi_i)]. \label{3.Sx}\end{aligned}$$ Here $S(\sigma_i)$ and $S(\sigma_i|\xi_i)$ are the entropy and the conditional entropy of the output, respectively. These information entropies are determined by the probability distributions of the output. The term $\langle S(\sigma_i|\xi_i)\rangle_{\xi_i}$ is also called the equivocation term in the recall process.
The quantity $p(\sigma_i)$ denotes the probability distribution for the neurons at time $t$, while $p(\sigma_i|\xi_i)$ indicates the conditional probability that the $i$-th neuron is in a state $\sigma_{i}$ at time $t$, given that the $i$-th pixel of the memorized pattern that is being retrieved is $\xi_{i}$. Hereby, we have assumed that the conditional probability of all the neurons factorizes, i.e., $p(\{\sigma_i\}|\{\xi_i\})=\prod_i p(\sigma_i|\xi_i)$, which is a consequence of the mean-field theory character of our model explained above. We remark that a similar factorization has also been used in Schwenker et al. (1996).

The calculation of the different terms in the expression (\[3.Is\]) proceeds as follows. Formally writing $\langle O \rangle \equiv \langle \langle O \rangle_{\sigma|\xi} \rangle_{\xi}= \sum_{\xi} p(\xi) \sum_{\sigma} p(\sigma|\xi) O $ for an arbitrary quantity $O$, the conditional probability can be obtained in a rather straightforward way by using the complete knowledge about the system: $\langle \xi \rangle=a, \, \langle \sigma \rangle=q, \, \langle \sigma \xi \rangle=am, \, \langle 1 \rangle=1$. The result reads (we forget about the index $i$) $$\begin{aligned} p(\sigma|\xi)&=& [\gamma_{0}+(m-\gamma_{0})\xi]\,\delta(\sigma-1) \nonumber\\ &+& [1-\gamma_{0}-(m-\gamma_{0})\xi]\,\delta(\sigma) , \nonumber\\ \gamma_{0}&=& \frac{q-am}{1-a} \label{3.ps}\end{aligned}$$ One can simply verify that this satisfies the averages $$\begin{aligned} m=\frac{1}{a}\langle\langle \sigma \xi \rangle_{\sigma|\xi}\rangle_{\xi} \qquad q=\langle\langle\sigma\rangle_{\sigma|\xi}\rangle_{\xi} \label{3.ms}\end{aligned}$$ and those are precisely equal, for large $N$, to the parameters $m$ and $q$ mentioned above (Eq. (\[parmq\])). Using the probability distribution of the patterns (Eq.(\[2.px\])), we furthermore obtain $$p(\sigma)\equiv\sum_{\xi}p(\xi)p(\sigma|\xi)= q\delta(\sigma-1)+(1-q)\delta(\sigma).
\label{3.px}$$ Hence the expressions for the entropies defined above become $$\begin{aligned} &&S(\sigma)= -q\ln q - (1-q)\ln(1-q),\,\, \\ && \langle S(\sigma|\xi)\rangle_{\xi}= -a[m\ln(m)+ (1-m)\ln(1-m)] \nonumber\\ && \hspace*{0.7cm} -(1-a)[ \gamma_0 \ln \gamma_0 + (1-\gamma_0)\ln(1-\gamma_0)]. \label{3.Hs}\end{aligned}$$ Recalling eq. (\[3.Is\]) this completes the calculation of the mutual information content of the present model.

Self-control dynamics
=====================

It is standard knowledge (e.g., Derrida et al., 1987; Bollé, 2004) that the synchronous dynamics for diluted architectures can be solved exactly following the method based upon a signal-to-noise analysis of the local field (\[2.si\]) (e.g., Amari, 1977; Amari & Maginu, 1988; Okada, 1996; Bollé, 2004 and references therein). Without loss of generality we focus on the recall of one pattern, say $\mu=1$, meaning that only $m^1_{t}$ is macroscopic, i.e., of order $1$, and the rest of the patterns cause a cross-talk noise at each time step of the dynamics.
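The mutual information of Eqs. (\[3.Is\])-(\[3.Hs\]) depends only on the macroscopic parameters $(m, q, a)$ and can be evaluated directly; a minimal sketch:

```python
import numpy as np

def xlogx(x):
    """x ln x with the convention 0 ln 0 = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x * np.log(np.clip(x, 1e-300, None)), 0.0)

def binary_entropy(x):
    return float(-(xlogx(x) + xlogx(1.0 - x)))

def mutual_information(m, q, a):
    """I(sigma; xi) from overlap m, activity q and pattern activity a."""
    gamma0 = (q - a * m) / (1.0 - a)              # Eq. (3.ps)
    s_out = binary_entropy(q)                      # S(sigma)
    s_cond = (a * binary_entropy(m)
              + (1 - a) * binary_entropy(gamma0))  # equivocation term
    return s_out - s_cond
```

As a sanity check, perfect recall ($m=1$, $q=a$) gives $I = S(\sigma)$, while an uncorrelated state ($m=q$) gives $I = 0$.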
Supposing that the initial state of the network model, $\{\sigma_{i,0}\}$, is a collection of iidrv with neural activity $q_0$, correlated only with memorized pattern $1$ through an overlap $m^1_0$, the full time evolution can be shown to be given by $$\begin{aligned} m_{t+1}^1= \langle F_{\theta_{t},\beta}[(1-a)M^1_{t}+\omega_{t}] \rangle_{\omega} \label{3.M1}\\ q_{t+1}=a m_{t+1}^1 + (1-a)\langle F_{\theta_{t},\beta}(-aM^1_{t} +\omega_{t}) \rangle_{\omega}\, , \label{3.Q}\end{aligned}$$ with $$M^1_{t}=\frac{m^1_{t}-q_{t}}{1-a},$$ where we have averaged over the first pattern $\xi^1$ and where the angular brackets indicate that we still have to average over the residual (cross-talk) noise $\omega_{t}$, which can be written as $$\omega_{t}=[\alpha Q_{t}]^{1/2} {\cal N}(0,1), \quad Q_{t}=(1-2a)q_{t} + a^2$$ with ${\cal N}(0,1)$ a Gaussian random variable with mean zero and variance unity and the (finite) loading defined by $p= \alpha C$. Recalling the specific form of the transfer function (\[transfer\]) we explicitly have $$\begin{aligned} &&\hspace*{-1.2cm} \langle F_{\theta_{t},\beta}[-aM^1_{t}+\omega_{t}] \rangle_{\omega} \nonumber\\ && \hspace*{-1cm} = \int_{-\infty}^{\infty} \frac{dy \,\,e^{-y^2/ (2\alpha Q_t)}}{2\sqrt{2\pi \alpha Q_t}} [1+ \tanh[\beta (-aM^1_t - \theta_{t} + y)]] \label{transtemp} \end{aligned}$$ and an analogous expression for $\langle F_{\theta_{t},\beta}[(1-a)M^1_{t}+\omega_{t}] \rangle_{\omega}$.

Of course, it is known that the quality of the recall process is influenced by the cross-talk noise at each time step of the dynamics. A novel idea is then to let the network itself autonomously counter this cross-talk noise at each time step by introducing an adaptive, hence time-dependent, threshold. This has been studied for neural network models at zero temperature, i.e., without synaptic noise, where $F_{\theta_{t}, \beta=\infty}(x)=\Theta(x-\theta_t)$.
For sparsely coded models, meaning that the pattern activity $a$ is very small and tends to zero for $N$ large, it has been found (Dominguez & Bollé, 1998; Bollé, Dominguez & Amari, 2000) that $$\theta_{t}(a)=c(a)\sqrt{\alpha Q_{t}}, \quad c(a)=\sqrt{-2 \ln(a)} \label{2.tt}$$ makes the second term on the r.h.s of Eq.(\[3.Q\]) asymptotically vanish faster than $a$ such that $q \sim a$. It turns out that the inclusion of this self-control threshold considerably improves the quality of the fixed-point retrieval dynamics, in particular the storage capacity, the basins of attraction and the information content. As an example we present in Fig. 1 the basin of attraction for the whole retrieval phase $R$ for the self-control model with $\theta_{sc}$ given by Eq. (\[2.tt\]) and initial value $q_0=0.01=a$, compared with a model where the threshold $\theta_{opt}$ is selected for every loading $\alpha$ by hand in an optimal way meaning that the information content $i=\alpha I$ is maximized. The latter is non-trivial because it is even rather difficult, especially in the limit of sparse coding, to choose a threshold interval by hand such that $i$ is non-zero. The basin of attraction is clearly enlarged with this self-control threshold choice and even near the border of critical storage the results are still improved. For more details we refer to Dominguez & Bollé (1998) and Bollé, Dominguez & Amari (2000). ![ The basin of attraction as a function of $\alpha$ for $a=0.01$ and initial $q_{0}=a$ for the self-control model (full line) and the optimal threshold model (dashed line) at zero temperature. ](bat0.ps){width="7cm"} A similar threshold also works for sparsely coded sequential patterns (Kitani & Aoyagi, 1998) and even for non-sparse architectures as well (Bollé & Dominguez Carreta, 2000). It is then worthwhile to examine whether such a self-control threshold can be found for networks with synaptic noise. No systematic study has been done in this case. 
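At zero temperature the flow of Eqs. (\[3.M1\])-(\[3.Q\]) with the self-control threshold (\[2.tt\]) reduces to a two-dimensional map that can be iterated directly, since the Gaussian average of the step function is a normal cumulative distribution; a minimal sketch (parameter values illustrative):

```python
from math import erf, log, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def iterate(a, alpha, m0, q0, steps=50):
    """Zero-temperature self-control dynamics, Eqs. (3.M1)-(3.Q),
    with F the step function and theta_t = sqrt(-2 ln(a) alpha Q_t)."""
    m, q = m0, q0
    for _ in range(steps):
        M = (m - q) / (1.0 - a)
        Q = (1.0 - 2.0 * a) * q + a * a
        sigma = sqrt(alpha * Q)
        theta = sqrt(-2.0 * log(a)) * sigma        # Eq. (2.tt)
        m = phi(((1.0 - a) * M - theta) / sigma)
        q = a * m + (1.0 - a) * phi((-a * M - theta) / sigma)
    return m, q
```

For instance, at $a=0.01$ and $\alpha=1$ an initial overlap $m_0=0.6$ flows to the retrieval fixed point with $m$ close to $1$ and $q$ of order $a$, while $m_0=0$ stays in the zero solution, illustrating the basin structure of Fig. 1.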
The specific problem to be posed, in analogy with the zero-temperature case, is the following. Can one determine a form for the threshold $\theta_t$ in Eq. (\[transtemp\]) such that the integral vanishes asymptotically faster than $a$? In contrast with the zero-temperature case, where this threshold could be determined analytically thanks to the simple form of the transfer function (recall Eq. (\[2.tt\])), a detailed study of the asymptotics of the integral in Eq. (\[transtemp\]) gives no satisfactory analytic solution. Therefore, we have designed a systematic numerical procedure consisting of the following steps: - Choose a small value for the activity $a'$. - Determine through numerical integration the threshold $\theta'$ such that $$\int_{-\infty}^{\infty} \frac{dx \,\,e^{-x^2/ 2 \sigma^2}}{\sigma \sqrt{2\pi }} \Theta (x- \theta) \leq a' \quad \mbox{for} \quad \theta > \theta'$$ for different values of the variance $\sigma^2={\alpha Q_t}$. - Determine, as a function of the temperature $T=1/\beta$, the value $\theta'_T$ such that $$\begin{aligned} &&\hspace*{-2cm} \int_{-\infty}^{\infty} \frac{dx \,\,e^{-x^2/ 2\sigma^2}}{2 \sigma \sqrt{2\pi }} [1+ \tanh[\beta (x- \theta)]] \leq a' \nonumber \\ && \mbox{for} \quad \theta > \theta' +\theta'_T.\end{aligned}$$ The second step leads, as expected, precisely to a threshold having the zero-temperature form Eq. (\[2.tt\]). The third step, determining the temperature dependent part $\theta'_T$, leads to the results shown in Fig. 2. ![The temperature dependent part of the threshold $\theta'_T$ as a function of $T$ for several values of $a'$[]{data-label="fitten1"}](fitten1.eps){width="6.5cm"} These results suggest that $\theta'_T$ behaves quadratically in $T$. Indeed, making a polynomial fit we find that the linear term is negligible and that the quadratic term is of the form $\theta'_{T}=-\frac12 \ln(a') T^2$. Furthermore, the dependence of the coefficient of this quadratic term on the variance is very weak.
Hence, we propose the following self-control threshold $$\theta_{t}(a,T)=\sqrt{-2 \ln (a)\alpha Q_{t}} - \frac12 \ln(a) T^2. \label{threstemp}$$ Together with Eqs. (\[3.M1\])-(\[3.Q\]) this relation describes the self-control dynamics of the network model with synaptic noise. This dynamical threshold is again a macroscopic parameter, so no average over the microscopic random variables needs to be taken at each time step $t$. At this point we want to make two remarks. First, for a binary layered network (Bollé & Massolo, 2000) the inclusion of a threshold of the form (\[2.tt\]), although not designed for non-zero temperatures, has been shown to still improve the retrieval quality for low pattern activities and low temperatures, in comparison with an optimal threshold model analogous to the one mentioned above. Secondly, in a recent study of an extremely diluted three-state neural network (Dominguez et al., 2002), based on information-theoretic and mean-field arguments, a self-control threshold with a linear temperature correction term with coefficient $1$ has been mentioned without further details. In that specific model this self-control threshold is shown to improve the retrieval quality for low temperatures, but it is not specified how much of the improvement is really due to the linear correction itself. ![The basin of attraction as a function of $\alpha$ for $a=0.01$ and several values of the temperature with (full lines) and without (dashed lines) the temperature correction $\theta'_T$ in the threshold.[]{data-label="basins_T"}](basinsT.eps){width="6.5cm"} We have solved this self-control dynamics, Eqs. (\[3.M1\])-(\[3.Q\]) together with Eq. (\[threstemp\]), numerically for our model with synaptic noise in the limit of sparse coding. In particular, we have studied in detail the influence of the temperature dependent part of the threshold. Of course, we are only interested in the retrieval solutions with $M>0$ carrying a non-zero information content $I$.
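A minimal sketch of such a numerical solution, iterating Eqs. (\[3.M1\])-(\[3.Q\]) with the threshold (\[threstemp\]): this is a stdlib-only illustration with Monte Carlo estimates of the Gaussian averages and illustrative parameter values, not the authors' code.

```python
import math
import random

def theta_sc(a, T, alpha_Q):
    """Self-control threshold of Eq. (threstemp): zero-T part plus the T^2 correction."""
    return math.sqrt(-2.0 * math.log(a) * alpha_Q) - 0.5 * math.log(a) * T * T

def run(m0, a, alpha, T, steps=50, samples=10000, seed=1):
    """Iterate the mean-field recall dynamics from overlap m0 and activity q0 = a."""
    rng = random.Random(seed)
    beta = 1.0 / T
    m, q = m0, a
    for _ in range(steps):
        M = (m - q) / (1.0 - a)
        Q = (1.0 - 2.0 * a) * q + a * a
        s = math.sqrt(alpha * Q)              # std dev of the cross-talk noise
        th = theta_sc(a, T, alpha * Q)
        F = lambda x: 0.5 * (1.0 + math.tanh(beta * (x - th)))
        m_acc = cross = 0.0                   # Monte Carlo Gaussian averages
        for _ in range(samples):
            w = rng.gauss(0.0, s)
            m_acc += F((1.0 - a) * M + w)
            cross += F(-a * M + w)
        m = m_acc / samples
        q = a * m + (1.0 - a) * cross / samples
    return m, q
```

With, e.g., $a=0.01$, $\alpha=0.1$ and $T=0.1$, a trajectory started at $m_0=0.9$ settles at the retrieval attractor $m\approx 1$ while $q$ stays small.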
We remark that all numerical calculations presented here are run for an appropriate number of time steps (at least of the order of a few hundred) in order to ensure that a stable equilibrium point is reached. ![The evolution of the overlap $m_t$ for several initial values $m_0$ with $a=0.01$, $T=0.2$ and $\alpha=1.5$ without (left) and with (right) the temperature correction $\theta'_T$ in the threshold.[]{data-label="evolutie_T"}](evolT2.eps){width="6.5cm"} The important features of the solution are illustrated in Figs. 3-5. In Fig. 3 we show the basin of attraction for the whole retrieval phase for the model with the zero-temperature threshold (\[2.tt\]) (dashed curves) compared to the model with the temperature dependent threshold (\[threstemp\]) (full curves) (compare also Fig. 1). There is no clear improvement for low temperatures, but there is a substantial one for higher temperatures. Even near the border of critical storage the results are still improved, so that the storage capacity itself is also larger. This is further illustrated in Fig. 4, where we compare the time evolution of the retrieval overlap $m_t$ starting from several initial values $m_0$ for the model with (right panel) and without (left panel) the quadratic temperature correction in the threshold. Here this temperature correction is absolutely crucial to force some of the overlap trajectories to go to the retrieval attractor $m \approx 1$. It really makes the difference between retrieval and non-retrieval in the model. We remark that the influence of a linear temperature correction term has also been examined here, but no real improvement over the zero-temperature threshold has been found.
![The information content $i$ as a function of $T$ for several values of the loading $\alpha$ and $a=0.001$ with (full lines) and without (dashed lines) the temperature correction $\theta'_T$ in the threshold.[]{data-label="info_T_2"}](infoT2.eps){width="6.5cm"} In Fig. 5 we plot the information content $i$ as a function of the temperature for the self-control dynamics with the threshold (\[threstemp\]) (full curves), respectively (\[2.tt\]) (dashed curves). We see that, especially for small loading $\alpha$, a substantial improvement of the information content is obtained. Conclusions =========== In this work we have generalized complete self-control in the dynamics of sparsely coded associative memory networks to models with synaptic noise. We have proposed an analytic form for the relevant macroscopic threshold, consisting of the known form for temperature zero plus a quadratic temperature correction term dependent on the pattern activity. The consequences of this self-control mechanism for the quality of the recall process have been studied. We find that the basins of attraction of the retrieval solutions as well as the storage capacity are enlarged and that the mutual information content is maximized. This confirms the considerable improvement of the quality of recall by self-control, also for network models with synaptic noise. It allows us to conjecture that this idea of self-control, which lets the network function autonomously, might be relevant for other architectures in the presence of synaptic noise, and for dynamical systems in general, when trying to improve the basins of attraction and convergence times. Acknowledgment {#acknowledgment .unnumbered} ============== We are indebted to S. Goossens for contributions at the initial stages of this work. One of the authors (DB) would like to thank D. Dominguez for stimulating discussions. This work has been supported by the Fund for Scientific Research-Flanders (Belgium).
[1]{} Amari S. (1977). Neural theory and association of concept information. [*Biological Cybernetics*]{}, [**26**]{}, 175-185. Amari S. and Maginu K. (1988). Statistical neurodynamics of associative memory. [*Neural Networks*]{}, [**1**]{}, 63-73. Blahut R.E. (1990). [*Principles and Practice of Information Theory*]{}. Reading, MA: Addison-Wesley. Bollé D. (2004). Multi-state neural networks based upon spin-glasses: a biased overview. In [*Advances in Condensed Matter and Statistical Mechanics*]{}, eds. Korutcheva E. and Cuerno R., Nova Science Publishers, New York, pp. 321-349. Bollé D. and Dominguez Carreta D. (2000). Mutual information and self-control of a fully connected low-activity neural network. [*Physica A*]{}, [**286**]{}, 401-416. Bollé D., Dominguez D.R.C. and Amari S. (2000). Mutual information of sparsely coded associative memory with self-control and ternary neurons. [*Neural Networks*]{}, [**13**]{}, 455-462. Bollé D. and Massolo G. (2000). Thresholds in layered neural networks with variable activity. [*Journal of Physics A*]{}, [**33**]{}, 2597-2609. Derrida B., Gardner E., and Zippelius A. (1987). An exactly solvable asymmetric neural network model. [*Europhysics Letters*]{}, [**4**]{}, 167-173. Dominguez D.R.C. and Bollé D. (1998). Self-control in sparsely coded networks. [*Physical Review Letters*]{}, [**80**]{}, 2961-2964. Dominguez D.R.C., Korutcheva E., Theumann W.K., and Erichsen Jr. R. (2002). Flow diagrams of the quadratic neural network. [*Lecture Notes in Computer Science*]{}, [**2415**]{}, 129-134. Hertz J., Krogh A. and Palmer R.G. (1991). [*Introduction to the Theory of Neural Computation*]{}. Addison-Wesley, Redwood City. Kitano K. and Aoyagi T. (1998). Retrieval dynamics of neural networks for sparsely coded sequential patterns. [*Journal of Physics A*]{}, [**31**]{}, L613-L620. Nadal J-P., Brunel N. and Parga N. (1998). Nonlinear feedforward networks with stochastic outputs: infomax implies redundancy reduction. 
[*Network: Computation in Neural Systems*]{}, [**9**]{}, 207-217. Okada M. (1996). Notions of associative memory and sparse coding. [*Neural Networks*]{}, [**9**]{}, 1429-1458. Schultz S. and Treves A. (1998). Stability of the replica-symmetric solution for the information conveyed by a neural network. [*Physical Review E*]{}, [**57**]{}, 3302-3310. Schwenker F., Sommer F.T., and Palm G. (1996). Iterative retrieval of sparsely coded associative memory patterns. [*Neural Networks*]{}, [**9**]{}, 445-455. Shannon C.E. (1948). A mathematical theory of communication. [*Bell System Technical Journal*]{}, [**27**]{}, 379-423.
--- abstract: 'We present a method for computing $\mathbb{A}^1$-homotopy invariants of singularity categories of rings admitting suitable gradings. Using this we describe any such invariant, e.g. homotopy K-theory, for the stable categories of self-injective algebras admitting a connected grading. A remark is also made concerning the vanishing of all such invariants for cluster categories of type $A_{2n}$ quivers.' address: - ' Sira Gratz, School of Mathematics and Statistics, University of Glasgow, University Place, Glasgow G12 8SQ ' - 'Greg Stevenson, School of Mathematics and Statistics, University of Glasgow, University Place, Glasgow G12 8SQ ' author: - Sira Gratz - Greg Stevenson bibliography: - 'greg\_bib.bib' title: Homotopy invariants of singularity categories --- Introduction ============ Gradings often make life significantly easier. For instance, let $\Lambda$ be an exterior algebra on $n+1$ generators over a field $k$. Although its stable category $\operatorname{\underline{\mathsf{mod}}}\Lambda$ cannot even have a non-trivial t-structure, equipping $\Lambda$ with the standard grading yields a graded stable category $\operatorname{\underline{\mathsf{gr}}}\Lambda$ with a full strong exceptional collection. This makes computing $\AA^1$-homotopy invariants of $\operatorname{\underline{\mathsf{gr}}}\Lambda$ very easy: they are just given by $n+1$ copies of the corresponding invariant evaluated at the base field. However, one doesn’t always want to work with graded modules. In this article we exploit work of Tabuada [@TabuadaA1] and Keller, Murfet, and Van den Bergh [@KMV] to describe the invariants of $\operatorname{\underline{\mathsf{mod}}}\Lambda$ in terms of those of $\operatorname{\underline{\mathsf{gr}}}\Lambda$, the graded stable category. 
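To fix ideas, here is the flavour of computation this yields. The following is a hedged aside using only the standard facts that homotopy $K$-theory agrees with $K$-theory on the regular ring $k$ and that $K_{-1}(k)=0$ for a field. For a finite dimensional self-injective $k$-algebra $\Lambda$ admitting a connected grading, applying $KH$ to the cone description of Theorem \[thm:Frobenius\] and taking the long exact sequence of homotopy groups gives, on $KH_0$:

```latex
% The boundary map lands in KH_{-1}(k) = K_{-1}(k) = 0, so
$$KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)
  \cong \operatorname{coker}\bigl(\ZZ
  \xrightarrow{\ \cdot \dim \Lambda\ } \ZZ\bigr)
  \cong \ZZ/(\dim \Lambda).$$
```

This recovers, for $\a1 = KH$, the computation of $K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ by Tachikawa and Wakamatsu recalled below.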
After covering the required preliminaries on orbit categories, $\AA^1$-homotopy invariants, and singularity categories in Section \[sec:prelims\] we use Tabuada’s work on $\AA^1$-homotopy invariants of orbit categories to present, in Theorem \[thm\_main1\], a cofibre sequence relating invariants of graded and ungraded singularity categories. We then specialise to finite dimensional algebras and exploit the very strong results on existence of tilting objects for graded singularity categories to perform concrete computations. In particular, we show in Theorem \[thm:Frobenius\] that if $\Lambda$ is a finite dimensional self-injective $k$-algebra admitting a connected grading then, for any $\AA^1$-homotopy invariant $\a1$, we have $$\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong \operatorname{cone}(\a1(k) \xymatrix{\ar[rr]^-{\boldsymbol{\cdot}\dim \Lambda}&&}\a1(k)).$$ This generalises the computation of $K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ for such algebras by Tachikawa and Wakamatsu [@TW91]. In the final section we discuss a special case of a result of Tabuada concerning $\AA^1$-homotopy invariants of cluster categories. In [@TabuadaA1]\*[Corollary 2.11]{} a presentation for the $\AA^1$-homotopy invariants of cluster categories of finite acyclic quivers is given in terms of a cofibre sequence. Using this we point out that for the Dynkin quivers $A_{2n}$ this actually implies that all $\AA^1$-homotopy invariants of the corresponding cluster category vanish. We are grateful to Sebastian Klein for inspiring conversations, originating from discussions on tt-Chow groups, which led us to the considerations which were the genesis of this work. The second author thanks Lance Gurney and Shane Kelly for precious discussions. Preliminaries {#sec:prelims} ============= Our main result arises from putting together several observations made by others. In this section we recall some salient details regarding the ingredients we need. 
This also serves to fix ideas and notation for the rest of the article. Throughout we will work over a fixed base field, which we will denote by $k$, and by DG-category we always mean DG-category over $k$. Things could, as usual, be extended to more general base rings but we remain in the simplest case for the sake of avoiding technicalities in the exposition. Homotopy invariants of orbit categories {#sec:hoinv} --------------------------------------- We begin with a brief review of orbit categories. For further details the reader can consult [@KellerOrbit]. Let $\sfC$ be a DG-category and suppose we are given a DG-functor $F\colon \sfC\to \sfC$ such that $F$ is a quasi-equivalence, and hence $H^0(F)$ is an equivalence of categories. The DG-orbit category of $\sfC$ with respect to $F$, denoted by $\sfC/F$, is the DG-category whose objects are the same as those of $\sfC$ and whose morphism complexes are defined by $$\sfC/F(c,c') = \operatorname{colim}_{i\in \NN}\limits (\bigoplus_{j\in \NN} \sfC(F^jc, F^i c'))$$ where the transition maps are the obvious ones, namely $$\xymatrix{ \bigoplus\limits_{j\in \NN}\sfC(F^jc, c') \ar[r]^-{\oplus F} & \bigoplus\limits_{j\in \NN}\sfC(F^jc, F^1c') \ar[r]^-{\oplus F} & \bigoplus\limits_{j\in \NN}\sfC(F^jc, F^2c') \ar[r] & \cdots }$$ The DG-category structure is uniquely induced from that of $\sfC$ using functoriality of $F$, compatibility of tensor products with colimits, and the universal property of colimits. One can check that, upon taking the homotopy category, this gives the more familiar formula $$H^0(\sfC/F)(c,c') = \bigoplus_{i\in \ZZ}H^0(\sfC)(c, H^0(F)^i c')$$ and has the effect of making $H^0(F)$ isomorphic to the identity functor. Put a little more carefully, there is a canonical functor $\pi\colon \sfC \to \sfC/F$ together with a natural transformation $\pi \to \pi F$ which becomes an isomorphism after taking homotopy. 
The DG-orbit category equipped with the canonical projection functor and natural transformation as above is initial in the appropriate sense with respect to triples of such data. Technically speaking one is stabilising (at the DG level) and taking orbits (at the triangulated level) with respect to the action of the additive monoid $\NN$ and its group completion $\ZZ$, generated by the action of the chosen functor, and so should indicate this somehow in the notation. But, since we shall only ever work with a single functor we omit such decorations from the notation. The formula for the morphism complexes in the DG-orbit category simplifies if $F$ is an honest equivalence of DG-categories: in this case $$\sfC/F(c,c') \cong \bigoplus_{i\in \ZZ} \sfC(c, F^ic')$$ (as is always the case after taking homology). We note that even if $\sfC$ is pretriangulated, so that $H^0(\sfC)$ is triangulated, and $F$ is an honest DG-equivalence, it may no longer be the case that $\sfC/F$ is pretriangulated. Of course, one of the draws of the DG-setting is that we can just take $\operatorname{\mathsf{Perf}}(\sfC/F)$ to remedy this situation, where $\operatorname{\mathsf{Perf}}(\sfC/F)$ denotes the DG-category of perfect DG-modules over $\sfC/F$. This being said, it is natural at this juncture to lay our cards on the table concerning the standing hypotheses we will make about existence of cones and idempotent completeness. \[conv:horror\] Unless explicitly mentioned otherwise we assume our DG-categories are pretriangulated with idempotent complete homotopy categories. In particular, despite the generality in which we have defined things above, for us $\sfC$ will always be quasi-equivalent to $\operatorname{\mathsf{Perf}}(\sfC)$ and by $\sfC/F$ we will really mean $\operatorname{\mathsf{Perf}}(\sfC/F)$. This is, without doubt, an abuse. 
But in our examples all Verdier quotients will be idempotent complete and our main focus is invariants which invert derived Morita equivalences, and so it is a harmless one. On the occasions when we need to explicitly discuss idempotent completion we will use $\natural$ to denote it. We now give a quick review of $\AA^1$-homotopy invariants. We let $\dgcatk$ denote the category of (essentially) small DG-categories over $k$, i.e. this is the category with objects the small DG-categories and morphisms given by isomorphism classes of DG-functors. In addition we fix some triangulated category $\sfT$. A functor $\a1\colon \dgcatk \to \sfT$ is an $\AA^1$*-homotopy invariant* if: - $\a1$ sends derived Morita equivalences to isomorphisms, in particular for any DG-category $\sfC$ the canonical inclusion $\sfC\to \operatorname{\mathsf{Perf}}(\sfC)$ is sent to an isomorphism by $\a1$; - $\a1$ sends localization sequences of DG-categories to triangles; - $\a1$ inverts the canonical inclusion $$\sfC \to \sfC[t] = \sfC\otimes_k k[t]$$ for every DG-category $\sfC$. Important examples are given by variants of $K$-theory, for instance Weibel’s homotopy $K$-theory and the topological $K$-theory of DG-categories (see [@AHK-theory] for the latter), and by periodic cyclic homology; see [@TabuadaFT] for further details. \[rem:SOD\] One consequence of the definition is that an $\AA^1$-homotopy invariant sends semi-orthogonal decompositions to direct sum decompositions. It follows that if $H^0\operatorname{\mathsf{Perf}}(\sfC)$ has a full exceptional collection $(E_1,\ldots, E_n)$ then $$\a1(\sfC) \cong \a1(k)^{\oplus n}$$ in the target category $\sfT$ (throughout we assume that for a collection to be exceptional each $\operatorname{thick}(E_i)$ is admissible). The main fact which we will need concerning $\AA^1$-homotopy invariants is that they are compatible with taking orbits. \[thm:tabuada\] Let $\sfC$ be a DG-category and $F\colon \sfC\to \sfC$ a quasi-equivalence. 
Then for any $\AA^1$-homotopy invariant $\a1\colon \dgcatk \to \sfT$ there is a distinguished triangle $$\xymatrix{ \a1(\sfC) \ar[rr]^-{\a1(F) - \operatorname{id}} && \a1(\sfC) \ar[r]^-{\a1(\pi)} & \a1(\sfC/F) \ar[r] & \Sigma \a1(\sfC) }$$ where $\pi\colon \sfC\to \sfC/F$ is the canonical DG-functor. Using this theorem one can reduce computations of $\AA^1$-invariants $\a1(\sfC/F)$ to understanding the action of $\a1(F)$ on $\a1(\sfC)$. As we shall see this is often easier than trying to directly compute $\a1(\sfC/F)$. Graded and ungraded modules {#sec_graded} --------------------------- In this section we recall some details on singularity categories and give a sketch of a result we will use which is due to Keller, Murfet, and Van den Bergh. Throughout, as above, we fix a base field $k$. Let $A$ be a finitely generated noetherian graded $k$-algebra. Recall that $A$ is said to be *connected* if $A$ is non-negatively graded and $A_0=k$. We can associate with $A$ the category of finitely generated graded $A$-modules $\operatorname{\mathsf{gr}}A$ and then go on to form the bounded derived category of finitely generated graded $A$-modules $\sfD^\bdd(\operatorname{\mathsf{gr}}A)$ and the full subcategory of perfect complexes $\sfD^\perf(\operatorname{\mathsf{gr}}A)$ within. The graded singularity category of $A$ is the quotient $$\sfD_\sg(\operatorname{\mathsf{gr}}A) = \sfD^\bdd(\operatorname{\mathsf{gr}}A)/ \sfD^\perf(\operatorname{\mathsf{gr}}A).$$ Each of these categories comes equipped with a grading shift autoequivalence $(1)$ which is defined on graded modules by reindexing $$M(i)_j = M_{i+j}.$$ Similarly we can work with ungraded $A$-modules and define $\sfD_\sg(\operatorname{\mathsf{mod}}A)$. We say that an $A$-module $M$ is *gradable* if there is a graded $A$-module whose underlying module is $M$. All of the triangulated categories mentioned above are algebraic and thus have DG-enhancements. 
So we have access to the definitions and tools mentioned in Section \[sec:hoinv\]. The result we wish to recall (in a slightly extended form) compares the two categories $\sfD_\sg(\operatorname{\mathsf{gr}}A)$ and $\sfD_\sg(\operatorname{\mathsf{mod}}A)$. There is an obvious exact comparison functor, given by forgetting the grading, $$F\colon \sfD_\sg(\operatorname{\mathsf{gr}}A) \to \sfD_\sg(\operatorname{\mathsf{mod}}A)$$ which ‘factors’ via a functor $$\widetilde{F}\colon \sfD_\sg(\operatorname{\mathsf{gr}}A) / (1) \to \sfD_\sg(\operatorname{\mathsf{mod}}A)^\natural$$ by the universal property of the orbit category. The reason for the scare quotes and the $\natural$ is that, in the process of forming the orbit category, we idempotent complete it and so we had better idempotent complete the target of the comparison functor (cf. Convention \[conv:horror\] and note that, in keeping with it, we drop the $\natural$ from now on). As an aside we note that in many cases, for instance if $A$ is complete, then the singularity category is already idempotent complete. This comparison functor $\widetilde{F}$ is always an embedding. The functor $\widetilde{F}$ is fully faithful. Since $\widetilde{F}$ is exact (and we’re conflating $\sfD_\sg(\operatorname{\mathsf{gr}}A) / (1)$ with its pretriangulated hull) the next lemma is an immediate consequence. \[lem\_orbitequiv\] If a classical generating set for $\sfD_\sg(\operatorname{\mathsf{mod}}A)$ is gradable then $$\widetilde{F}\colon \sfD_\sg(\operatorname{\mathsf{gr}}A) / (1) \to \sfD_\sg(\operatorname{\mathsf{mod}}A)$$ is an equivalence. Since $\widetilde{F}$ is fully faithful it embeds $\sfD_\sg(\operatorname{\mathsf{gr}}A)/(1)$ as a thick subcategory. If a classical generating set for $\sfD_\sg(\operatorname{\mathsf{mod}}A)$ is gradable then the image of $\widetilde{F}$ contains said generating set and so, since the image of $\widetilde{F}$ is thick, it must be all of $\sfD_\sg(\operatorname{\mathsf{mod}}A)$. 
If $A$ is connected graded, so in particular graded local, then the trivial module $k = A/A_{\geq 1}$ is always gradable and we obtain the following observation of Keller, Murfet, and Van den Bergh. If $A$ is a finitely generated connected commutative graded $k$-algebra such that the augmentation ideal $A_{\geq 1}$ defines an isolated singularity in $\operatorname{Spec}A$ then $\widetilde{F}$ is an equivalence. There are also other situations in which the lemma applies. The following elementary lemma covers some further cases of interest, for instance it applies to finite dimensional algebras with respect to a grading making the (ungraded) Jacobson radical homogeneous. This can be viewed as the obvious noncommutative generalization of the proposition in (geometric) dimension zero. If $A$ is finite dimensional and the simple modules are gradable then $\widetilde{F}$ is an equivalence. Since every object of $\operatorname{\mathsf{mod}}A$ has a finite composition series with semisimple subquotients the simples generate $\operatorname{\mathsf{mod}}A$ under finite direct sums and extensions. It follows that the simples form a classical generating set for $\sfD^\bdd(\operatorname{\mathsf{mod}}A)$. As the singularity category is a quotient of the bounded derived category their images in $\sfD_\sg(\operatorname{\mathsf{mod}}A)$ are thus also a classical generating set, which lifts along $F$ by hypothesis. Koszul duality -------------- Let us now recall a small piece of the theory of Koszul duality which will be used in one of our applications. Fix a field $k$, as above, and let $\Lambda$ be a left and right coherent connected graded $k$-algebra. The graded algebra $\Lambda$ is *Koszul* if the minimal graded free resolution of the trivial module $k$ is linear. Put explicitly the requirement is that if $$\xymatrix{ \cdots \ar[r] & F_i \ar[r] & F_{i-1}\ar[r] & \cdots \ar[r] & F_0 }$$ is the minimal graded free resolution then $F_i$ is generated in degree $i$. 
The *Koszul dual* of $\Lambda$ is $$\Lambda^! = \operatorname{Ext}^*_\Lambda(k,k)$$ where the Ext-algebra is computed sans grading and $\Lambda^!$ is graded using cohomological degree. \[rem:Koszul1\] This definition can be extended beyond the connected case, cf. Remark \[rem:Koszul2\]. The facts we will need concerning Koszul duality are summarised in the following theorem; these facts are all standard, and we do not attempt to give exhaustive references. At this level of generality one can consult [@VillaSaorin]\*[4.2 and 4.3]{} for further details. Really all that is needed for the statement we give is that $\operatorname{Ext}^*_\Lambda(k,k)$ is concentrated on the diagonal with respect to the bigrading by cohomological and internal degrees, see for example [@PPQA]\*[Chapter 2.1]{}. \[thm\_KD\] Suppose $\Lambda$ is a Koszul algebra. Then $\Lambda^!$ is also Koszul and $\Lambda^{!!}\cong \Lambda$. If in addition $\Lambda$ is finite dimensional then $\Lambda^!$ has finite global dimension. Moreover, in this case the full subcategory $$\sfT = \{\Sigma^{i}k(-i) \; \vert \; i\in \ZZ\}$$ of $\sfD^\mathrm{b}(\operatorname{\mathsf{gr}}\Lambda)$ is tilting. It induces an equivalence of triangulated categories $$\phi = \operatorname{\mathbf{R}Hom}(\sfT,-) \colon \sfD^\mathrm{b}(\operatorname{\mathsf{gr}}\Lambda) \to \sfD^\mathrm{perf}(\operatorname{\mathsf{gr}}\Lambda^!)$$ such that - $\phi\circ (1) \cong \Sigma(-1) \circ \phi$; - $\phi$ sends perfect complexes to complexes with torsion cohomology. In particular, $\phi$ restricts to an equivalence $$\sfD^\mathrm{perf}(\operatorname{\mathsf{gr}}\Lambda) \to \sfD^\mathrm{perf}_\mathrm{tors}(\operatorname{\mathsf{gr}}\Lambda^!),$$ and so induces an equivalence $$\sfD_\mathrm{sg}(\operatorname{\mathsf{gr}}\Lambda) \to \sfD^\mathrm{b}(\operatorname{\mathsf{qgr}}\Lambda^!) = \sfD^\mathrm{perf}(\operatorname{\mathsf{gr}}\Lambda^!) 
/ \sfD^\mathrm{perf}_\mathrm{tors}(\operatorname{\mathsf{gr}}\Lambda^!).$$ In our applications $\Lambda^!$ will be coherent and so $\operatorname{\mathsf{gr}}\Lambda^!$ is an abelian category and one can identify $\sfD^\mathrm{perf}(\operatorname{\mathsf{gr}}\Lambda^!)$ with $\sfD^\mathrm{b}(\operatorname{\mathsf{gr}}\Lambda^!)$. Moreover, $\sfD^\mathrm{b}(\operatorname{\mathsf{qgr}}\Lambda^!)$ is then the bounded derived category of $\operatorname{\mathsf{qgr}}\Lambda^! = \operatorname{\mathsf{gr}}\Lambda^!/\operatorname{\mathsf{tors}}\Lambda^!$, where $\operatorname{\mathsf{tors}}\Lambda^!$ is the full subcategory of finitely presented torsion modules. An example of particular relevance is when $\Lambda$ is $\wedge(k(-1)^{n+1})$, an exterior algebra on $n+1$ degree $1$ generators. This algebra is certainly finite dimensional and is also Koszul, with Koszul dual $\mathrm{S}(k(-1)^{n+1})$ the symmetric algebra on $n+1$ degree $1$ generators. In this situation the theorem gives the classical BGG correspondence [@BGG] $$\operatorname{\underline{\mathsf{gr}}}\Lambda \cong \sfD^\mathrm{b}(\operatorname{coh}\mathbb{P}^{n}_k)$$ sending the autoequivalence $(1)$ on the left to the autoequivalence $\Sigma \str(-1)\otimes \text{-}$ on the right. The main results ================ We are now in a position to indicate how one can compute $\mathbb{A}^1$-homotopy invariants of singularity categories in the presence of a favourable grading. A general statement ------------------- We first give the obvious statement one gets from the given ingredients. \[thm\_main1\] Let $A$ be a finitely generated noetherian graded $k$-algebra such that there is a classical generating set of gradable modules for $\sfD_\sg(\operatorname{\mathsf{mod}}A)$. 
Then for any $\mathbb{A}^1$-homotopy invariant $\a1$ there is an isomorphism $$\a1(\sfD_\sg(\operatorname{\mathsf{mod}}A)) \cong \a1(\sfD_\sg(\operatorname{\mathsf{gr}}A)/(1)),$$ where $(1)$ denotes the grading shift autoequivalence on $\sfD_\sg(\operatorname{\mathsf{gr}}A)$, which induces a triangle $$\xymatrix{ \a1(\sfD_\sg(\operatorname{\mathsf{gr}}A)) \ar[rr]^-{\a1(1) - \operatorname{id}} && \a1(\sfD_\sg(\operatorname{\mathsf{gr}}A)) \ar[r] & \a1(\sfD_\sg(\operatorname{\mathsf{mod}}A)) \ar[r] & \Sigma \a1(\sfD_\sg(\operatorname{\mathsf{gr}}A)). }$$ By the hypotheses on $A$ we see from Lemma \[lem\_orbitequiv\] that there is an equivalence $$\sfD_\sg(\operatorname{\mathsf{gr}}A)/(1) \cong \sfD_\sg(\operatorname{\mathsf{mod}}A)$$ (recall that we are identifying the orbit category with its pretriangulated hull, see Convention \[conv:horror\]). The first statement of the theorem is immediate from this. The existence of the claimed cofibre sequence then follows from Tabuada’s result Theorem \[thm:tabuada\]. We now specialise to finite dimensional Koszul algebras. This case is particularly nice as Koszul duality allows one to rephrase the computation in terms of orbit categories arising from noncommutative projective algebraic geometry. \[cor:Koszul\] Let $\Lambda$ be a finite dimensional $k$-algebra equipped with a Koszul grading and denote by $\Lambda^!$ its Koszul dual. Then for any $\mathbb{A}^1$-homotopy invariant $\a1$ there is an isomorphism $$\a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \cong \a1(\sfD^\mathrm{perf}(\operatorname{\mathsf{qgr}}\Lambda^!))$$ which induces a triangle $$\xymatrix{ \a1(\sfD^\mathrm{perf}(\operatorname{\mathsf{qgr}}\Lambda^!)) \ar[rrr]^-{\a1(\Sigma\str(-1)\otimes \text{-}) - \operatorname{id}} &&& \a1(\sfD^\mathrm{perf}(\operatorname{\mathsf{qgr}}\Lambda^!)) \ar[r] & \a1(\sfD_\sg(\operatorname{\mathsf{mod}}\Lambda)) \ar[r] &. }$$ Since $\Lambda$ admits a connected grading it is local as a plain algebra. 
The unique simple $k$ is certainly gradable and so Theorem \[thm\_main1\] applies to give us a cofibre sequence $$\xymatrix{ \a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \ar[rr]^-{\a1(1) - \operatorname{id}} && \a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \ar[r] & \a1(\sfD_\sg(\operatorname{\mathsf{mod}}\Lambda)) \ar[r] & \Sigma \a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)). }$$ By Theorem \[thm\_KD\] there is an equivalence $\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda) \cong \sfD^\mathrm{perf}(\operatorname{\mathsf{qgr}}\Lambda^!)$ which identifies $(1)$ on the former category with $\Sigma\str(-1)\otimes \text{-}$ on the latter. Rewriting the cofibre sequence above using these identifications gives the cofibre sequence claimed in the statement. \[rem:Koszul2\] One can consider a more general notion of Koszul where instead of working over $k$ with connected algebras one works with algebras augmented over more general semisimple bases. There is, of course, an analogue in this generality and all the proofs go through unchanged. Gorenstein algebras ------------------- In order to use these results to effectively compute invariants one needs a handle on the graded singularity category. Fortunately, introducing a grading has the effect of adding more simples and splitting up the Exts. As a result, there are frequently semi-orthogonal decompositions at the graded level which one can exploit to perform computations, cf. Remark \[rem:SOD\]. The situation is particularly good for certain finite dimensional algebras. Following [@BurkeStevenson], we say a graded $k$-algebra $\Lambda$ is *Artin-Schelter Gorenstein* if $\Lambda$ has finite injective dimension as both a left and a right $\Lambda$-module and $$\operatorname{\mathbf{R}\underline{Hom}}(\Lambda_0,\Lambda) \cong \Sigma^d\Lambda_0(a)$$ for some integers $d$ and $a$, where $\operatorname{\mathbf{R}\underline{Hom}}$ is the right derived functor of the graded hom-functor. 
We call the $a$ appearing above the *Gorenstein parameter* of $\Lambda$. If $\Lambda$ is a finite dimensional graded $k$-algebra with $\Lambda_0 = \Lambda/\operatorname{rad}(\Lambda)$ then $\Lambda$ being Artin-Schelter Gorenstein implies that $\Lambda$ is self-injective. In particular, there is a canonical equivalence $\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda) \cong \operatorname{\underline{\mathsf{gr}}}\Lambda$. One can sometimes get away with asking less of $\Lambda$; by [@Yamaura] the stable category of graded modules over any non-negatively graded self-injective algebra $\Lambda$ with $\Lambda_0$ of finite global dimension has a tilting object. This can be used to run a similar argument, at least in some cases, to the one given below. However, we do not treat this case explicitly. \[cor:Gor\] Let $\Lambda$ be a finite dimensional basic Artin-Schelter Gorenstein $k$-algebra with $\Lambda_0 = \Lambda/\operatorname{rad}(\Lambda)$, where $\operatorname{rad}(\Lambda)$ denotes the ungraded radical, and with all simples $1$-dimensional. Then for any $\mathbb{A}^1$-homotopy invariant $\a1$ there is an isomorphism $$\a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \cong \a1(k)^{\oplus n|a|}$$ where $n$ is the number of simples and $a\leq0$ is the Gorenstein parameter of $\Lambda$. 
In particular, there is a cofibre sequence $$\xymatrix{ \a1(k)^{\oplus n|a|} \ar[r]^-\phi & \a1(k)^{\oplus n|a|} \ar[r] & \a1(\sfD_\sg(\operatorname{\mathsf{mod}}\Lambda)) \ar[r] & }$$ where $\phi$ can be written in the form $$\phi = \begin{pmatrix} -1 & 0 & \cdots & 0 & \phi_{0,a+1} \\ 1 & -1 & \cdots & 0 & \phi_{-1,a+1} \\ 0 & 1 & \cdots & 0 & \phi_{-2,a+1} \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & -1 & \phi_{a+2, a+1} \\ 0 & 0 & \cdots & 1 & \phi_{a+1,a+1} -1 \end{pmatrix} \in \operatorname{End}(\a1(k)^{\oplus n|a|}).$$ By [@BurkeStevenson]\*[Theorem 6.4]{} (which extends [@Orlov09]\*[Corollary 2.9]{}) the graded singularity category has a semi-orthogonal decomposition $$\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda) = (\Lambda_0(0), \ldots, \Lambda_0(a+1))$$ where $a$ is the Gorenstein parameter (which is negative in this case provided the singularity category isn’t trivial, so the sequence has length $|a|$). Since $\Lambda_0 \cong k^n$ is a semisimple algebra and $\AA^1$-homotopy invariants are functors sending localizations to triangles and inverting derived Morita equivalences, the isomorphism $\a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \cong \a1(k)^{\oplus n|a|}$ is a formal consequence, as noted in Remark \[rem:SOD\]. Since $\Lambda_0 = \Lambda/\operatorname{rad}(\Lambda)$ all the simples are gradable and so Theorem \[thm\_main1\] applies to give a cofibre sequence $$\xymatrix{ \a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \ar[rrr]^-{\a1(-1) - \operatorname{id}} &&& \a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \ar[r] & \a1(\sfD_\sg(\operatorname{\mathsf{mod}}\Lambda)) \ar[r] & }$$ where we have taken orbits by $(-1)$ instead of $(1)$ for convenience (which makes no difference). Using the isomorphism $\a1(\sfD_\sg(\operatorname{\mathsf{gr}}\Lambda)) \cong \a1(k)^{\oplus n|a|}$ gives us a cofibre sequence of the claimed form up to verifying the description of $\phi = \a1(-1) - \operatorname{id}$.
This description follows from noting that $(-1)$ just translates the chosen exceptional collection, except for $\Lambda_0(a+1)\mapsto \Lambda_0(a)$ which is no longer part of the collection. The final column expresses the class of $\Lambda_0(a)$ with respect to the decomposition of the Grothendieck group given by the semiorthogonal decomposition; see Remark \[rem:approximation\] for further explanation and intuition. \[rem:approximation\] Suppose for simplicity that $\Lambda_0 \cong k$, i.e. $\Lambda$ is local. The $\phi_{i,a+1}$ occurring in the final column of $\phi$ express the multiplicities occurring in the sequence of approximation triangles $$\xymatrix{ k(a) \ar[r] & k(a)_{a+2} \ar[d] \ar[r] & \cdots \ar[r] & k(a)_{-1} \ar[r] \ar[d] & k(a)_0 \ar[r] \ar[d] & 0 \ar[d] \\ & X_{a+1} \ar[ul]^-\Sigma & & X_{-2} & X_{-1} \ar[ul]^-\Sigma & X_0 \ar[ul]^-\Sigma }$$ for $k(a)$ with respect to the full exceptional collection $(k(0), \ldots, k(a+1))$; in this diagram we have $X_i \in \operatorname{thick}(k(i))$, each of the triangles is distinguished, and $k(a)_{i}$ lies in $(k(0),\ldots, k(i))$ (cf. [@TabuadaA1]\*[Proposition 2.8]{}). More precisely $$\phi_{i,a+1} = [X_i] \in K_0(\operatorname{thick}(k(i))) \cong \ZZ.$$ It is possible to give a sufficiently explicit description of the matrix $\phi$ in Corollary \[cor:Gor\] that one can actually compute in examples. The remainder of this section is devoted to providing this description. We start with an example, namely exterior algebras, illustrating how things work and how one can proceed with computations. We then explain, in Theorem \[thm:Frobenius\], how the story presented in the example generalises to any finite dimensional Artin-Schelter Gorenstein algebra. Exterior algebras ----------------- Let $\Lambda$ be an exterior algebra on $n+1$ generators in degree $1$. This algebra is Koszul with dual $\Lambda^!$ polynomial on $n+1$ degree $1$ generators. 
We are then in the situation of Corollary \[cor:Koszul\] (and of Corollary \[cor:Gor\]): the BGG correspondence gives $$\operatorname{\underline{\mathsf{gr}}}\Lambda \cong \sfD^\mathrm{b}(\operatorname{coh}\mathbb{P}^{n})$$ and we can exploit our knowledge of projective space. Given an $\AA^1$-homotopy invariant $\a1$ we can use the triangle $$\xymatrix{ \a1(\sfD^\mathrm{b}(\operatorname{coh}\PP^n)) \ar[rrr]^-{\a1(\Sigma^{-1}\str(1)\otimes \text{-}) - \operatorname{id}} &&& \a1(\sfD^\mathrm{b}(\operatorname{coh}\PP^n)) \ar[r] & \a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \ar[r] & }$$ to attempt to compute $\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ (again we have used the inverse of the functor taken in the corollaries for the sake of convenience). The Beilinson full exceptional collection $$\sfD^\mathrm{b}(\operatorname{coh}\PP^n) = (\str(-n), \str(-n+1), \ldots, \str(-1), \str)$$ implies that for any $\AA^1$-homotopy invariant $\a1$ we have $$\a1(\sfD^\mathrm{b}(\operatorname{coh}\PP^n)) \cong \a1(k)^{\oplus n+1}$$ and so we can rewrite our cofibre sequence as $$\xymatrix{ \a1(k)^{\oplus n+1} \ar[r]^-\phi & \a1(k)^{\oplus n+1} \ar[r] & \a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \ar[r] & }$$ and the game is to understand $\phi$ (as in Corollary \[cor:Gor\] which would have gotten us to the same place, noting that the Gorenstein parameter of $\Lambda$ is $-n-1$) which is the morphism $\a1(\Sigma^{-1}\str(1)\otimes\text{-})-\operatorname{id}$ written with respect to the system of coordinates given by the chosen full exceptional collection. 
We more or less understand $\phi$ in the sense that we can write it as $$\phi = \begin{pmatrix} 0 & 0 & \cdots & 0 & \psi_{-n} \\ \a1(\Sigma^{-1}) & 0 & \cdots & 0 & \psi_{-n+1} \\ 0 & \a1(\Sigma^{-1}) & \cdots & 0 & \psi_{-n+2} \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & \psi_{-1} \\ 0 & 0 & \cdots & \a1(\Sigma^{-1}) & \psi_{0} \end{pmatrix} - \mathrm{Id}_{n+1},$$ the main ‘difficulty’ being the computation of the $\psi_i$ which are the multiplicities for $\Sigma^{-1}\str(1)$. Indeed we know, by additivity for $\AA^1$-homotopy invariants [@TabuadaA1]\*[Proposition 2.5]{}, that $\a1(\Sigma) = -1$ and so only the last column needs to be computed. It turns out this is also relatively straightforward and doesn’t depend on $\a1$. In fact, as in [@TabuadaA1]\*[Proposition 2.8]{}, this comes down to computing the filtration by triangles for $\Sigma^{-1}\str(1)$ as indicated in Remark \[rem:approximation\]. This is essentially given by (the desuspension of) the Koszul complex $$0 \to \str(-n) \to \str(-n+1)^{\oplus\binom{n+1}{n}} \to \cdots \to \str^{\oplus\binom{n+1}{1}} \to \str(1) \to 0$$ and so we see that $\psi_{-i} = (-1)^{i+1}\binom{n+1}{i+1}$ for $0\leq i \leq n$. Thus $\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ is the cone of the endomorphism $$\phi = \begin{pmatrix} -1 & 0 & \cdots & 0 & (-1)^{n+1}\binom{n+1}{n+1} \\ -1 & -1 & \cdots & 0 & (-1)^{n}\binom{n+1}{n} \\ 0 & -1 & \cdots & 0 & (-1)^{n-1}\binom{n+1}{n-1} \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & -1 & \binom{n+1}{2} \\ 0 & 0 & \cdots & -1 & -\binom{n+1}{1}-1 \end{pmatrix}$$ of $\a1(k)^{\oplus n+1}$ and, as luck would have it, this cone is legitimately computable. Indeed, to compute the cone we can replace $\phi$ by its Smith normal form. 
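Before carrying out the computation by hand, the claim can be sanity-checked numerically. The following Python sketch (using SymPy; purely illustrative, not part of the proof) assembles $\phi$ for small $n$ in the coordinates of the Beilinson collection and verifies that $|\det\phi| = 2^{n+1}$ and that the Smith normal form is, up to signs, $\mathrm{diag}(1,\ldots,1,2^{n+1})$:

```python
# Sanity check: for the exterior algebra on n+1 generators, the matrix phi
# (written in the coordinates of the Beilinson exceptional collection) should
# have Smith normal form diag(1, ..., 1, 2^(n+1)) up to signs.
from math import comb
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def phi_exterior(n):
    """The (n+1)x(n+1) matrix phi = A1(Sigma^{-1} O(1) tensor -) - id."""
    size = n + 1
    M = [[0] * size for _ in range(size)]
    for i in range(size):
        M[i][i] = -1                      # the -id part
        if i + 1 < size:
            M[i + 1][i] = -1              # A1(Sigma^{-1}) = -1 on the subdiagonal
        # last column: psi_{-(n-i)} = (-1)^{(n-i)+1} * binom(n+1, (n-i)+1)
        j = n - i
        M[i][size - 1] += (-1) ** (j + 1) * comb(n + 1, j + 1)
    return Matrix(M)

for n in range(1, 6):
    phi = phi_exterior(n)
    assert abs(phi.det()) == 2 ** (n + 1)          # |det phi| = dim Lambda
    snf = smith_normal_form(phi, domain=ZZ)
    diag = sorted(abs(snf[i, i]) for i in range(n + 1))
    assert diag == [1] * n + [2 ** (n + 1)]
```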
An easy computation (the reader who is rightly suspicious of such statements will be reassured that we give an abstract justification for this computation in the next section, see the proof of Theorem \[thm:Frobenius\]) shows that the Smith normal form is $$\phi' = \begin{pmatrix} 1 & 0 & 0 &\cdots & 0 \\ 0 & 1 & 0 &\cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 &\cdots & \det \phi \end{pmatrix}$$ where $\det\phi = \dim \Lambda = 2^{n+1}$. Hence we have proved: \[thm:exterior\] Let $\Lambda$ denote the exterior algebra on $n+1$ generators. Then for any $\AA^1$-homotopy invariant $\a1$ we have $$\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong \operatorname{cone}(\a1(k) \xymatrix{ \ar[rr]^-{\boldsymbol{\cdot} 2^{n+1}} && } \a1(k)).$$ We could take $\a1$ to be Weibel’s homotopy K-theory $KH$. In this case we recover, for example, the computation that $$K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong \ZZ/2^{n+1}\ZZ.$$ Connected graded self-injective algebras ---------------------------------------- We now indicate how the argument given for exterior algebras generalises. Let $\Lambda$ be a finite dimensional basic Artin-Schelter Gorenstein $k$-algebra with $\Lambda_0 = \Lambda/\operatorname{rad}(\Lambda)$, where $\operatorname{rad}(\Lambda)$ denotes the ungraded radical, and $n$ simples all of which we assume are $1$-dimensional. In particular, $\Lambda$ is self-injective. Theorem \[thm:exterior\] naturally extends to this setting. In order to prove this we first need a technical lemma along the lines of [@TabuadaA1]\*[Corollary 1.6]{}. \[lem:nicesave\] There is an isomorphism $K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)$. There is a natural comparison map $K\to KH$. 
Using this comparison and the sequence $$\xymatrix{ \operatorname{\underline{\mathsf{gr}}}\Lambda \ar[r]^-{(1)} & \operatorname{\underline{\mathsf{gr}}}\Lambda \ar[r]^-{\pi} & \operatorname{\underline{\mathsf{mod}}}\Lambda \cong \operatorname{\mathsf{Perf}}(\operatorname{\underline{\mathsf{gr}}}\Lambda / (1)) }$$ we get a commutative diagram $$\xymatrix{ K_0(\operatorname{\underline{\mathsf{gr}}}\Lambda) \ar[rr]^-{K_0((1)) - 1} \ar[d]^-\wr & &K_0(\operatorname{\underline{\mathsf{gr}}}\Lambda) \ar[r]^-{K_0(\pi)} \ar[d]^-\wr & K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \ar[d] & \\ KH_0(\operatorname{\underline{\mathsf{gr}}}\Lambda) \ar[rr]^-{KH_0((1)) - 1} && KH_0(\operatorname{\underline{\mathsf{gr}}}\Lambda) \ar[r]^-{KH_0(\pi)} & KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \ar[r] & 0 \\ }$$ where the top composite is zero, and the first two vertical maps are isomorphisms since as in Corollary \[cor:Gor\] the category $\operatorname{\underline{\mathsf{gr}}}\Lambda$ has a full exceptional collection so $$KH(\operatorname{\underline{\mathsf{gr}}}\Lambda) \cong KH(k)^{\oplus n\vert a \vert} \cong K(k)^{\oplus n\vert a \vert} \cong K(\operatorname{\underline{\mathsf{gr}}}\Lambda).$$ by derived Morita invariance and additivity (cf. [@TabuadaFT]\*[Proposition 2.3]{} and [@WeibelKBook]\*[Example IV.12.5.1]{}). It also follows that the bottom row is exact since $0 = K_{-1}(k)^{\oplus n\vert a \vert} \cong KH_{-1}(\operatorname{\underline{\mathsf{gr}}}\Lambda)$. Furthermore, the map $K_0(\pi)$ is surjective: the classes of the simple modules generate $K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ and these classes are in the image of $K_0(\pi)$ as we have assumed the simples are all gradable. It then follows from a diagram chase that the third vertical map is an isomorphism as claimed. We denote by $C_\Lambda$ the Cartan matrix of $\Lambda$. 
It is the $n\times n$ integer matrix, where $n$ is the number of simples, whose entry in position $(i,j)$ is $\dim_k\operatorname{Hom}_\Lambda(P_i,P_j)$ for some fixed, arbitrary, order on the simple modules, where $P_i$ is the projective with top the $i$th simple. \[thm:Frobenius\] For any $\AA^1$-homotopy invariant $\a1$ we have $$\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong \operatorname{cone}(\a1(k)^{\oplus n} \xymatrix{\ar[r]^-{C_\Lambda} &} \a1(k)^{\oplus n})$$ where $C_\Lambda$ is the Cartan matrix. By Corollary \[cor:Gor\] we know there is a cofibre sequence $$\xymatrix{ \a1(k)^{\oplus n|a|} \ar[r]^-\phi & \a1(k)^{\oplus n|a|} \ar[r] & \a1(\operatorname{\underline{\mathsf{mod}}}\Lambda) \ar[r] & }$$ where $\phi$ has the form indicated in the Corollary (and as in the last section and [@TabuadaA1]\*[Proposition 2.8]{}). In particular, $\phi$ is an integer matrix which does not depend on $\a1$. Taking $\a1$ to be homotopy K-theory and looking at $0$th homotopy groups we get an exact sequence $$\ZZ^{\oplus n|a|} \stackrel{\phi}{\to} \ZZ^{\oplus n|a|} \to KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \to 0$$ as in the proof of the previous lemma. Moreover, by said lemma there is an isomorphism $KH_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ and we know that $K_0(\operatorname{\underline{\mathsf{mod}}}\Lambda) \cong \operatorname{coker}(C_\Lambda)$ by [@TW91]. This is only possible if the Smith normal form of $\phi$ is $$\begin{pmatrix} \mathrm{Id}_{n(\vert a\vert -1)} & 0 \\ 0 & S_\Lambda \end{pmatrix}$$ up to signs, where $S_\Lambda$ is the Smith normal form of $C_\Lambda$. Since $\phi$ is independent of the invariant $\a1$ and we can use its Smith normal form to compute the cone in the cofibre sequence computing $\a1(\operatorname{\underline{\mathsf{mod}}}\Lambda)$ the result follows. 
In particular, the $\AA^1$-invariants of the stable categories of such rings only depend on the Cartan matrix, i.e. on the dimension vectors of the projectives. This extends [@TW91]\*[Proposition 1]{}, which computes the Grothendieck group of the stable category, to arbitrary $\AA^1$-homotopy invariants. This allows us to perform various explicit computations over a finite field where we know the homotopy K-theory explicitly by the work of Quillen [@Quillen]\*[Theorem 8]{}. For instance, we deduce the following corollary. Let $\Lambda$ be a self-injective algebra over $\FF_p$ admitting a connected grading and of dimension $p^n$ for some $n\geq 1$. Then $$KH_i(\operatorname{\underline{\mathsf{mod}}}\Lambda) = \left\{\begin{array}{lr} \ZZ/p^n\ZZ & \text{if } i=0; \\ 0 & \text{if } i\geq 1. \end{array} \right.$$ In particular the inclusion $\operatorname{\mathsf{Perf}}\Lambda \to \sfD^\mathrm{b}(\operatorname{\mathsf{mod}}\Lambda)$ induces isomorphisms $$KH_i(\Lambda) \stackrel{\sim}{\to} KH_i(\sfD^\mathrm{b}(\operatorname{\mathsf{mod}}\Lambda))$$ for $i\geq 1$. Taking $\a1 = KH$ in Theorem \[thm:Frobenius\] and taking homotopy groups immediately yields the first computation by inspecting the resulting long exact sequence. Indeed, by [@WeibelKBook]\*[12.3.1]{} the homotopy K-theory of $\FF_p$ agrees with the usual algebraic K-theory, so we just observe that $p^n$ is invertible in $K_{2i-1}(\FF_p) \cong \ZZ/(p^i-1)\ZZ$ for $i\geq 1$ and $K_{2i}(\FF_p)$ is zero for $i\geq 1$. The second statement then follows from the long exact sequence for the localization sequence $$\operatorname{\mathsf{Perf}}\Lambda \to \sfD^\mathrm{b}(\Lambda) \to \operatorname{\underline{\mathsf{mod}}}\Lambda.$$ This applies for instance to the group algebra of $E^r = (\ZZ/p\ZZ)^{\oplus r}$ over $\FF_p$, showing that $\operatorname{\underline{\mathsf{mod}}}\FF_pE^r$ has no higher homotopy K-theory. 
In this case one can interpret the second statement of the corollary as computing the homotopy K-theory of cochains on the classifying space of $E^r$: $$KH_i(C^*(BE^r;\FF_p)) \cong K_i(\FF_p) \quad \text{ for all } i\geq 0.$$ One can use the theorem to produce many $\AA^1$*-homotopy phantoms*, i.e. DG-categories all of whose $\AA^1$-invariants are trivial. Indeed, the stable category of any suitable $\Lambda$ with invertible Cartan matrix will do. This will be pursued in future work. Phantoms from cluster theory ============================ In this section we observe that in Dynkin type $A_{2n}$ cluster categories have trivial $\AA^1$-homotopy invariants, i.e. they are ‘$\AA^1$-homotopy phantoms’. This is straightforward from work of Tabuada, who gave an expression for the $\AA^1$-homotopy invariants of cluster categories of finite quivers without oriented cycles in [@TabuadaA1]\*[Corollary 2.11]{}. However, it isn’t made explicit there that for $A_{2n}$ the stars align so that these invariants always vanish; we feel this is worth noting as the phantoms occurring in algebraic geometry are notoriously slippery, yet here we have a very concrete family of categories with trivial homotopy K-theory just lying around in the representation theorist’s toolbox. We now sketch the relevant setup (which proceeds, as one might expect, following [@TabuadaA1] and what we have done above). Let us fix some field $k$ and consider $\sfD^\mathrm{b}(\operatorname{\mathsf{mod}}kA_n)$, where for simplicity we will always think of $A_n$ with the linear orientation $$1 \to 2 \to \cdots \to n.$$ Of course, the derived category is independent of the orientation so this is purely a matter of convenience. The simples $S_i$ form a generating set for $\sfD^\mathrm{b}(\operatorname{\mathsf{mod}}kA_n)$ and, in fact, $(S_n,\ldots,S_1)$ is a full exceptional collection. 
Thus, if $\a1$ is any $\AA^1$-homotopy invariant, we have $$\a1(\sfD^\mathrm{b}(\operatorname{\mathsf{mod}}kA_n)) \cong \a1(\operatorname{thick}(S_1)) \oplus \cdots \oplus \a1(\operatorname{thick}(S_n)) \cong \a1(k)^{\oplus n}.$$ The ($2$-)cluster category of $A_n$ over $k$ is obtained by taking orbits by $\Sigma\tau^{-1}$ $$\sfC_{A_n} = \sfD^\mathrm{b}(\operatorname{\mathsf{mod}}kA_n) / \Sigma\tau^{-1}.$$ So we have, by [@TabuadaA1]\*[Theorem 1.5]{}, an identification $$\a1(\sfC_{A_n}) = \operatorname{cone}(\a1(\Sigma)\a1(\tau^{-1}) - 1).$$ By additivity for $\AA^1$-homotopy invariants [@TabuadaA1]\*[Proposition 2.5]{} we know $\a1(\Sigma) = -1$ and so it just remains to write down $\a1(\tau^{-1})$ in terms of the decomposition coming from the simples. We know that $$\tau^{-1} S_i = S_{i+1}$$ for $i\leq n-1$ and $\tau^{-1} S_n = \Sigma P_n$. Thus $\a1(\tau^{-1})$ can be represented by the $n\times n$ matrix $$\begin{pmatrix} 0 & 0 & \cdots & 0 & -1 \\ 1 & 0 & \cdots & 0 & -1 \\ 0 & 1 & \cdots & 0 & -1 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -1 \end{pmatrix}$$ with respect to the decomposition coming from the exceptional collection $(S_n,\ldots,S_1)$. Our rather modest observation is the following lemma. If $n$ is even then the matrix $$\a1(\Sigma\tau^{-1}) -1 = -\a1(\tau^{-1}) - 1 = \begin{pmatrix} -1 & 0 & \cdots & 0 & 1 \\ -1 & -1 & \cdots & 0 & 1 \\ 0 & -1 & \cdots & 0 & 1 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & -1 & 0 \end{pmatrix}$$ has determinant $1$. In particular, it is an automorphism of $$\a1(\sfD^\mathrm{b}(\operatorname{\mathsf{mod}}kA_n)) \cong \a1(k)^{\oplus n}.$$ We will in fact prove that the determinant of the above matrix, which we denote for the duration of the proof by $\phi_n$, is $1$ if $n$ is even and $0$ if $n$ is odd. We proceed by induction on $n$ starting with the case $n=2$, i.e. $$\phi_2 = \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}$$ where one just observes the determinant is $1$.
Assume then that the claim holds for $n-1 \geq 2$. By taking the Laplace expansion along the first row of $\phi_n$ we see $$\begin{aligned} \det(\phi_{n}) &= (-1)\det(\phi_{n-1}) + (-1)^{n+1}\det(X_n) \\ &= -\det(\phi_{n-1}) + (-1)^{2n} \\ &= -\det(\phi_{n-1}) + 1,\end{aligned}$$ where $X_n$ is the $(n-1) \times (n-1)$-matrix $$X_n = \begin{pmatrix} -1 & -1 & 0 & \cdots & 0 \\ 0 & -1 & -1 & \cdots & 0 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ 0 & 0 & \cdots & -1 & -1 \\ 0 & 0 & \cdots & 0 & -1 \end{pmatrix}.$$ Thus $\det(\phi_{n})$ is $-1 + 1 = 0$ if $n$ is odd and is $0 + 1 = 1$ if $n$ is even. This has the following rather striking consequence. \[thm:phantom\] If $n$ is even then for any $\AA^1$-homotopy invariant $\a1$ we have $$\a1(\sfC_{A_n}) = 0.$$ We know $\a1(\sfC_{A_n}) = \operatorname{cone}(\a1(\Sigma)\a1(\tau^{-1}) - 1)$. By the lemma the map $\a1(\Sigma\tau^{-1}) - 1$ is invertible when $n$ is even and so its cone vanishes. Thus we have had some manner of “$\AA^1$-phantoms” under our noses for some time. As a corollary to the theorem we deduce another surprising fact. Let $\Gamma_{n}$ denote the Ginzburg DG algebra associated to $A_n$ as in [@Ginzburg06]\*[Section 4.2]{}. We can then consider the DG category of perfect complexes over $\Gamma_n$, denoted $\operatorname{\mathsf{Perf}}(\Gamma_n)$, and its full DG subcategory consisting of those complexes with finite dimensional total cohomology, denoted $\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n)$. By [@Amiot09]\*[Corollary 3.12]{} there is a quasi-equivalence $$\operatorname{\mathsf{Perf}}(\Gamma_n) / \operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n) \cong \sfC_{A_n}.$$ Combining this identification of the quotient with \[thm:phantom\] gives the following computation. 
If $n$ is even the inclusion $i\colon\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n) \to \operatorname{\mathsf{Perf}}(\Gamma_n)$ induces, for any $\AA^1$-homotopy invariant $\a1$, an isomorphism $$\a1(i)\colon \a1(\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n)) \stackrel{\sim}{\to} \a1(\Gamma_n).$$ By definition $\a1$ applied to the localization sequence $$\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n) \to \operatorname{\mathsf{Perf}}(\Gamma_n) \to \sfC_{A_n}$$ gives a triangle $$\a1(\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n)) \to \a1(\operatorname{\mathsf{Perf}}(\Gamma_n)) \to \a1(\sfC_{A_n}) \to \Sigma \a1(\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n)).$$ By the theorem the third object in this triangle is trivial, from which the assertion follows immediately. We note that, unlike $\operatorname{\mathsf{Perf}}(\Gamma_n)$ and $\sfC_{A_n}$, the category $\operatorname{\mathsf{Perf}}_{\mathrm{fd}}(\Gamma_n)$ is *not* smooth. Thus the corollary gives an explicit example of a smooth DG category which cannot be distinguished from a non-smooth DG subcategory by any $\AA^1$-homotopy invariant.
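As a concluding sanity check (illustrative only, independent of the inductive proof above), the determinant parity used in the lemma can be verified numerically by building the displayed matrix $-\a1(\tau^{-1}) - 1$ for small $n$:

```python
# Numerical check of the parity claim: det = 1 for even n, 0 for odd n.
from sympy import Matrix

def cluster_phi(n):
    """The n x n matrix -A1(tau^{-1}) - 1 in the coordinates (S_n, ..., S_1)."""
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] += -1                 # the -id part
        if i + 1 < n:
            M[i + 1][i] = -1          # -A1(tau^{-1}) on the subdiagonal
        M[i][n - 1] += 1              # last column, from tau^{-1} S_n = Sigma P_n
    return Matrix(M)

# Even n: phi invertible, so the cluster category is an A1-homotopy phantom.
for n in range(2, 9):
    assert cluster_phi(n).det() == (1 if n % 2 == 0 else 0)
```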
--- abstract: 'In the present contribution, we discuss the behavior of Skyrme forces when they are employed to study both neutron stars and giant resonance states in $^{208}$Pb within the fully self-consistent Random Phase Approximation (RPA). We point out that clear correlations exist between the results for the isoscalar monopole and isovector dipole resonances (ISGMR and IVGDR), and definite quantities which characterize the equation of state (EOS) of uniform matter. We propose that the RPA results or, to some extent, the mentioned EOS parameters, are used as constraints when a force is fitted. This suggestion can be valid also when the fit of a more general energy density functional is envisaged. We use our considerations to select a limited number of Skyrme forces (10) out of a large sample of 78 interactions.' author: - Gianluca Colò title: | Constraints, limits and extensions\ for nuclear energy functionals --- [ address=[Dipartimento di Fisica, Università degli Studi, and INFN, Sez. di Milano, via Celoria 16, 20133 Milano (Italy)]{} ]{} Introduction ============ The quest for an accurate density functional for atomic nuclei lies at the forefront of nuclear structure research. Nuclei are strongly interacting many-body systems. Except for the lightest among them, namely those having mass number $A$ smaller than $\approx$ 10-15 [@Wiringa], trying to include explicitly all correlations in the wavefunction is not feasible. One is obliged to reduce the complexity of the wavefunction and to employ effective interactions $V_{\rm eff}$. For more than three decades, one of the most widely used approaches in nuclear physics has been the so-called self-consistent mean-field (SCMF) approach.
In it, starting from an effective Hamiltonian $H_{\rm eff}=T+V_{\rm eff}$ where $T$ is the kinetic energy, one calculates the total energy $E$ as the expectation value of that Hamiltonian over the most general Slater determinant $\vert\Phi\rangle$, i.e., over the most general one-body density $\rho$ which is compatible with the symmetries of the system under study. This defines an energy density functional $\cal E$, that is, $$\label{general_E} \langle \Phi \vert H_{\rm eff} \vert \Phi \rangle = E[\rho] = \int d^3r\ {\cal E}(\rho).$$ By minimizing the total energy $E[\rho]$, one can derive the nuclear ground-state. In the simplest case, that is, in a system which is not superfluid, this is achieved by means of the Hartree-Fock (HF) equations. Among effective interactions, the zero-range forces of the Skyrme type or the finite-range Gogny interactions are the most popular. Their parameters are fitted on a limited number of known properties (saturation of nuclear matter, energies and radii of a few magic nuclei). The relativistic mean field (RMF) models share the same philosophy, that is, the number of parameters is comparable and they are fitted in a similar way. For a recent review about the mean-field methods, their advantages and the obvious limitations, one can refer to [@Bender]. Recently, many authors have pointed out that the nuclear energy functional can be more general and not necessarily obtained from an effective Hamiltonian. There are groups, around the world, who are working intensively with the aim of writing directly the energy density functional with the most general structure compatible with symmetries, and of fitting the parameters at that level. The rationale behind this is the guarantee that an exact functional exists (provided by the Hohenberg-Kohn theorem). An alternative strategy consists in trying to derive the energy functional from an underlying theory, whether Brückner-Hartree-Fock or Dirac-Brückner.
While these attempts are certainly of paramount importance, they cannot succeed without physical guidance. So, there is still work to be done on the existing functionals, albeit limited in their structure and not derived from an underlying theory. One should

- remove the approximations, if any, which are still present in the calculations for the ground-state and the excited states within the SCMF implementations;

- provide a clear link between functional parameters and observables;

- propose extensions of the existing functionals as much as this appears necessary to account for measured observables.

The basic issue is to define the observables. We mentioned above those associated with the ground-state: the total energy and the density, with the quantities that can be derived like radii, quadrupole moments etc. Their relevance is undeniable, and at the same time they have already been much discussed (see, e.g., [@Pearson] for a discussion of functionals which are fitted to nuclear masses on a large scale). Much less attention is usually paid to excited states. However, there exist states, like the giant resonances, whose properties carry general and relevant nuclear structure information. Consequently, we focus in this contribution on elucidating the links between the giant resonance properties and specific features of the existing Skyrme functionals. We then exploit these links by proposing a selection of Skyrme forces; our first screening is actually based on neutron star properties, following closely the work of Ref. [@Stone]. Our discussion is based on the assumption of a relationship between the parameters of a functional and the results obtained from specific calculations of the excitation modes (in the case under study, the giant resonances). If, as recalled above, the HF equations provide the nuclear ground-state by minimizing the total energy, the corresponding time-dependent (TD) equations describe the oscillations around that minimum.
In the limit of small amplitude, the TDHF equations reduce to the equations of the so-called Random Phase Approximation (RPA). RPA is a suitable theory to describe the nuclear giant resonances (although it cannot account for their spreading width). In self-consistent RPA, the residual interaction is derived from the ground-state mean field. Therefore, the RPA results depend only on the parameters of the effective Hamiltonian. In this way, one is able to link these parameters with the giant resonance properties, or with specific EOS parameters as we discuss below. Within phenomenological RPA, based e.g. on a Woods-Saxon mean field and a residual interaction which is fitted [*ad hoc*]{}, it is impossible to establish these links. Most of the existing nuclei are open-shell systems in which the pairing correlations are active. In this case, the HF framework can be extended and one introduces the Hartree-Fock-Bogoliubov (HFB) one, in which a Slater determinant of independent quasiparticles, instead of independent particles, is assumed. The corresponding linear response theory which describes the small oscillations is the quasiparticle RPA (QRPA). In these approaches, the Skyrme force must be supplemented by a pairing interaction which is not the focus of our present discussion. We point out, however, in what follows, that pairing can have an effect also on high-lying states like giant resonances and impact our discussion of the relationships between the parameters of a Skyrme functional and the EOS quantities. Sketch of the method to solve the (Q)RPA equations ================================================== We have at our disposal a fully self-consistent scheme for both RPA and QRPA. Some results from RPA were first presented in Ref. [@comex2]. Skyrme-RPA theory has been well known for many years, especially in its matrix formulation.
In our scheme, we first solve the HF equations in coordinate space and calculate the unoccupied states by using the resulting mean field and box boundary conditions (which means that the continuum is discretized). We build a basis of particle-hole (p-h) configurations and we diagonalize the associated RPA matrix, by checking carefully that the basis is large enough so that our results are stable. We should also mention that in our scheme there is no approximation in the residual interaction, in that all its terms are taken into account. In the calculations presented below, the box dimension is typically between $\approx$ 3-4 times the size of the nucleus, and unoccupied states up to $\approx$ 60-80 MeV are included in the model space. The extension of our model for open-shell nuclei, in the form of a fully self-consistent QRPA based on HFB, has been presented for the first time in [@Li]. The formalism is analogous to that of Ref. [@Terasaki]. The starting point is the solution of the HFB equations in coordinate space. In this case, the basis for QRPA is built using canonical states. This allows keeping the equations reasonably simple, that is, the part of the QRPA matrix associated with the residual interaction is the same as in the case of BCS (whereas the part associated with the unperturbed Hamiltonian has non-diagonal elements, since the canonical states are not eigenstates of that Hamiltonian). However, the price to be paid is that the canonical basis must be quite large (the energy cutoff is $\approx$ 150-200 MeV, at variance with the RPA case). After solving the RPA or QRPA equations, we obtain the full set of eigenvalues and eigenvectors and from them the strength function $S(E)$ associated with, e.g., the IS monopole or IV dipole operators. The moments of the strength function are defined as $m_k=\int dE\ S(E) E^k$. The usual centroid energy is defined as $E_0=m_1/m_0$. 
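The moment definitions just given can be sketched in a few lines of Python; the strength function below is a toy single-Lorentzian profile whose peak position and width are made-up illustrative numbers, not the result of any RPA calculation.

```python
import numpy as np

# Toy discretized strength function: a single Lorentzian peak; the centroid
# (14 MeV) and width (3 MeV) are illustrative numbers, not a real RPA result.
E = np.linspace(5.0, 25.0, 2001)                      # energy grid in MeV
dE = E[1] - E[0]
E_peak, Gamma = 14.0, 3.0
S = (Gamma / (2.0 * np.pi)) / ((E - E_peak) ** 2 + (Gamma / 2.0) ** 2)

def moment(k):
    """m_k = integral of S(E) E^k dE, approximated on the uniform grid."""
    return float(np.sum(S * E ** k) * dE)

# Centroid energy E_0 = m_1 / m_0; for a narrow, nearly symmetric peak it
# lands close to the peak position.
E0 = moment(1) / moment(0)
```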
In certain cases, and also in the discussion below, other definitions like $E_{-1}=\sqrt{m_1/m_{-1}}$ (sometimes called the constrained energy) are used. Obviously, all possible well-defined centroid energies coincide if the strength has a single, symmetric peak. In cases like the IS quadrupole, where there is a giant resonance but also a low-lying peak, centroid energies must be defined in a limited energy interval. We should point out that the quantity $E_{-1}=\sqrt{m_1/m_{-1}}$ can be obtained without resorting to a full QRPA calculation, at least in the nonrelativistic framework. In fact, $m_1$ can be obtained from the Thouless theorem and $m_{-1}$ from the dielectric theorem. These theorems, which have been known for a long time in the non-superfluid case, have recently been demonstrated in the case with pairing - that is, in the case of self-consistent QRPA on top of HFB [@theorems]. Reminder of the relevant EOS parameters ======================================= In Eq. (\[general\_E\]) the energy density functional has been written in a schematic, oversimplified form. In fact, for systems that are not symmetric in neutrons and protons, the total energy must depend on both neutron and proton densities ($\rho_q$, where $q$ labels $n,p$). Moreover, in finite nuclei a local functional also depends on gradients of the densities, $\nabla\rho_q$, on kinetic energy densities, $\tau_q$, and on the so-called spin-orbit densities $J_q$ (for details, see Ref. [@Bender]). In uniform matter, only the dependence on spatial densities shows up. Instead of $\rho_n$ and $\rho_p$, one can employ as variables the total density $\rho$ and the [*local*]{} neutron-proton asymmetry, $\delta \equiv \left( \rho_n-\rho_p \right) / \rho$ (this quantity should not be confused with the [*global*]{} asymmetry $(N-Z)/A$).
In uniform asymmetric matter, we can further simplify ${\cal E}(\rho,\delta)$ by making a Taylor expansion in $\delta$ and retaining only the quadratic term, $$\label{def_sym} {\cal E}(\rho,\delta) \approx {\cal E}_0(\rho,\delta=0) + {\cal E}_{\rm sym}(\rho) \delta^2 = {\cal E}_0(\rho,\delta=0) + \rho S(\rho) \delta^2.$$ It has been checked that the quartic term is negligible, for Skyrme functionals, at the densities of interest for our discussion [@Trippa1; @Trippa2]. The first term at the r.h.s. of Eq. (\[def\_sym\]) is the energy density of symmetric nuclear matter; its minimum in the energy per particle $E/A={\cal E}/\rho$ is well known and is used when functionals are fitted. The curvature around this minimum is simply related to the nuclear matter incompressibility, which reads $$\label{K} K_\infty = 9\rho_0^2 {d^2\over d\rho^2}{{\cal E}_0\over \rho}\vert_{\rho=\rho_0}.$$ The second term at the r.h.s. of Eq. (\[def\_sym\]) defines the symmetry energy $S(\rho)$. This quantity, and in particular its density dependence, is presently much debated, and different contributions in the present volume deal with its determination, in keeping with its ubiquitous relevance in nuclear structure, heavy-ion reactions, and nuclear astrophysics. The density dependence of the symmetry energy around the saturation density $\rho_0$ of symmetric nuclear matter can be expressed by means of $$S(\rho) = S(\rho_0)+S^\prime(\rho_0)(\rho-\rho_0) +{1\over 2}S^{\prime\prime}(\rho_0)(\rho-\rho_0)^2+\ldots$$ Usually, one defines $S^\prime(\rho_0) = L / 3\rho_0$ and $S^{\prime\prime}(\rho_0) = K_{\rm sym} / 9\rho_0^2$; in fact, if $x={\rho-\rho_0\over 3\rho_0}$, $L$ and $K_{\rm sym}$ are respectively ${dS\over dx}$ and ${d^2S\over dx^2}$. Starting from a different point of view, we can relate the ISGMR energy $E_{ISGMR}$ in a given nucleus to the so-called finite nucleus incompressibility $K_{\rm A}$ which has been introduced in Ref.
[@Blaizot], $$K_{\rm A}={m \langle r^2 \rangle_0 E^2_{ISGMR}\over \hbar^2}$$ (where $m$ is the nucleon mass and $\langle r^2 \rangle_0$ is the ground-state expectation value). The interest of this quantity stems from the fact that if we consider a local functional like Skyrme written for a spherical system, and we calculate its second derivative around the minimum using various simplifying hypotheses, the main one being the use of the so-called “scaling model”, we can write $K_{\rm A}$ in a form analogous to that of the mass formula, namely [@Blaizot] $$\label{likemass} K_{\rm A} = K_\infty + K_{\rm surf} A^{-1/3} + K_{\tau} \left( {N-Z\over A} \right)^2 + K_{\rm Coul}{Z^2\over A^{4/3}}.$$ Moreover, we can show that $$\label{ktau} K_{\tau} = K_{\rm sym} + 3L - {27\rho_0^2 L\over K_\infty}{d^3{\cal E}\over d\rho^3}\vert_{\rho_0}.$$ The last formula shows that a constraint on $K_\tau$ is directly reflected in the parameters which are associated with the density dependence of the symmetry energy, $L$ and $K_{\rm sym}$. Skyrme sets applied to neutron stars and to giant resonances ============================================================ The usual complaint concerning Skyrme functionals is that there exist too many different parametrizations, and it is true that probably around 100 or more parameter sets have been introduced in the literature. It should be stressed that not all of them are to be put on the same footing: whereas some sets have been used for a long time after they were first proposed, other sets are “marginal” in the sense that they have been fitted for very specific purposes and adopted only in one or a few applications. We wish to propose a strategy to limit the number of Skyrme parameter sets to be considered “reasonable”; the hope is that the criteria we propose, possibly improved, can be used when the fitting of a universal functional [@UNEDF] is envisaged.
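To make the definitions above concrete, the sketch below first evaluates $L$ and $K_{\rm sym}$ numerically from an assumed power-law symmetry energy, and then evaluates the mass-formula-like expansion of $K_{\rm A}$ for $^{208}$Pb. The parametrization $S(\rho)=S_0(\rho/\rho_0)^\gamma$, the coefficient values (including $K_{\rm surf}\approx -K_\infty$, a simplifying assumption quoted later in the text, and a typical $K_{\rm Coul}$), and the mean square radius are all assumptions chosen for demonstration, not results of this work:

```python
import math

# --- Part 1: L and K_sym from a toy symmetry energy S(rho) = S0*(rho/rho0)**gamma.
rho0, S0, gamma = 0.16, 32.0, 0.7   # fm^-3, MeV, dimensionless (assumed values)

def S(rho):
    return S0 * (rho / rho0) ** gamma

h = 1e-4                                               # finite-difference step
S1 = (S(rho0 + h) - S(rho0 - h)) / (2 * h)             # S'(rho0)
S2 = (S(rho0 + h) - 2 * S(rho0) + S(rho0 - h)) / h**2  # S''(rho0)
L = 3 * rho0 * S1          # from S'(rho0) = L / (3 rho0)
K_sym = 9 * rho0 ** 2 * S2 # from S''(rho0) = K_sym / (9 rho0^2)

# --- Part 2: the mass-formula-like expansion of K_A for 208Pb; K_tau is
# (minus) the magnitude deduced from the Sn data discussed later in the text.
A, Z = 208, 82
N = A - Z
K_inf, K_surf, K_tau, K_Coul = 240.0, -240.0, -550.0, -5.2   # MeV (assumed)
K_A = (K_inf + K_surf * A ** (-1 / 3)
       + K_tau * ((N - Z) / A) ** 2 + K_Coul * Z ** 2 / A ** (4 / 3))

# Invert the defining relation: E_ISGMR = sqrt(hbar^2 K_A / (m <r^2>_0))
hbar2_over_m = 41.44          # MeV fm^2, approximately hbar^2/m for a nucleon
r2 = 5.5 ** 2                 # fm^2, assumed ground-state <r^2>_0 for 208Pb
E_ISGMR = math.sqrt(hbar2_over_m * K_A / r2)
```

With these illustrative inputs the resulting monopole energy lands in the vicinity of the experimental value quoted below for $^{208}$Pb.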
The strategy we propose is applied to an ensemble of 78 Skyrme forces which is certainly large enough to demonstrate the effectiveness of the method. It is impossible to recover the parameters of all Skyrme sets ever introduced, and this effort would be meaningless since in principle many more sets can be produced. We start from the work already done in Ref. [@Stone]: here, the authors consider as a starting point an ensemble of 87 forces which can be considered, quoting their words, “a representative sample of the Skyrme interactions used in the nuclear physics applications”. These forces are reported in Table I of [@Stone] and the reader can find there the original references, which we do not report here for the sake of brevity. Our starting sample is rather similar: we exclude, compared to [@Stone], the sets SLy0, SLy1, SLy2, SLy3, SLy8, SLy9 (they are unpublished), SLy6, SLy7, SLy10 (they include the two-body center-of-mass correction), SkI1, SkI4, SkI6, SkO (they do not lead easily to convergent results), and we add the four forces Ska [@Kohler], SK255, SK272 [@Shlomo] and LNS [@Cao]. We apply the following criteria: 1. We select, among the 78 forces, those which have an overall satisfactory behavior as far as the density dependence of the symmetry energy in the range 0.1$\le \rho \le$0.6 fm$^{-3}$ is concerned (we follow closely Ref. [@Stone] for this point). 2. For these forces we calculate the IVGDR in $^{208}$Pb and we make a further selection, by demanding that the forces reproduce the experimental value $E_{-1}$ = 13.46 MeV [@Dietrich] within $\pm$ 1 MeV. 3. Finally, we also demand that the selected interactions reproduce the experimental value of $E_{-1}$ = 14.17 MeV for the ISGMR in $^{208}$Pb [@Youngblood], with the same accuracy of $\pm$ 1 MeV. The Skyrme sets which have “survived” this kind of selection will be listed at the end of our discussion. ![For the four forces which label the panels (they are Skyrme sets which have not been studied in Ref.
[@Stone]), we display the energy per particle in symmetric nuclear matter (full line) and in pure neutron matter (dashed line). Because these quantities increase markedly as a function of the density, and the dashed curve never falls below the full one, these forces belong to the group I defined in [@Stone] and lead to qualitatively correct neutron star properties.[]{data-label="figure_esym"}](fig1_FRIB.eps){height=".35\textheight"} Neutron stars and the overall behavior of the symmetry energy ------------------------------------------------------------- In Ref. [@Stone] it has been tested whether Skyrme forces predict plausible neutron star properties. The Tolman-Oppenheimer-Volkoff (TOV) equation [@originalTOV; @textbookTOV] has been solved, coupled with the Skyrme EOS (supplemented by appropriate corrections for low densities), and the mass-radius relationship associated with a given parameter set has been given. The result is that Skyrme sets can be divided into three groups. Only the sets of group I reproduce the expected qualitative relationship: this has been found to be strictly related to the fact that the energy per particle increases quickly, as a function of density, both in symmetric nuclear matter and in pure neutron matter, with the latter always characterized by a larger energy with respect to the former. We have checked the performance of the four sets that we have decided to add to the starting sample compared with [@Stone]: in Fig. \[figure\_esym\] we have displayed the mentioned quantities, so that it is clear that these interactions obey the conditions which are sufficient to be included in the group I of the satisfactory parameter sets. Since the energy in pure neutron matter equals the energy in symmetric nuclear matter plus $S$, we can say that in the proposed method of selection the overall behavior of the symmetry energy plays a key role.
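The TOV integration behind a mass-radius curve of this kind can be sketched in a few lines. Here a toy polytropic EOS in geometrized units ($G=c=1$) stands in for the Skyrme EOS plus low-density corrections used in the quoted study, and all constants are illustrative assumptions:

```python
import math

# Toy EOS: p = K_poly * eps**Gamma, inverted to eps(p); an assumed stand-in
# for a realistic nuclear EOS.
K_poly, Gamma = 100.0, 2.0          # assumed polytrope constants

def eps_of_p(p):
    return (p / K_poly) ** (1.0 / Gamma)

def tov_rhs(r, p, m):
    """Right-hand sides dp/dr and dm/dr of the TOV equations."""
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * math.pi * r ** 3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r ** 2 * eps
    return dpdr, dmdr

def mass_radius(p_c, dr=1e-3):
    """Simple Euler integration from the center out to the surface (p -> 0)."""
    r, p, m = dr, p_c, 0.0
    while p > 1e-6 * p_c:
        dpdr, dmdr = tov_rhs(r, p, m)
        if p + dpdr * dr <= 0.0:    # guard against overshooting the surface
            break
        p += dpdr * dr
        m += dmdr * dr
        r += dr
    return r, m                     # stellar radius and gravitational mass

R, M = mass_radius(p_c=1.64e-4)     # one point of the mass-radius relationship
```

Scanning the central pressure `p_c` then traces out the mass-radius relationship whose qualitative shape separates the three groups of Ref. [@Stone].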
In conclusion, after the selection we end up with 18 forces (Gs, Rs, SGI, SLy230a, SLy4, SLy5, SV, SkI2, SkI3, SkI5, SkMP, SkO$^\prime$, SkT4, SkT5, Ska, SK255, SK272, LNS). ![In the left panel, the dipole energy is displayed as a function of the quantity $f(0.1)$ defined in the text. The thin line is a linear fit while the horizontal full lines correspond to the experimental energy $\pm$ 1 MeV. In the right panel, the constraint on $S(0.1)$ extracted from the dipole is displayed with a black square and an associated error bar. The open circle (also with error bar) shows the constraint from Ref. [@Klimkiewicz]. The two thick lines, and the two lines which join small triangles, are bounds (upper and lower) for the symmetry energy $S(\rho)$ coming from the studies respectively of Refs. [@Tsang] and [@BaoAnLi].[]{data-label="figure_dipole"}](fig2_FRIB.eps){height=".35\textheight"} The IVGDR and the related constraint on the symmetry energy ----------------------------------------------------------- With the 18 selected forces we have calculated the energy of the IVGDR in $^{208}$Pb. The results are displayed in the left panel of Fig. \[figure\_dipole\], where the energy $E_{-1}$ (cf. above) appears on the y-axis. On the x-axis the quantity $$f(0.1) \equiv \sqrt{S(0.1)(1+\kappa)}$$ is shown, where $S(0.1)$ is the symmetry energy evaluated at $\rho$=0.1 fm$^{-3}$ and $\kappa$ is the so-called enhancement factor of the IV dipole sum rule (with respect to the classical Thomas-Reiche-Kuhn sum rule). The quantity $f(0.1)$ has already been defined in [@Trippa1], where physical arguments have been provided to justify a correlation between $f(0.1)$ and the dipole energy $E_{-1}$. We do not repeat these arguments here. We notice that the correlation is visible in the left panel of Fig. \[figure\_dipole\], as shown by the thin line which corresponds to a linear fit.
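The constraint extraction described below works by a least-squares fit of $E_{-1}$ against $f(0.1)$ followed by inversion of the experimental window. A sketch with made-up points (the real ones come from the 18 selected forces and are not reproduced here):

```python
# Illustrative (f(0.1), E_-1) pairs; NOT the actual values of the 18 forces.
f_vals = [5.0, 5.2, 5.4, 5.6, 5.8]
E_vals = [12.9, 13.2, 13.5, 13.8, 14.1]

n = len(f_vals)
f_mean = sum(f_vals) / n
E_mean = sum(E_vals) / n
slope = (sum((f - f_mean) * (E - E_mean) for f, E in zip(f_vals, E_vals))
         / sum((f - f_mean) ** 2 for f in f_vals))
intercept = E_mean - slope * f_mean

# Map the experimental window E_exp +/- 1 MeV onto an allowed range of f(0.1)
E_exp = 13.46                       # MeV, the datum of [@Dietrich]
f_lo = (E_exp - 1.0 - intercept) / slope
f_hi = (E_exp + 1.0 - intercept) / slope
```

The interval `[f_lo, f_hi]` is then translated into a range for $S(0.1)$ once a value of $\kappa$ is assumed, which is the step discussed next.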
The two horizontal lines correspond to the experimental dipole energy $\pm$ 1 MeV: the results associated with 12 forces (Gs, Rs, SGI, SLy230a, SLy4, SLy5, SkI3, SkMP, SkO$^\prime$, SK255, SK272 and LNS) lie within those lines, and these forces are selected for further considerations. In Ref. [@Trippa1] the correlation between the dipole energy and $f(0.1)$ has been used to extract a constraint on this latter quantity. Unfortunately, $f$ contains at the same time the symmetry energy as well as $\kappa$, and we lack an unambiguous experimental determination of the dipole enhancement factor. If one introduces an acceptable range for $\kappa$, between 0.18 and 0.26, $S(0.1)$ is constrained in the interval 24.1$\pm$0.8 MeV. However, we have verified that in the ensemble of forces used in the present work (which includes some forces that were not considered in [@Trippa1]), there are some which do reproduce the experimental IVGDR energy while having $S(0.1)$ outside the range of 24.1$\pm$0.8 MeV. To account for this, we have displayed in the right panel of Fig. \[figure\_dipole\] the point corresponding to the dipole constraint with a larger error bar, numerically equal to $\pm$ 3$\sigma$ (that is, 24.1$\pm$2.4 MeV). In the same panel, results from other groups are reported. In Ref. [@Klimkiewicz] the data on the Pygmy Dipole Resonance (PDR) obtained at GSI, Darmstadt have been compared with RMF calculations and a range of values for $J$ has been extracted, which is shown in the panel with an open circle with its associated error bar ($J$=32$\pm$1.8 MeV). In Ref. [@Danielewicz], a thorough analysis of nuclear surface symmetry energies has been carried out. This analysis led to $J$=32.5$\pm$1 MeV which is compatible with the previous value. We do not display the corresponding point in the figure, just to keep it reasonably clear and readable. We would also like to discuss, very briefly, the comparison with the constraints coming from studies of heavy-ion collisions.
Data on isospin diffusion following the $^{112}$Sn-$^{124}$Sn reaction have been analyzed using transport models, in particular the Improved Quantum Molecular Dynamics model (I) in [@Tsang] and the IBUU04 version of the Boltzmann-Uehling-Uhlenbeck (BUU) model (II) in [@BaoAnLi]. We are not in the position to discuss merits and pitfalls of these models (for which we refer the reader to the original references). However, for each model we draw two curves which correspond to acceptable upper and lower limits. These are the full thick lines in the case of model (I) and the thin lines joining the triangles in the case of model (II). In principle, other observables can be very effective to constrain the symmetry energy and its density dependence. We mention the sum rules of charge-exchange excitations [@Sagawa], which have so far been measured with insufficient precision, and the neutron radius of $^{208}$Pb [@Piekarewicz], which should be very accurately determined by the PREX experiment. ![The monopole energy is displayed as a function of the nuclear matter incompressibility $K_\infty$ (cf. Eq. (\[K\])). The box defines a region which corresponds to $K_\infty$=240$\pm$20 MeV [@monopole_rev], and to a monopole energy which is within $\pm$ 1 MeV with respect to the experimental value. See the discussion in the main text.[]{data-label="figure_monopole"}](fig3_FRIB.eps){height=".35\textheight"} The ISGMR and the constraint on the nuclear incompressibility ------------------------------------------------------------- With the 12 forces mentioned in the previous subsection, we have calculated the ISGMR in $^{208}$Pb. The result for the energy $E_{-1}$ is compared with the experimental finding, namely 14.17 MeV [@Youngblood]. Almost all the forces that have been selected through the calculation of the IVGDR reproduce the ISGMR energy within $\pm$ 1 MeV.
Only SGI and SkI3 must be rejected, and we end up with a set of 10 forces, that is, Gs, Rs, SLy230a, SLy4, SLy5, SkMP, SkO$^\prime$, SK255, SK272 and LNS. The results are shown in Fig. \[figure\_monopole\], as a function of the associated value of the nuclear matter incompressibility $K_\infty$ (cf. Eq. (\[K\])). The thin line corresponds to a linear fit. The open box defines a region which corresponds to $K_\infty$=240$\pm$20 MeV, and to a monopole energy which is within $\pm$ 1 MeV with respect to the experimental value. The preferred range given by $K_\infty$=240$\pm$20 MeV has been extensively discussed in [@monopole_rev], and it actually comes from an analysis which includes not only Skyrme forces but a careful confrontation with results obtained with Gogny interactions and RMF parametrizations as well, so that we believe its validity is rather general. As can also be seen in Fig. \[figure\_monopole\], almost all the forces which reasonably reproduce the monopole energy have an associated value of the incompressibility in that range (there are only two exceptions). ![In the upper panel, the experimental results for the monopole centroid energies from [@Youngblood2004] and [@Li2007] are displayed. They are compared with the RPA and QRPA results of Ref. [@Li]. In the lower panel, the QRPA strength distribution is shown in detail for the case of $^{120}$Sn. The discrete QRPA results have been smeared out with Lorentzians having 1 MeV width, only for illustrative purposes. The arrow indicates the position of the experimental centroid from the upper panel.[]{data-label="figure_qrpa"}](fig5_FRIB.eps){height=".30\textheight"} Remarks on the ISGMR in open-shell isotopes ------------------------------------------- As a side remark, albeit an important one, we discuss in this subsection the ISGMR in the Sn isotopes. In Ref.
[@Piekarewicz], the question has been raised “why is tin so soft” or, in other words, why do theoretical models (with a value of the incompressibility $K_\infty$ in the quoted range $K_\infty$=240$\pm$20 MeV) that reproduce the values of the ISGMR energy in $^{208}$Pb as well as in $^{90}$Zr tend to overestimate this energy in Sn isotopes? These are semi-magic nuclei and neutrons are superfluid. Answering the question above implies, among other things, a serious assessment of the effect of pairing correlations on the ISGMR. This has motivated the work of Ref. [@Li], where a fully self-consistent QRPA based on HFB has been applied to the study of the monopole strength distribution in the Sn isotopes. The Skyrme force has been supplemented with an effective, zero-range, density-dependent pairing force. Three kinds of pairing forces have been tested, namely volume, surface, and mixed pairing forces. Writing the pairing force as $$V_{\rm pair}(\textbf{r}_{1},\textbf{r}_{2})=V_{0}\left[1-\eta \left(\frac{\rho(\frac{\textbf{r}_{1}+\textbf{r}_{2}}{2})}{\rho_{0}} \right) \right]\delta(\textbf{r}_{1}-\textbf{r}_{2}),$$ the three kinds of pairing correspond respectively to $\eta$=0, 1 and 1/2. $\rho_{0}$ is fixed at 0.16 fm$^{-3}$ and the values of $V_{0}$ (for every kind of pairing but also for every Skyrme parameter set employed) are fixed by fitting the empirical value of the mean neutron gap of $^{120}$Sn. Pairing is treated consistently in HFB and QRPA, and also the (small) pairing rearrangement terms have been analyzed. The reader can consult [@Li] for further details. Some of the main results of that investigation are in Fig. \[figure\_qrpa\]. Looking at the upper panel, one can notice a systematic shift downwards of the QRPA results with respect to RPA. This shift is due to the attractive monopole pairing matrix elements.
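The three choices of $\eta$ give very different spatial profiles of the pairing strength, which a minimal sketch of the density-dependent factor multiplying $V_0$ makes explicit:

```python
rho_sat = 0.16   # fm^-3, the value at which rho_0 is fixed in the text

def pairing_factor(rho, eta):
    """Factor 1 - eta * rho/rho_0 multiplying V0 in the zero-range force."""
    return 1.0 - eta * rho / rho_sat

# eta = 0 (volume), 1/2 (mixed), 1 (surface); evaluated from the exterior
# (rho = 0) through half saturation to the saturated interior (rho = rho_0).
profiles = {name: [pairing_factor(rho, eta) for rho in (0.0, 0.08, 0.16)]
            for name, eta in [("volume", 0.0), ("mixed", 0.5), ("surface", 1.0)]}
```

The surface force vanishes in the saturated interior and peaks where the density drops, whereas the volume force is density independent; the mixed choice interpolates between the two.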
It is not constant along the isotope chain, but it tends to decrease with increasing $N-Z$: this effect has been explained in [@Li] as a consequence of the level occupancies. In the lower panel of Fig. \[figure\_qrpa\] we show the QRPA monopole strength distribution in a typical case. This particular set of results corresponds to the force SkM$^*$ plus surface pairing. Since SkM$^*$ has an associated value of $K_\infty$ given by 217 MeV, looking at $^{112-120}$Sn one would conclude that the values of the nuclear matter incompressibility extracted from the Sn isotopes and from $^{208}$Pb differ by about 10%. Of course, this points to our still incomplete understanding of the details of the nuclear effective functionals - but the puzzle would be greater if the pairing contribution had been overlooked. Two further considerations are in order. Firstly, in the two cases of $^{112}$Sn and $^{124}$Sn, the results of the two experimental groups disagree quite seriously, but understanding the reasons for this discrepancy is in progress [@Garg]. Secondly, the main motivation at the basis of the experiment reported in [@Li2007] was the extraction of the parameter $K_\tau$ defined by Eq. (\[likemass\]): in fact, the trend of the experimental data, plus the simplifying assumption which consists in setting $K_{\rm surf}\approx -K_\infty$, have allowed the extraction of $K_\tau$ from the data. The simplifying assumption is consistent with simple, geometrical arguments (the same which apply to the liquid-drop mass formula). The coefficient $K_{\rm Coul}$ can be calculated. In this way, a value of $K_\tau$ equal to 550 $\pm$ 100 MeV has been deduced. ![Values of $K_\infty$ and $K_\tau$ associated with the 10 Skyrme forces which have been selected.
The relevance of the shaded box is discussed in the main text.[]{data-label="figure_ktau"}](fig4_FRIB.eps){height=".35\textheight"} Based on this result, one would be tempted to set another constraint on the Skyrme forces: that is, to select those with the empirical value of $K_\tau$. However, a strong warning is appropriate here. Comparing $K_\tau$ from the data with the result of Eq. (\[ktau\]) neglects the fact that this latter equation does not include any surface-symmetry contribution. We have plotted in Fig. \[figure\_ktau\] the values of $K_\infty$ and $K_\tau$ of the 10 selected forces which have been listed at the end of the previous subsection. The shaded box defines the intersection of the constraints on the quantities. Whereas the one on $K_\infty$ has been claimed to be robust, the one on $K_\tau$ suffers from the mentioned drawback. There is a clear tendency of Skyrme forces to predict values of $K_\tau$ which are smaller (in absolute value) than 450 MeV. This remains true if a larger sample of Skyrme parametrizations is considered. Conclusions =========== While part of the nuclear structure community is striving to construct a universal, accurate energy density functional, many calculations are still performed with e.g. Skyrme forces which should probably be rejected. In fact, there is no consensus on which properties should necessarily be reproduced by a mean-field calculation with an effective force, and what pitfalls should be tolerated. It is clear that many Skyrme interactions have been built with an eye on very specific applications and should not be used systematically. In this paper, we focus on the performance of Skyrme forces when they are applied to the study of giant resonances, in particular the ISGMR and IVGDR. We do not calculate these modes by using a huge ensemble of Skyrme parameter sets, but we use, for the purpose of screening, the results of Ref.
[@Stone], that is, we demand first that the overall behavior of the energy per particle, both in symmetric uniform matter and in neutron matter, is reasonable in the sense defined in the quoted work. The results for the ISGMR and IVGDR in $^{208}$Pb are claimed to be a valid constraint to be imposed on existing forces as well as on envisaged new functionals. This is in keeping with the fact that we have shown that the constraints can be translated into conditions on physical parameters which characterize the nuclear EOS, like the nuclear matter incompressibility and the symmetry energy at sub-saturation density. We have also demonstrated that it is not straightforward to trivially extend the considerations made for e.g. $^{208}$Pb to the case of open-shell nuclei. In the case of the ISGMR, in particular, we have elucidated the role played by the pairing correlations. The ISGQR could be considered as an input for our considerations as well, whereas in the case of spin and spin-isospin modes probably extensions of the effective forces should be envisaged (besides other reasons, to avoid instabilities of uniform matter in the spin and spin-isospin channels [@Margueron]). Many of the results reported here have been obtained through collaborations with colleagues and students. In particular, the author would like to thank L. Capelli, J. Li, J. Meng, L. Trippa, E. Vigezzi. Discussions with U. Garg about the data and the analysis of Ref. [@Li2007] are gratefully acknowledged. Thanks are also due to B. Tsang and B.A. Li for providing the author with the data displayed in Fig. \[figure\_dipole\], and for clarifications about the issue of the symmetry energy extracted from the study of heavy-ion collisions. The author expresses special thanks to P. Danielewicz for warning against a strict comparison of $K_\tau$ from data and from Eq. (\[ktau\]). [9]{} See the contribution of R. Wiringa in this volume. M. Bender, P. H. Heenen, P. G. Reinhard, *Rev. Mod. Phys.* **75**, 121 (2003).
See the contribution of M. Pearson in this volume. J. Rikovska Stone, J. C. Miller, R. Koncewicz, M. D. Strayer, *Phys. Rev. C* **68**, 034324 (2003). G. Colò, P. F. Bortignon, S. Fracasso, N. Van Giai, *Nucl. Phys. A* **788**, 173 (2007). J. Li, G. Colò, J. Meng, *Phys. Rev. C* **78**, 064304 (2007). J. Terasaki, J. Engel, M. Bender, J. Dobaczewski, W. Nazarewicz, and M. Stoitsov, *Phys. Rev. C* **71**, 034310 (2005). E. Khan, N. Sandulescu, M. Grasso, and N. Van Giai, *Phys. Rev. C* **66**, 024309 (2002); L. Capelli, G. Colò, J. Li, *Phys. Rev. C*, submitted. L. Trippa, G. Colò, E. Vigezzi, *Phys. Rev. C* **77**, 061304(R) (2008). L. Trippa, M.Sc. thesis, University of Milano (unpublished). J. P. Blaizot, *Phys. Rep.* **64**, 171 (1980). `http://www.unedf.org`. H. S. Köhler, *Nucl. Phys. A* **258**, 301 (1976). B. K. Agrawal, S. Shlomo, and V. Kim Au, *Phys. Rev. C* **68**, 031304 (2003). L. G. Cao, U. Lombardo, C. W. Shen, and N. Van Giai, *Phys. Rev. C* **73**, 014313 (2006). S. S. Dietrich and B. L. Berman, *At. Data Nucl. Data Tables* **38**, 199 (1988). D. Youngblood, H. L. Clark, and Y. W. Lui, *Phys. Rev. Lett.* **82**, 691 (1999). R. C. Tolman, *Proc. Natl. Acad. Sci. U.S.A.* **20**, 3 (1943); J. R. Oppenheimer and G. M. Volkoff, *Phys. Rev.* **55**, 374 (1939). S. L. Shapiro, S. A. Teukolsky, *Black holes, white dwarfs, and neutron stars*, John Wiley & Sons, New York, 1983, pp. 125 and 241. A. Klimkiewicz [*et al.*]{}, *Phys. Rev. C* **76**, 051603(R) (2007). P. Danielewicz, J. Lee, `arXiv:0811.3107`. M. B. Tsang, Y. Zhang, P. Danielewicz, M. Famiano, Z. Li, W. G. Lynch, A. W. Steiner, Phys. Rev. Lett. (in press). L. W. Chen, C. M. Ko, and B. A. Li, *Phys. Rev. Lett.* **94**, 032701 (2005). H. Sagawa, S. Yoshida, X.-R. Zhou, K. Yako, and H. Sakai, *Phys. Rev. C* **76**, 024301 (2007). C. J. Horowitz, J. Piekarewicz, *Phys. Rev. Lett.* **86**, 5647 (2001). S. Shlomo, V. M. Kolomietz, and G. Colò, *Eur. Phys. J. A* **30**, 23 (2006); G. 
Colò, *Physics of Elementary Particles and Atomic Nuclei (PEPAN)* **39**, 286 (2008). J. Piekarewicz, *Phys. Rev. C* **76**, 031301(R) (2007). D. H. Youngblood, Y.-W. Lui, H. L. Clark, B. John, Y. Tokimoto, and X. Chen, *Phys. Rev. C* **69**, 034315 (2004); Y.-W. Lui, D. H. Youngblood, Y. Tokimoto, H. L. Clark, and B. John, *Phys. Rev. C* **70**, 014307 (2004). T. Li [*et al.*]{}, *Phys. Rev. Lett.* **99**, 162503 (2007). U. Garg (private communication). H. Sagawa, J. Margueron (to be published).
--- abstract: 'We propose a general scattering matrix formalism that guarantees the charge conservation at junctions between conducting arms with arbitrary spin interactions. By using our formalism, we find that the spin-flip scattering can happen even at nonmagnetic junctions if the spin eigenstates in arms are not orthogonal. We apply our formalism to the Aharonov-Bohm interferometer consisting of $n$-type semiconductor ring with both the Rashba spin-orbit coupling and the Zeeman splitting. We discuss the characteristics of the interferometer as conditional/unconditional spin switch in the weak/strong-coupling limit, respectively.' author: - Minchul Lee - Dimitrije Stepanenko title: 'Current-Conserving Aharonov-Bohm Interferometry with Arbitrary Spin Interactions' --- Introduction ============ Coherent electronic transport through mesoscopic rings or structures with non-trivial geometries has been extensively investigated both theoretically [@Buttiker1984oct; @Loss1990sep; @Loss1992jun; @Yi1997apr; @Romer2000sep; @KangK2000dec; @Frustaglia2001nov; @Meijer2002jul; @Hentschel2004apr; @Frustaglia2004jun; @Wang2005oct; @LeeMC2006feb; @Lucignano2007jul; @Kovalev2007sep; @Pletyukhov2008may; @Borunda2008dec; @Stepanenko2009jun] and experimentally [@Umbach1984oct; @Webb1985jun; @Bergsten2006nov; @Habib2007apr; @Grbic2007oct; @Qu2011jun] in recent decades. The studies have aimed at exploring theoretically the quantum interference in solid-state circuits and also revolutionizing electronic devices in such a way as to exploit the quantum effects. At the heart of studies of mesoscopic rings, there are two hallmarks of quantum coherence: the Aharonov-Bohm [@Aharonov1959aug] (AB) and Aharonov-Casher [@Aharonov1984jul] (AC) effects. The two effects are related to geometric phases due to the coupling of a charge to a magnetic flux and of a spin degree of freedom to an electric field via spin-orbit coupling (SOC), respectively.
Since the AB oscillation in conductance through normal-metal rings was revealed, [@Buttiker1984oct] it has been found that the effects can lead to diverse quantum interference effects such as conductance fluctuations, [@Umbach1984oct] persistent charge and spin current, [@Loss1990sep; @Loss1992jun] AB effect for exciton, [@Romer2000sep] mesoscopic Kondo effect, [@KangK2000dec] spin switch, [@Frustaglia2001nov; @Frustaglia2004jun] spin filter, [@LeeMC2006feb] and spin Hall effect. [@Borunda2008dec] From a practical point of view, the quantum coherent phenomena in mesoscopic rings, especially those using the spin degrees of freedom, have been applied to the fast growing field of spintronics [@Wolf2001nov; @Zutic2004apr] and are now known to provide easy-to-control devices that generate, manipulate, and detect the spin-dependent current or signal. Mesoscopic rings fabricated in semiconductors offer an intriguing possibility to study the AB and AC effects simultaneously because of the spin-orbit coupling naturally present in crystals. The spin-orbit coupling itself can have various forms in different materials, leading to diverse current oscillations.[@Frustaglia2004jun; @Habib2007apr; @Grbic2007oct] In addition, the strength of the spin-orbit coupling can be controlled by tuning a backgate voltage applied to the device. [@Nitta1997feb] Among spin-orbit couplings, the Rashba SOC, originating from the broken structural inversion symmetry, is linear in momentum and easy to analyze. The studies of spin interference [@Meijer2002jul; @Frustaglia2004jun] subject to the Rashba SOC have shown that the Rashba coupling strength can modulate the unpolarized current, suggesting the possibility of all-electrical spintronic devices. Recently, a number of experimental [@Habib2007apr; @Grbic2007oct] and theoretical [@Kovalev2007sep; @Borunda2008dec; @Stepanenko2009jun] studies have investigated transport of heavy holes in rings, whose SOC is cubic in momentum.
In the presence of external magnetic fields, the Zeeman splitting is operative together with the SOCs and its effect should be taken into account. [@Yi1997apr; @Frustaglia2001nov; @Hentschel2004apr; @Wang2005oct; @Lucignano2007jul] The general framework for the theoretical studies of the mesoscopic transport relies on the Landauer approach, [@Datta1995] in which transport is described as tunneling through conduction modes between the source and drain electrodes coupled to rings. In the coherent regime, the tunneling is accompanied by interference between conduction modes. The conductance is then described by a scattering matrix between modes that carry distinct phases. Due to interference, the fact that each of the modes is charge conserving does not guarantee charge conservation in the total transport. More specifically, a problem arises when the ring modes form nonorthogonal spin textures; the spins of the states with the same energy are not orthogonal at every point. Such a complexity does not arise if the ring has either a linear-in-momentum SOC (like the Rashba SOC) or the Zeeman splitting alone, because one can then diagonalize the system Hamiltonian such that no mode mixing takes place. However, in realistic situations with an arbitrary form of SOC and Zeeman splitting, the mode mixing, or spin mixing, naturally exists. The previous studies considering both the effect of the Rashba SOC and the Zeeman splitting have dealt with this situation by using the transfer-matrix method accompanied by wave function matching, [@Yi1997apr; @Frustaglia2001nov; @Hentschel2004apr] a perturbative approach, [@Wang2005oct] and a path-integral approach. [@Lucignano2007jul] The transfer-matrix method, even though different group velocities of ring modes are taken into account, fails to guarantee charge conservation at the lead-ring junction.
In those studies, the relations between the lead and ring modes are determined solely by wave-function matching, which alone cannot enforce charge conservation. The perturbative calculation is valid only in the small-Zeeman-splitting limit. Finally, the path-integral approach, which may be conceptually useful for interpreting the result in terms of phases, is limited by its semiclassical treatment. Our goal in this work is to find a general scattering matrix formalism that guarantees charge conservation at the lead-ring junctions by construction in the presence of arbitrary spin interactions. We circumvent the mode-mixing problem by introducing artificial spin-independent *buffer* regions in the vicinity of every junction, as shown in [Fig. \[fig:1\]]{}. The mode-mixing effect is then taken care of at the interfaces between the buffers and the spinful regions in a standard way. Finally, the original system is recovered in the limit of vanishing buffers. Using our formalism, we first recover the known results in the case of orthogonal spin textures and interpret the role of the buffers added by hand. Second, we apply our formalism to an $n$-type semiconductor ring with both the Rashba SOC and the Zeeman splitting. We find that (1) our formalism truly guarantees charge conservation at every junction, giving rise to correct predictions, (2) spin-flip scattering can occur even if the junctions are nonmagnetic, and (3) the ring interferometer can act as a conditional (unconditional) spin switch in the weak-coupling (strong-coupling) regime if certain conditions are met. Our paper is structured as follows: Our formalism is introduced and derived in detail in [Sec. \[sec:formalism\]]{}. In [Sec. \[sec:orthogonal\]]{} the case of orthogonal spin textures is treated within our formalism.
[Section \[sec:nonorthogonal\]]{} is devoted to the study of a case of nonorthogonal spin states in which both the Rashba SOC and the Zeeman splitting are taken into account. Finally, we conclude and summarize our paper in [Sec. \[sec:discussion\]]{}. General Formalism to Build Current-Conserving Scattering Matrix\[sec:formalism\] ================================================================================ Buffered Structure ------------------ ![Schematic diagram of the buffered Aharonov-Bohm interferometer. The upper and lower arms of the ring are connected to the junctions through buffers whose angular size is given by $\phi_b$.[]{data-label="fig:1"}](Fig1){width="8cm"} Scattering in a mesoscopic system is frequently characterized in terms of the scattering matrix, which defines the relative amplitudes and phases of the scattered states with respect to the injected states. The scattering matrix depends on the details of the system but is constrained by conservation laws. The most important properties that the scattering matrix obeys are charge current conservation, if there is neither a source nor a sink in the scatterer, and, additionally, spin current conservation, if the scatterer is nonmagnetic. The conservation conditions, together with symmetry arguments, greatly simplify the form of the scattering matrix so that it can be described by a few parameters. For example, the most frequently used scattering matrix for the AB interferometer is controlled by a single parameter $\epsilon$ that varies between 0 (no tunneling) and 1/2 (perfect tunneling). Here the scattering to and from the upper and lower arms is assumed to be symmetric. This simple construction of the scattering matrix can be extended further to the magnetic case where spin-dependent interactions exist in the arms. In the presence of the Zeeman splitting, one can treat the scattering of spin up and down separately, each of which is described by the simple scattering matrix mentioned before.
For the case of a linear-in-momentum spin-orbit coupling such as the Rashba SOC, one can identify a common spin polarization axis at a junction so that a spin-separate treatment is still possible. No difficulty in defining a scattering matrix that satisfies the conservation laws arises as long as all the arms meeting at a junction share a single spin polarization axis. However, the latter condition is quite fragile and generally fails in the presence of general spin-dependent interactions in the arms. The simplest case in which it fails is the ring with both the Rashba SOC and the Zeeman splitting. As will be shown later, the ring eigenstates are then neither parallel nor orthogonal to each other in spin space, precluding a common spin polarization axis. The scattering into the ring then involves all the spin states, preventing a spin-separate treatment. The scattering matrix becomes complicated, although it should still fulfill the conservation laws. One can then raise several questions. Is there a general framework to build the spin-dependent scattering matrix which guarantees the conservation laws by its very construction? What is the smallest number of controlling parameters required to describe the scattering in the presence of arbitrary spin-dependent interactions? Can spin-flip scattering happen even if the scatterer itself is nonmagnetic? In order to answer these questions we propose a general formalism to build a consistent (spin-dependent) scattering matrix for arbitrary spin interactions. The key idea of our method is to insert artificial buffer regions between the scatterer and the arms, as depicted in [Fig. \[fig:1\]]{}. The buffer regions are assumed to be free of any spin-dependent interaction. Hence, the scattering between buffer regions can be described by the simple spin-separate scattering matrix. The complexity due to the spin-dependent interaction manifests itself at the interfaces between the buffers and the arms.
The wave functions at the interfaces are matched in a systematic way by using the continuity of the wave function and its current density. This wave matching, together with the scattering matrix between buffer states, allows one to find the scattering matrix between the states of the arms. The size of the artificial buffers is then shrunk to zero in order to recover the original configuration. The shrinking does not remove all the effects of the buffers because the effect of scattering at the buffer-arm interfaces still remains. In the end, the scattering matrix connecting the states of the arms is constructed. The advantages of our method are as follows: (1) The scattering matrix obtained is guaranteed to satisfy the conservation laws, because the scattering between buffer states and the scattering at the buffer-arm interfaces are both set up to conserve the charge and spin currents. (2) It systematically identifies the minimal set of controlling parameters that the scatterer can have. (3) It provides a natural explanation for the effect of spin interactions in the arms on the spin-dependent (possibly spin-flip) scattering even when the scatterer itself is nonmagnetic. In the following sections we build up our formalism, focusing on the AB interferometer shown in [Fig. \[fig:1\]]{}. The system consists of two leads and one ring. Each part is assumed to be narrow enough to be regarded as a one-dimensional conductor with a single transverse mode. The scattering matrix between lead and ring then becomes a $6\times6$ matrix. We will construct a general scattering matrix for a ring with arbitrary spin interaction. Our formalism, however, is quite general and can be applied to mesoscopic circuits of any geometry. Arms: Lead Part --------------- The leads are normal conductors directed along the $x$ direction.
They are free of any magnetic interaction, and their Hamiltonians read $$\begin{aligned} H_{\rm LEAD} = \frac{p_x^2}{2m_0^*} + U_0,\end{aligned}$$ where $m_0^*$ is the effective mass of electrons in the leads and $U_0$ is the minimum energy of the transverse mode. Thanks to the spin degeneracy, the spin polarization axis of the eigenstates can be chosen arbitrarily. Eigenstates of the leads with eigenenergy $E$ are then given by $$\begin{aligned} e^{\pm iqx} \chi_{\ell\mu}\end{aligned}$$ with the wave number $q = \sqrt{2m_0^*(E - U_0)}/\hbar$ and the spinors $$\begin{aligned} \chi_{\ell+} = \begin{bmatrix} e^{-i\varphi_\ell/2} \cos\vartheta_\ell \\ e^{+i\varphi_\ell/2} \sin\vartheta_\ell \end{bmatrix}, \quad \chi_{\ell-} = \begin{bmatrix} - e^{-i\varphi_\ell/2} \sin\vartheta_\ell \\ e^{+i\varphi_\ell/2} \cos\vartheta_\ell \end{bmatrix}.\end{aligned}$$ Here $\mu = \pm$ is the spin index, and $\ell = \rm L, R$ the lead index. The angles $(\vartheta_\ell,\varphi_\ell)$ define the spin polarization axis for the injection from the left lead ($\ell = \rm L$) and the spin detection axis in the right lead ($\ell = \rm R$), respectively.
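As a quick numerical sanity check, the spinors $\chi_{\ell+}$ and $\chi_{\ell-}$ defined above are orthonormal for any choice of the angles $(\vartheta_\ell,\varphi_\ell)$. A minimal sketch in Python (the angle values are arbitrary):

```python
import numpy as np

def lead_spinors(theta, phi):
    """Spinors chi_{l+}, chi_{l-} for polarization angles (theta, phi)."""
    chip = np.array([np.exp(-1j*phi/2)*np.cos(theta),
                     np.exp(+1j*phi/2)*np.sin(theta)])
    chim = np.array([-np.exp(-1j*phi/2)*np.sin(theta),
                     np.exp(+1j*phi/2)*np.cos(theta)])
    return chip, chim

chip, chim = lead_spinors(0.7, 1.9)   # arbitrary angles
assert np.isclose(np.vdot(chip, chip), 1)   # normalized
assert np.isclose(np.vdot(chim, chim), 1)
assert np.isclose(np.vdot(chip, chim), 0)   # mutually orthogonal
```

Orthonormality guarantees that the matrix ${{\mathcal{U}}}_\ell$ built from these spinors below is unitary.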
In terms of the coefficients of the injected ($s_\mu$), reflected ($r_\mu$), and transmitted ($t_\mu$) waves, the general wave functions in the leads are given by $$\begin{aligned} \nonumber \psi_{\rm L}(x) & = \sum_\mu [s_\mu e^{iqx} + r_\mu e^{-iqx}] \chi_{\rm L\mu} \\ & = {{\mathcal{U}}}_{\rm L} (e^{iqx} s + e^{-iqx} r) \\ \psi_{\rm R}(x) & = \sum_\mu t_\mu e^{iqx} \chi_{\rm R\mu} = {{\mathcal{U}}}_{\rm R} e^{iqx} t \end{aligned}$$ where $$\begin{aligned} s \equiv \begin{bmatrix} s_+ \\ s_- \end{bmatrix}, \quad r \equiv \begin{bmatrix} r_+ \\ r_- \end{bmatrix}, \quad t \equiv \begin{bmatrix} t_+ \\ t_- \end{bmatrix}\end{aligned}$$ and $$\begin{aligned} {{\mathcal{U}}}_\ell \equiv \begin{bmatrix} e^{-i\varphi_\ell/2} \cos\vartheta_\ell & - e^{-i\varphi_\ell/2} \sin\vartheta_\ell \\ e^{+i\varphi_\ell/2} \sin\vartheta_\ell & e^{+i\varphi_\ell/2} \cos\vartheta_\ell \end{bmatrix}.\end{aligned}$$ The group velocities of the eigenstates are $\pm v_0 \equiv \pm\hbar q/m_0^*$, and the charge current densities in the leads are $$\begin{aligned} J_{\rm L} = v_0 \sum_\mu (|s_\mu|^2 - |r_\mu|^2), \quad J_{\rm R} = v_0 \sum_\mu |t_\mu|^2.\end{aligned}$$ Arms: Ring Part --------------- The ring can be a normal conductor, an $n$-type semiconductor, or a $p$-type semiconductor. It is narrow enough that the radial coordinate is fixed at the radius $\rho_0$ and the degrees of freedom are described solely by the azimuthal angle $\phi$. We assume that an external magnetic field ${{\mathbf{B}}}$ is applied so that the ring encloses a magnetic flux $\Phi$, or the dimensionless flux $f = \Phi/\Phi_0$ with the flux quantum $\Phi_0 = hc/e$, and that a spin splitting arises due to the Zeeman term $$\begin{aligned} H_Z = \frac{g^*\mu_B}{2} {{\boldsymbol{\sigma}}}\cdot{{\mathbf{B}}},\end{aligned}$$ where $g^*$ is the Landé $g$-factor, $\mu_B$ the Bohr magneton, and ${{\boldsymbol{\sigma}}}$ the Pauli matrices. For semiconductor rings, an appropriate spin-orbit interaction $H_{\rm SO}$ is also operative.
The ring Hamiltonian is then given by $$\begin{aligned} \label{eq:Hring} H_{\rm RING} = E_0 (-i\partial_\phi - f)^2 + H_{\rm SO} + \frac{g^*\mu_B}{2} {{\boldsymbol{\sigma}}}\cdot{{\mathbf{B}}}\end{aligned}$$ with $$\begin{aligned} E_0 = \frac{\hbar^2}{2m^*\rho_0^2}\end{aligned}$$ where $m^*$ is the effective mass in the ring. In general the Hamiltonian has four eigenstates for each energy $E$, labeled by the spin index $\mu = \pm$ and the propagation direction $\varrho = +$ (counterclockwise) or $-$ (clockwise). Each eigenstate is endowed with a wave number $k_\mu^\varrho$, a solution of the dispersion relation. The wave number can be real (propagating wave) or complex (evanescent wave). The general form of the eigenstates is then written as $$\begin{aligned} \varphi_\mu^\varrho(\phi) = e^{i(k_\mu^\varrho + f)\phi} \begin{bmatrix} a_\mu^\varrho(\phi) \\ b_\mu^\varrho(\phi) \end{bmatrix}.\end{aligned}$$ In terms of the coefficients $u_\mu^\varrho$ for the upper arm (U) and $d_\mu^\varrho$ for the lower arm (D), the ring wave functions are given by \[eq:wftn:ring\] $$\begin{aligned} \psi_{\rm U}(\phi) & = \sum_{\mu\varrho} u_\mu^\varrho \varphi_\mu^\varrho(\phi) = \sum_\varrho {{\mathcal{U}}}^\varrho(\phi) {{\mathcal{K}}}^\varrho(\phi) u^\varrho \\ \psi_{\rm D}(\phi) & = \sum_{\mu\varrho} d_\mu^\varrho \varphi_\mu^\varrho(\phi) = \sum_\varrho {{\mathcal{U}}}^\varrho(\phi) {{\mathcal{K}}}^\varrho(\phi) d^\varrho, \end{aligned}$$ where $$\begin{aligned} u^\varrho \equiv \begin{bmatrix} u_+^\varrho \\ u_-^\varrho \end{bmatrix}, \quad d^\varrho \equiv \begin{bmatrix} d_+^\varrho \\ d_-^\varrho \end{bmatrix}\end{aligned}$$ and $$\begin{aligned} {{\mathcal{U}}}^\varrho(\phi) \equiv \begin{bmatrix} a_+^\varrho(\phi) & a_-^\varrho(\phi) \\ b_+^\varrho(\phi) & b_-^\varrho(\phi) \end{bmatrix}, \quad {{\mathcal{K}}}^\varrho(\phi) \equiv e^{if\phi} \begin{bmatrix} e^{ik_+^\varrho\phi} & 0 \\ 0 & e^{ik_-^\varrho\phi} \end{bmatrix}.\end{aligned}$$ The group velocity of each
eigenstate, $\varrho v_\mu^\varrho$, is given by the expectation value ${\mathinner{\langle{\textstyle\varphi_\mu^\varrho|v_\phi|\varphi_\mu^\varrho}\rangle}}$ of the velocity operator $v_\phi$. Note that the spin-orbit interaction affects the velocity operator, and in general the energy eigenstate is not an eigenstate of the velocity operator. If the time reversal symmetry is not broken, the relations $v_+^+ = v_-^-$ and $v_+^- = v_-^+$ hold generally, no matter what the spin-orbit interaction is. In terms of the group velocities, the charge current densities in the ring are expressed as $$\begin{aligned} J_{\rm U} & = \sum_\mu (v_\mu^+ |u_\mu^+|^2 - v_\mu^- |u_\mu^-|^2) \\ J_{\rm D} & = \sum_\mu (v_\mu^+ |d_\mu^+|^2 - v_\mu^- |d_\mu^-|^2) \end{aligned}$$ for the upper and lower arms, respectively. For later use, we define the wave functions acted on by the velocity operator \[eq:vwftn:ring\] $$\begin{aligned} v_\phi\psi_{\rm U}(\phi) & = \sum_\varrho \varrho {{\mathcal{V}}}^\varrho(\phi) {{\mathcal{K}}}^\varrho(\phi) u^\varrho \\ v_\phi\psi_{\rm D}(\phi) & = \sum_\varrho \varrho {{\mathcal{V}}}^\varrho(\phi) {{\mathcal{K}}}^\varrho(\phi) d^\varrho, \end{aligned}$$ where the $2\times2$ matrix ${{\mathcal{V}}}^\varrho(\phi)$ depends on the details of the system. Buffers ------- In our formalism, no buffer region is inserted between the leads and the junctions. This is because the leads, like the buffers, are free of spin-dependent interactions, so the scattering at the interface between a lead and a buffer becomes trivial. On the other hand, as shown in [Fig. \[fig:1\]]{}, the buffer regions are inserted between the junctions and the ring arms. In the AB interferometer, therefore, four buffer regions with the same angular size $\phi_b$ are defined on the left/right sides of the upper/lower arms.
Having the junctions at the angles $\phi_{\rm L}$ and $\phi_{\rm R} = 0$, the interfaces between the buffers and the arms are located at $\phi_{\rm UR} = \phi_b$, $\phi_{\rm UL} = \phi_{\rm L} - \phi_b$, $\phi_{\rm DL} = \phi_{\rm L} + \phi_b$, and $\phi_{\rm DR} = 2\pi - \phi_b$. Since the buffers, like the leads, are free of any spin-dependent interaction, the Hamiltonian in the buffers reads $$\begin{aligned} H_{\rm BUFFER} = E_0 (-i\partial_\phi - f)^2 + U_b,\end{aligned}$$ where $U_b$ is the offset of the band bottom with respect to the ring part. With no spin interaction, the spin polarization axes in the buffers can be chosen freely, and the axis of each buffer is set to that of the nearest-neighboring lead. Eigenstates of the buffers with energy $E$, labeled by $\mu$ and $\varrho$, are then given by $$\begin{aligned} e^{i(\varrho\kappa + f)\phi} \chi_{\ell\mu}\end{aligned}$$ with the wave number $\kappa = \sqrt{(E - U_b)/E_0}$ and the nearby-lead index $\ell$. Defining the coefficients $u_{\ell\mu}^\varrho$ for the upper buffers close to the side $\ell$ and $d_{\ell\mu}^\varrho$ for the lower buffers close to the side $\ell$, the buffer wave functions are written as \[eq:wftn:buffer\] $$\begin{aligned} \psi_{\rm U\ell}(\phi) & = \sum_{\mu\varrho} u_{\ell\mu}^\varrho e^{i(\varrho\kappa+f)(\phi-\phi_\ell)} \chi_{\ell\mu} = \sum_\varrho {{\mathcal{U}}}_\ell {{\mathcal{K}}}_b^\varrho(\phi) u_\ell^\varrho \\ \psi_{\rm D\ell}(\phi) & = \sum_{\mu\varrho} d_{\ell\mu}^\varrho e^{i(\varrho\kappa+f)(\phi-\phi_\ell)} \chi_{\ell\mu} = \sum_\varrho {{\mathcal{U}}}_\ell {{\mathcal{K}}}_b^\varrho(\phi) d_\ell^\varrho \end{aligned}$$ where $$\begin{aligned} u_\ell^\varrho \equiv \begin{bmatrix} u_{\ell+}^\varrho \\ u_{\ell-}^\varrho \end{bmatrix}, \qquad d_\ell^\varrho \equiv \begin{bmatrix} d_{\ell+}^\varrho \\ d_{\ell-}^\varrho \end{bmatrix}\end{aligned}$$ and $$\begin{aligned} {{\mathcal{K}}}_b^\varrho \equiv e^{i(\varrho\kappa + f)(\phi-\phi_\ell)} \begin{bmatrix} 1 & 0 \\ 0 &
1 \end{bmatrix}.\end{aligned}$$ The group velocities of the eigenstates are simply given by $\pm v_b \equiv \pm\hbar \kappa/m^*\rho_0$, which is the eigenvalue of the velocity operator $v_\phi = (\hbar/m^*\rho_0)(-i\partial_\phi - f)$, and the charge current densities in the buffers are $$\begin{aligned} J_{\rm U\ell} & = v_b \sum_\mu (|u_{\ell\mu}^+|^2 - |u_{\ell\mu}^-|^2) \\ J_{\rm D\ell} & = v_b \sum_\mu (|d_{\ell\mu}^+|^2 - |d_{\ell\mu}^-|^2) \end{aligned}$$ for the left/right and upper/lower buffers, respectively. For later use, we define the wave functions acted on by the velocity operator \[eq:vwftn:buffer\] $$\begin{aligned} v_\phi\psi_{\rm U\ell}(\phi) & = v_b {{\mathcal{U}}}_\ell ({{\mathcal{K}}}_b^+ u_\ell^+ - {{\mathcal{K}}}_b^- u_\ell^-) \\ v_\phi\psi_{\rm D\ell}(\phi) & = v_b {{\mathcal{U}}}_\ell ({{\mathcal{K}}}_b^+ d_\ell^+ - {{\mathcal{K}}}_b^- d_\ell^-). \end{aligned}$$ Lead-Buffer Scattering Matrices ------------------------------- With the buffered structure, the scattering at the junctions connects the states in the leads and the buffers. Since both the leads and the buffers have no magnetic interaction, the conventional scattering matrix can be defined to describe the scattering at the junctions. The reasonable conditions for the scattering matrix are that (1) no spin flip takes place, (2) the scatterings from and to the upper and lower arms are the same, (3) no phase shift is acquired, and (4) the charge current is conserved. The first condition makes the scattering matrix diagonal in spin space, and due to the second condition the scattering matrix, for a given normalized flux, is symmetric under the exchange of the upper and lower arms.
The most general lead-buffer scattering matrix satisfying the above conditions is then $$\begin{aligned} {{\mathcal{S}}}= \begin{bmatrix} {{\mathcal{S}}}_{11} & {{\mathcal{S}}}_{12} \\ {{\mathcal{S}}}_{21} & {{\mathcal{S}}}_{22} \end{bmatrix} = \begin{bmatrix} {{\mathcal{S}}}_{0,11} \otimes \sigma_0 & {{\mathcal{S}}}_{0,12} \otimes \sigma_0 \\ {{\mathcal{S}}}_{0,21} \otimes \sigma_0 & {{\mathcal{S}}}_{0,22} \otimes \sigma_0 \end{bmatrix}\end{aligned}$$ with \[eq:S0\] $$\begin{aligned} {{\mathcal{S}}}_{0,11} & = -\zeta \sqrt{1 - 2\epsilon} \\ {{\mathcal{S}}}_{0,12} & = \sqrt{\frac{v_b}{v_0}\epsilon} \begin{bmatrix} 1 & 1 \end{bmatrix} \\ {{\mathcal{S}}}_{0,21} & = \sqrt{\frac{v_0}{v_b}\epsilon} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\ {{\mathcal{S}}}_{0,22} & = \begin{bmatrix} \frac{\zeta}{2} (\sqrt{1 - 2\epsilon} - 1) & \frac{\zeta}{2} (1 + \sqrt{1 - 2\epsilon}) \\ \frac{\zeta}{2} (1 + \sqrt{1 - 2\epsilon}) & \frac{\zeta}{2} (\sqrt{1 - 2\epsilon} - 1) \end{bmatrix} \end{aligned}$$ with $\zeta = \pm$. Here the controlling parameter $\epsilon$ varies from 0 (complete decoupling) to 1/2 (perfect tunneling), and $\sigma_0$ is the $2\times2$ identity matrix, indicating the absence of spin-flip scattering. Throughout this paper, we set $\zeta = +1$, considering the case of phase-conserving scattering between the upper and lower arms.
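For amplitudes normalized as above, with current densities weighted by the group velocities, current conservation at the junction is equivalent to the weighted unitarity condition ${{\mathcal{S}}}_0^\dagger\,{\rm diag}(v_0,v_b,v_b)\,{{\mathcal{S}}}_0 = {\rm diag}(v_0,v_b,v_b)$ for each spin block. A minimal numerical sketch of one spin block of [Eq. (\[eq:S0\])]{}; the velocity values are assumptions chosen purely for illustration:

```python
import numpy as np

def lead_buffer_S0(eps, v0, vb, zeta=+1):
    """One spin block of the lead-buffer junction matrix, Eq. (eq:S0).

    Channel order: (lead, upper buffer, lower buffer)."""
    a = np.sqrt(1 - 2*eps)
    b = np.sqrt(eps*vb/v0)   # lead <- buffer element of S_{0,12}
    c = np.sqrt(eps*v0/vb)   # buffer <- lead element of S_{0,21}
    return np.array([[-zeta*a,           b,               b],
                     [ c,      zeta*(a - 1)/2, zeta*(1 + a)/2],
                     [ c,      zeta*(1 + a)/2, zeta*(a - 1)/2]])

v0, vb = 1.0, 1.7                     # assumed lead and buffer velocities
S = lead_buffer_S0(eps=0.3, v0=v0, vb=vb)
D = np.diag([v0, vb, vb])             # velocity weights of the three channels
# weighted unitarity = charge current conservation at the junction
assert np.allclose(S.conj().T @ D @ S, D)
```

The same identity holds for either sign of $\zeta$ and for any $0 \le \epsilon \le 1/2$, which is what fixes the functional form of [Eq. (\[eq:S0\])]{}.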
Assuming that both the junctions have the same scattering matrix, one can set up linear equations for the coefficients of the lead and buffer states: at the left junction \[eq:sm:buffer:left\] $$\begin{aligned} r & = {{\mathcal{S}}}_{11} s + {{\mathcal{S}}}_{12} c_{\rm L}^\lms \\ c_{\rm L}^\rms & = {{\mathcal{S}}}_{21} s + {{\mathcal{S}}}_{22} c_{\rm L}^\lms \end{aligned}$$ and at the right junction \[eq:sm:buffer:right\] $$\begin{aligned} t & = {{\mathcal{S}}}_{12} c_{\rm R}^\rms \\ c_{\rm R}^\lms & = {{\mathcal{S}}}_{22} c_{\rm R}^\rms, \end{aligned}$$ where the left- and right-moving buffer states are defined as $$\begin{aligned} c_\ell^\lms \equiv \begin{bmatrix} u_\ell^+ \\ d_\ell^- \end{bmatrix} = \begin{bmatrix} u_{\ell+}^+ \\ u_{\ell-}^+ \\ d_{\ell+}^- \\ d_{\ell-}^- \end{bmatrix}, \quad c_\ell^\rms \equiv \begin{bmatrix} u_\ell^- \\ d_\ell^+ \end{bmatrix} = \begin{bmatrix} u_{\ell+}^- \\ u_{\ell-}^- \\ d_{\ell+}^+ \\ d_{\ell-}^+ \end{bmatrix},\end{aligned}$$ respectively. Note that the form of the scattering matrix guarantees the charge and spin current conservation by construction. Lead-Arm Scattering Matrices ---------------------------- Now we derive the scattering matrix connecting the lead states and the ring states. To do that, we need to find the linear relations between the buffer states and the ring states. The relations are determined from the boundary conditions at the interfaces by using the continuity of the wave function $\psi(\phi)$ and current conservation. The latter condition can be reformulated in terms of the continuity of $H(\phi) \psi(\phi)$, where $H(\phi)$ is the Hamiltonian defined simultaneously in the buffer and ring regions. As long as the spin-orbit interaction is at most quadratic in the momentum operator, the continuity of $H(\phi) \psi(\phi)$ leads to the continuity of $v_\phi \psi(\phi)$. Now we apply the boundary conditions at the four interfaces. By using [Eqs.
(\[eq:wftn:ring\])]{} and (\[eq:wftn:buffer\]), the continuity of the wave function, $\psi_{\rm U}(\phi_{\rm U\ell}) = \psi_{\rm U\ell}(\phi_{\rm U\ell})$ and $\psi_{\rm D}(\phi_{\rm D\ell}) = \psi_{\rm D\ell}(\phi_{\rm D\ell})$ at the interfaces gives rise to $$\begin{aligned} \sum_\varrho {{\mathcal{U}}}^\varrho(\phi_{\rm U\ell}) {{\mathcal{K}}}^\varrho(\phi_{\rm U\ell}) u^\varrho & = {{\mathcal{U}}}_\ell \sum_\varrho {{\mathcal{K}}}_b^\varrho(\phi_{\rm U\ell}) u_\ell^\varrho \\ \sum_\varrho {{\mathcal{U}}}^\varrho(\phi_{\rm D\ell}) {{\mathcal{K}}}^\varrho(\phi_{\rm D\ell}) d^\varrho & = {{\mathcal{U}}}_\ell \sum_\varrho {{\mathcal{K}}}_b^\varrho(\phi_{\rm D\ell}) d_\ell^\varrho. \end{aligned}$$ The second continuity conditions, $v_\phi \psi_{\rm U}(\phi_{\rm U\ell}) = v_\phi \psi_{\rm U\ell}(\phi_{\rm U\ell})$ and $v_\phi \psi_{\rm D}(\phi_{\rm D\ell}) = v_\phi \psi_{\rm D\ell}(\phi_{\rm D\ell})$, together with [Eqs. (\[eq:vwftn:ring\])]{} and (\[eq:vwftn:buffer\]), lead to $$\begin{aligned} \sum_\varrho \varrho {{\mathcal{V}}}^\varrho(\phi_{\rm U\ell}) {{\mathcal{K}}}^\varrho(\phi_{\rm U\ell}) u^\varrho & = v_b {{\mathcal{U}}}_\ell \sum_\varrho \varrho {{\mathcal{K}}}_b^\varrho(\phi_{\rm U\ell}) u_\ell^\varrho \\ \sum_\varrho \varrho {{\mathcal{V}}}^\varrho(\phi_{\rm D\ell}) {{\mathcal{K}}}^\varrho(\phi_{\rm D\ell}) d^\varrho & = v_b {{\mathcal{U}}}_\ell \sum_\varrho \varrho {{\mathcal{K}}}_b^\varrho(\phi_{\rm D\ell}) d_\ell^\varrho. 
\end{aligned}$$ It is straightforward to solve the equations for the coefficients of the buffer states: \[eq:bufferringrelation\] $$\begin{aligned} u_\ell^\varrho & = [{{\mathcal{K}}}_b^\varrho(\phi_{\rm U\ell})]^{-1} \sum_{\varrho'} {{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi_{\rm U\ell}) {{\mathcal{K}}}^{\varrho'}(\phi_{\rm U\ell}) u^{\varrho'} \\ d_\ell^\varrho & = [{{\mathcal{K}}}_b^\varrho(\phi_{\rm D\ell})]^{-1} \sum_{\varrho'} {{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi_{\rm D\ell}) {{\mathcal{K}}}^{\varrho'}(\phi_{\rm D\ell}) d^{\varrho'} \end{aligned}$$ with $$\begin{aligned} \label{eq:Z} {{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi) \equiv {{\mathcal{U}}}_\ell^{-1} \frac{{{\mathcal{U}}}^{\varrho'}(\phi) + \varrho {{\mathcal{V}}}^{\varrho'}(\phi)/v_b}{2}.\end{aligned}$$ Once the relations between the coefficients of the buffer and ring states are set up, it is time to shrink the buffers by taking the limit $\phi_b\to0$. The buffer propagation matrices ${{\mathcal{K}}}_b^\varrho$ become the identity matrix simply because the propagation distance vanishes. Here some caution is needed regarding the limit values of the interface points. The left interfaces merge into a single point, $\phi_{\rm UL}, \phi_{\rm DL} \to \phi_{\rm L} \equiv \phi_{\rm L}^\pm$, while the limits of the right interfaces differ, $\phi_{\rm UR} \to 0 \equiv \phi_{\rm R}^+$ and $\phi_{\rm DR} \to 2\pi \equiv \phi_{\rm R}^-$. Combining [Eqs. (\[eq:sm:buffer:left\])]{}, (\[eq:sm:buffer:right\]), and (\[eq:bufferringrelation\]), one can build linear equations for the coefficients of the lead and ring states, which are similar to [Eqs.
(\[eq:sm:buffer:left\])]{} and (\[eq:sm:buffer:right\]): at the left junction \[eq:sm:ring:left\] $$\begin{aligned} r & = {{\mathcal{S}}}_{\rm L,11} s + {{\mathcal{S}}}_{\rm L,12} {{\mathcal{K}}}_{\rm L}^\lms c^\lms \\ {{\mathcal{K}}}_{\rm L}^\rms c^\rms & = {{\mathcal{S}}}_{\rm L,21} s + {{\mathcal{S}}}_{\rm L,22} {{\mathcal{K}}}_{\rm L}^\lms c^\lms \end{aligned}$$ and at the right junction \[eq:sm:ring:right\] $$\begin{aligned} t & = {{\mathcal{S}}}_{\rm R,12} {{\mathcal{K}}}_{\rm R}^\rms c^\rms \\ {{\mathcal{K}}}_{\rm R}^\lms c^\lms & = {{\mathcal{S}}}_{\rm R,22} {{\mathcal{K}}}_{\rm R}^\rms c^\rms, \end{aligned}$$ where the left- and right-moving ring states and the propagating matrices are defined as $$\begin{aligned} c^\lms \equiv \begin{bmatrix} u^+ \\ d^- \end{bmatrix} = \begin{bmatrix} u_+^+ \\ u_-^+ \\ d_+^- \\ d_-^- \end{bmatrix}, \quad c^\rms \equiv \begin{bmatrix} u^- \\ d^+ \end{bmatrix} = \begin{bmatrix} u_+^- \\ u_-^- \\ d_+^+ \\ d_-^+ \end{bmatrix},\end{aligned}$$ and $$\begin{aligned} {{\mathcal{K}}}_\ell^\lms \equiv \begin{bmatrix} {{\mathcal{K}}}^+(\phi_\ell^+) & \\ & {{\mathcal{K}}}^-(\phi_\ell^-) \end{bmatrix}, \quad {{\mathcal{K}}}_\ell^\rms \equiv \begin{bmatrix} {{\mathcal{K}}}^-(\phi_\ell^+) & \\ & {{\mathcal{K}}}^+(\phi_\ell^-) \end{bmatrix},\end{aligned}$$ respectively.
The lead-ring scattering matrices $$\begin{aligned} {{\mathcal{S}}}_\ell = \begin{bmatrix} {{\mathcal{S}}}_{\ell,11} & {{\mathcal{S}}}_{\ell,12} \\ {{\mathcal{S}}}_{\ell,21} & {{\mathcal{S}}}_{\ell,22} \end{bmatrix}\end{aligned}$$ are then given by \[eq:S\] $$\begin{aligned} {{\mathcal{S}}}_{\ell,11} & = {{\mathcal{S}}}_{11} + {{\mathcal{S}}}_{12} {{\mathcal{Q}}}_\ell^- ({{\mathcal{Q}}}_\ell^+ - {{\mathcal{S}}}_{22} {{\mathcal{Q}}}_\ell^-)^{-1} {{\mathcal{S}}}_{21} \\ {{\mathcal{S}}}_{\ell,12} & = {{\mathcal{S}}}_{12} \left[ {{\mathcal{P}}}_\ell^+ + {{\mathcal{Q}}}_\ell^- ({{\mathcal{Q}}}_\ell^+ - {{\mathcal{S}}}_{22} {{\mathcal{Q}}}_\ell^-)^{-1} ({{\mathcal{S}}}_{22} {{\mathcal{P}}}_\ell^+ - {{\mathcal{P}}}_\ell^-) \right] \\ {{\mathcal{S}}}_{\ell,21} & = ({{\mathcal{Q}}}_\ell^+ - {{\mathcal{S}}}_{22} {{\mathcal{Q}}}_\ell^-)^{-1} {{\mathcal{S}}}_{21} \\ {{\mathcal{S}}}_{\ell,22} & = ({{\mathcal{Q}}}_\ell^+ - {{\mathcal{S}}}_{22} {{\mathcal{Q}}}_\ell^-)^{-1} ({{\mathcal{S}}}_{22} {{\mathcal{P}}}_\ell^+ - {{\mathcal{P}}}_\ell^-) \end{aligned}$$ with \[eq:PQ\] $$\begin{aligned} {{\mathcal{P}}}_{\rm L}^\varrho & \equiv \begin{bmatrix} {{\mathcal{Z}}}_{\rm L}^{\varrho+}(\phi_{\rm L}) & \\ & {{\mathcal{Z}}}_{\rm L}^{\varrho-}(\phi_{\rm L}) \end{bmatrix}, & {{\mathcal{Q}}}_{\rm L}^\varrho & \equiv \begin{bmatrix} {{\mathcal{Z}}}_{\rm L}^{\varrho-}(\phi_{\rm L}) & \\ & {{\mathcal{Z}}}_{\rm L}^{\varrho+}(\phi_{\rm L}) \end{bmatrix} \\ {{\mathcal{P}}}_{\rm R}^\varrho & \equiv \begin{bmatrix} {{\mathcal{Z}}}_{\rm R}^{\varrho-}(\phi_{\rm R}^+) & \\ & {{\mathcal{Z}}}_{\rm R}^{\varrho+}(\phi_{\rm R}^-) \end{bmatrix}, & {{\mathcal{Q}}}_{\rm R}^\varrho & \equiv \begin{bmatrix} {{\mathcal{Z}}}_{\rm R}^{\varrho+}(\phi_{\rm R}^+) & \\ & {{\mathcal{Z}}}_{\rm R}^{\varrho-}(\phi_{\rm R}^-) \end{bmatrix}. \end{aligned}$$ From [Eqs. 
(\[eq:S\])]{} and (\[eq:PQ\]), a few immediate general features of the scattering matrix can be discussed: (1) In general, the matrices ${{\mathcal{Z}}}_\ell^{\varrho\varrho'}$ are not spin diagonal. This means that the lead-ring scattering matrices are not diagonal in the spin basis even though we started with the assumption that the junction itself does not invoke spin-flip scattering. For example, a spin up injected from the lead can be reflected as a spin down for any spin injection axis. This is not because the junction is a magnetic scatterer but because of the spin-dependent interaction in the ring. The magnetic properties of the arms of the ring can invoke spin-dependent scattering at the junctions. (2) The buffer effect remains. The lead-ring scattering matrix has two controlling parameters: $\epsilon$ and $U_b$. The latter parameter enters the scattering matrix through the buffer group velocity $v_b$. The velocity $v_b$ appears in the scattering matrix in two ways: in the overall factors $\sqrt{v_b/v_0}$ of ${{\mathcal{S}}}_{12}$ and $\sqrt{v_0/v_b}$ of ${{\mathcal{S}}}_{21}$ \[see [Eq. (\[eq:S0\])]{}\] and in the matrices ${{\mathcal{Z}}}_\ell^{\varrho\varrho'}$ \[see [Eq. (\[eq:Z\])]{}\]. The overall factors appear in ${{\mathcal{S}}}_{\ell,ij}$ in the same way as in ${{\mathcal{S}}}_{ij}$ and do not affect the spin-dependent scattering discussed above. On the other hand, $v_b$ in the matrices ${{\mathcal{Z}}}_\ell^{\varrho\varrho'}$ can tune the magnitudes of their off-diagonal components. Therefore, we can draw the conclusion that at least two parameters for the junctions, here $\epsilon$ and $U_b$, are necessary to specify and control the spin-dependent scattering due to an arbitrary spin-dependent interaction in the arms. We would like to emphasize that the scattering matrix, [Eq. (\[eq:S\])]{}, is the only solution that guarantees the conservation of the charge and spin currents at the junctions under our symmetry assumptions.
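Point (1) can be illustrated with a toy example. Take a hypothetical ring spin texture tilted by an angle $\theta$ with respect to the lead axis, with unequal (assumed) mode velocities: the resulting ${{\mathcal{Z}}}$ matrix of [Eq. (\[eq:Z\])]{} acquires finite off-diagonal (spin-flip) elements, which vanish when the junction frame is aligned with the texture. A minimal sketch:

```python
import numpy as np

def Z_pp(theta, vpm=(0.9, 1.3), vb=1.0, aligned=False):
    """Z^{++} of Eq. (eq:Z) for a hypothetical ring spin texture tilted by theta.

    vpm are assumed mode velocities v_+, v_-; vb is the buffer velocity."""
    U_ring = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])   # assumed spinor matrix
    V_ring = U_ring @ np.diag(vpm)                         # assumed velocity matrix
    U_jct = U_ring if aligned else np.eye(2)               # junction (lead) spin frame
    return np.linalg.inv(U_jct) @ (U_ring + V_ring/vb)/2

Z = Z_pp(0.6)
assert abs(Z[0, 1]) > 0.1              # misaligned texture: finite spin-flip amplitude
Za = Z_pp(0.6, aligned=True)
assert np.allclose(Za, np.diag(np.diag(Za)))   # aligned frame: spin-diagonal
```

The off-diagonal elements originate entirely from the ring interactions encoded in ${{\mathcal{U}}}^\varrho$ and ${{\mathcal{V}}}^\varrho$, not from any magnetism of the junction itself.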
Since we have used the simplest buffer structure, which introduces only one additional parameter, more complexity, if necessary, can be introduced into the scattering matrix by allowing additional interactions in the buffers. Here we have introduced the minimal scattering matrix that works properly in the presence of a general spin-orbit interaction. Reflection and Transmission Coefficients ---------------------------------------- It is now quite straightforward to solve [Eqs. (\[eq:sm:ring:left\])]{} and (\[eq:sm:ring:right\]) to obtain the spin-resolved reflection and transmission coefficients in terms of the lead-ring scattering matrix ${{\mathcal{S}}}_{\ell,ij}$: $$\begin{aligned} t & = {{\mathcal{S}}}_{\rm R,12} \left( {{\mathcal{K}}}^\rms {{\mathcal{F}}}- {{\mathcal{S}}}_{\rm L,22} {{\mathcal{K}}}^\lms {{\mathcal{F}}}{{\mathcal{S}}}_{\rm R,22} \right)^{-1} {{\mathcal{S}}}_{\rm L,21} s \\ \nonumber r & = \left[ {{\mathcal{S}}}_{\rm L,11} + {{\mathcal{S}}}_{\rm L,12} {{\mathcal{K}}}^\lms {{\mathcal{F}}}{{\mathcal{S}}}_{\rm R,22} \right. \\ & \qquad\quad\left.\mbox{} \times \left( {{\mathcal{K}}}^\rms {{\mathcal{F}}}- {{\mathcal{S}}}_{\rm L,22} {{\mathcal{K}}}^\lms {{\mathcal{F}}}{{\mathcal{S}}}_{\rm R,22} \right)^{-1} {{\mathcal{S}}}_{\rm L,21} \right] s \end{aligned}$$ with $$\begin{aligned} {{\mathcal{K}}}^\lms & \equiv {\rm diag} \left( e^{ik_+^+\phi_{\rm L}}, e^{ik_-^+\phi_{\rm L}}, e^{-ik_+^-(2\pi-\phi_{\rm L})}, e^{-ik_-^-(2\pi-\phi_{\rm L})} \right) \\ {{\mathcal{K}}}^\rms & \equiv {\rm diag} \left( e^{ik_+^-\phi_{\rm L}}, e^{ik_-^-\phi_{\rm L}}, e^{-ik_+^+(2\pi-\phi_{\rm L})}, e^{-ik_-^+(2\pi-\phi_{\rm L})} \right) \\ {{\mathcal{F}}}& \equiv {\rm diag} \left( e^{if\phi_{\rm L}}, e^{if\phi_{\rm L}}, e^{-if(2\pi-\phi_{\rm L})}, e^{-if(2\pi-\phi_{\rm L})} \right). \end{aligned}$$ Note that the overall velocity factors $\sqrt{v_b/v_0}$ in ${{\mathcal{S}}}_{\ell,12}$ and $\sqrt{v_0/v_b}$ in ${{\mathcal{S}}}_{\ell,21}$ cancel out in the reflection and transmission coefficients.
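As a sanity check of these expressions, consider a spinless normal-conductor ring with matched velocities ($v = v_b = v_0$), for which the ${{\mathcal{Z}}}$ matrices reduce to $z^+ = 1$, $z^- = 0$ and the lead-ring matrices collapse to the bare junction matrices. Solving the junction relations numerically (a sketch with arbitrary parameter values; $k$ here is the dimensionless ring wave number), charge conservation $|r|^2 + |t|^2 = 1$ and AB periodicity in $f$ with period one are reproduced:

```python
import numpy as np

def junction(eps):
    """Spinless junction matrix, Eq. (eq:S0) with v0 = vb and zeta = +1."""
    a, e = np.sqrt(1 - 2*eps), np.sqrt(eps)
    return np.array([[-a, e, e],
                     [ e, (a - 1)/2, (a + 1)/2],
                     [ e, (a + 1)/2, (a - 1)/2]])

def ring_rt(k, f, eps, phi_L):
    """r and t for a spinless normal ring (matched velocities, unit injection)."""
    S = junction(eps)
    S11, S12 = S[0, 0], S[0:1, 1:3]
    S21, S22 = S[1:3, 0], S[1:3, 1:3]
    Kp = lambda phi: np.exp(1j*(k + f)*phi)    # K^+(phi), counterclockwise
    Km = lambda phi: np.exp(1j*(-k + f)*phi)   # K^-(phi), clockwise
    KLl = np.diag([Kp(phi_L), Km(phi_L)])      # left-movers at the left junction
    KLr = np.diag([Km(phi_L), Kp(phi_L)])      # right-movers at the left junction
    KRl = np.diag([Kp(0.0), Km(2*np.pi)])      # left-movers at the right junction
    KRr = np.diag([Km(0.0), Kp(2*np.pi)])      # right-movers at the right junction
    # eliminate c^lms with the right-junction relation, then solve for c^rms
    A = KLr - S22 @ KLl @ np.linalg.inv(KRl) @ S22 @ KRr
    crms = np.linalg.solve(A, S21)
    clms = np.linalg.inv(KRl) @ S22 @ KRr @ crms
    t = (S12 @ KRr @ crms)[0]
    r = S11 + (S12 @ KLl @ clms)[0]
    return r, t

r, t = ring_rt(k=7.3, f=0.2, eps=0.4, phi_L=np.pi)
assert np.isclose(abs(r)**2 + abs(t)**2, 1.0)     # charge conservation
r2, t2 = ring_rt(k=7.3, f=1.2, eps=0.4, phi_L=np.pi)
assert np.isclose(abs(t)**2, abs(t2)**2)          # AB period of one flux quantum
```

Scanning $f$ at fixed $k$ traces out the AB oscillation of $|t|^2$; only the ring wave number, the flux, and $\epsilon$ enter, since the velocity prefactors cancel as noted above.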
Therefore, the velocity in the leads does not affect the coefficients at all. Below we calculate the transmission probabilities $T_{\mu\mu'} = |t_{\mu\mu'}|^2$, from which the charge conductance $$\begin{aligned} G = \frac{e^2}{h} \sum_{\mu\mu'} T_{\mu\mu'}\end{aligned}$$ and the current polarization $$\begin{aligned} P = \frac12 \sum_{\mu\mu'} \mu T_{\mu\mu'}\end{aligned}$$ with respect to an unpolarized input current are obtained. Orthogonal Spin States\[sec:orthogonal\] ======================================== Before proceeding to the case in which our formalism is indispensable, we apply it to the simple cases where a spin-separate treatment is possible. As mentioned in the previous section, the spin-separate treatment can be used when the ring is a normal conductor, has a linear-in-momentum spin-orbit coupling such as the Rashba SOC, or has the Zeeman splitting only. Common to all these cases is that the group velocity and the spin matrix are direction independent, $v_\mu^\varrho = v_\mu$ and ${{\mathcal{U}}}^\varrho(\phi) = {{\mathcal{U}}}(\phi)$, and that the energy eigenstates are also eigenstates of the corresponding velocity operator, $$\begin{aligned} v_\phi \varphi_\mu^\varrho(\phi) = \varrho v_\mu \varphi_\mu^\varrho(\phi)\end{aligned}$$ ($v_+ \ne v_-$ only when the Zeeman splitting exists). Then the matrix ${{\mathcal{V}}}^\varrho(\phi)$ in [Eq.
(\[eq:vwftn:ring\])]{} is simply given by $$\begin{aligned} {{\mathcal{V}}}^\rho(\phi) = {{\mathcal{U}}}(\phi) \begin{bmatrix} v_+ & 0 \\ 0 & v_- \end{bmatrix}.\end{aligned}$$ Accordingly, the matrices ${{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi)$ are simplified to $$\begin{aligned} \label{eq:Z:oss} {{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi) = [{{\mathcal{U}}}_\ell^{-1} {{\mathcal{U}}}(\phi)] \begin{bmatrix} z_+^\varrho & 0 \\ 0 & z_-^\varrho \end{bmatrix}\end{aligned}$$ with $$\begin{aligned} z_\mu^\varrho \equiv \frac{1 + \varrho v_\mu/v_b}{2}.\end{aligned}$$ By setting ${{\mathcal{U}}}_\ell = {{\mathcal{U}}}(\phi_\ell)$ (note that ${{\mathcal{U}}}(\phi_{\rm R}^+)$ and ${{\mathcal{U}}}(\phi_{\rm R}^-)$ usually differ only up to the overall phase factor), the matrices ${{\mathcal{Z}}}_\ell^{\varrho\varrho'}(\phi_\ell)$ become spin diagonal, and consequently we recover the spin-separate lead-ring scattering matrix. For each spin component, the lead-ring scattering matrix for spin $\mu$ can be expressed as $$\begin{aligned} {{\mathcal{S}}}_{\ell\mu,11} & = {{\mathcal{S}}}_{\mu,11} + {{\mathcal{S}}}_{\mu,12} z_\mu^- (z_\mu^+ - z_\mu^- {{\mathcal{S}}}_{\mu,22})^{-1} {{\mathcal{S}}}_{\mu,21} \\ {{\mathcal{S}}}_{\ell\mu,12} & = {{\mathcal{S}}}_{\mu,12} \left[ z_\mu^+ + z_\mu^- (z_\mu^+ - z_\mu^- {{\mathcal{S}}}_{\mu,22})^{-1} (z_\mu^+ {{\mathcal{S}}}_{\mu,22} - z_\mu^-) \right] \\ {{\mathcal{S}}}_{\ell\mu,21} & = (z_\mu^+ - z_\mu^- {{\mathcal{S}}}_{\mu,22})^{-1} {{\mathcal{S}}}_{\mu,21} \\ {{\mathcal{S}}}_{\ell\mu,22} & = (z_\mu^+ - z_\mu^- {{\mathcal{S}}}_{\mu,22})^{-1} (z_\mu^+ {{\mathcal{S}}}_{\mu,22} - z_\mu^-). \end{aligned}$$ The buffer effect due to the velocity mismatch at buffer-ring interfaces still remains in the above expressions. However, one can recover the original form of the scattering matrix by redefining the controlling parameter $\epsilon$. 
In other words, one can easily prove that the above scattering matrix can be rewritten as $$\begin{aligned} {{\mathcal{S}}}_{\ell\mu,ij}(\epsilon,U_b) = {{\mathcal{S}}}_{ij}(\epsilon'_\mu)\end{aligned}$$ with $$\begin{aligned} \label{eq:e} \epsilon'_\mu(\epsilon,U_b) = \frac{(v_\mu/v_b) \epsilon}{(z_\mu^+ + \zeta z_\mu^- \sqrt{1 - 2\epsilon})^2}.\end{aligned}$$ Note that $0 \le \epsilon'_\mu \le 1/2$ for $0 \le \epsilon \le 1/2$ and $0 < v_b < \infty$, as expected. This implies that in the cases where the spin-separate treatment is possible, the only role of the buffer is to renormalize the tunneling parameter $\epsilon$ through [Eq. (\[eq:e\])]{}. Hence the buffer is unnecessary, and the junction can be characterized by a single parameter $\epsilon'_\mu$ of arbitrary value. However, our formalism reveals a possible origin of spin-dependent values of $\epsilon'_\mu$: the difference between $\epsilon'_\mu$ for the two spins stems from the different group velocities $v_\mu$ in the ring and the consequent difference in the magnitude of the velocity mismatch at the junction. Even though it is conventional in the literature to assign the same value of $\epsilon$ to both spins, it is physically more correct to use different tunneling parameters for the two spin components, as our formalism shows. Nonorthogonal Spin States\[sec:nonorthogonal\] ============================================== As an application of our formalism, we consider the $n$-type semiconductor ring with both the Rashba SOC and the Zeeman splitting. First, we set up the lead-arm scattering matrix in this case and examine its features. After that, the spin-resolved transport through the ring is investigated.
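Before moving on, the bound stated after [Eq. (\[eq:e\])]{} is easy to verify numerically. Below is a minimal sketch (the function name is ours, not from the paper; the sign factor $\zeta$ from the buffer construction is taken as $+1$ for illustration):

```python
import numpy as np

def eps_prime(eps, v_ratio, zeta=+1.0):
    """Renormalized tunneling parameter of Eq. (eq:e).

    eps     : bare junction parameter, 0 <= eps <= 1/2
    v_ratio : group-velocity mismatch v_mu / v_b
    zeta    : sign factor from the buffer construction (assumed +1 here)
    """
    z_plus = 0.5 * (1.0 + v_ratio)    # z_mu^+
    z_minus = 0.5 * (1.0 - v_ratio)   # z_mu^-
    return v_ratio * eps / (z_plus + zeta * z_minus * np.sqrt(1.0 - 2.0 * eps)) ** 2

# With no velocity mismatch (v_mu = v_b) the parameter is untouched:
no_mismatch = eps_prime(0.3, 1.0)
```

A grid scan confirms that $\epsilon'_\mu$ never leaves $[0,1/2]$; numerically, the upper bound is saturated at one particular $\epsilon$ for each velocity mismatch, so the buffer can emulate any physically allowed junction transparency.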
Setup of Scattering Matrix -------------------------- The Rashba spin-orbit interaction in the ring geometry is given by $$\begin{aligned} \begin{split} H_{\rm SO} & = \frac{\alpha}{\rho_0} \left[ (\sigma_x \cos\phi + \sigma_y \sin\phi) \left(-i{\frac{\partial^{}}{\partial{\phi}^{}}} - f\right) \right. \\ & \qquad\qquad\left.\mbox{} + \frac{i}{2} (\sigma_x \sin\phi - \sigma_y \cos\phi) \right]. \end{split}\end{aligned}$$ It is straightforward to calculate the eigenstates of the ring Hamiltonian, [Eq. (\[eq:Hring\])]{} and we obtain, for a given energy $E \ge E_+(\gamma_R,\gamma_Z)$, four eigenstates[@eigenstate_validity] \[eq:evec\] $$\begin{aligned} \psi_+^\varrho(\phi) & = e^{i(k_+^\varrho + f)\phi} \begin{bmatrix} e^{-i\phi/2} \cos\frac{\theta_+^\varrho}{2} \\ e^{+i\phi/2} \sin\frac{\theta_+^\varrho}{2} \end{bmatrix} \\ \psi_-^\varrho(\phi) & = e^{i(k_-^\varrho + f)\phi} \begin{bmatrix} - e^{-i\phi/2} \sin\frac{\theta_-^\varrho}{2} \\ e^{+i\phi/2} \cos\frac{\theta_-^\varrho}{2} \end{bmatrix}, \end{aligned}$$ where the wave numbers are the solutions of $$\begin{aligned} \label{eq:k} \frac{E}{E_0} = [k_\mu^\varrho]^2 + \mu \sqrt{(\gamma_Z - k_\mu^\varrho)^2 + (\gamma_R k_\mu^\varrho)^2} + \frac14\end{aligned}$$ with dimensionless constants $$\begin{aligned} \gamma_Z \equiv \frac{g^*\mu_BB/2}{E_0} \quad\text{and}\quad \gamma_R \equiv \frac{\alpha/\rho_0}{E_0}.\end{aligned}$$ Here $E_+(\gamma_R,\gamma_Z)$ is the energy bottom of the upper spin branch ($\mu = +$), and the angles are defined via \[eq:tiltangle\] $$\begin{aligned} \cos\theta_\mu^\varrho & = \frac{\gamma_Z - k_\mu^\varrho} {\sqrt{(\gamma_Z - k_\mu^\varrho)^2 + (\gamma_R k_\mu^\varrho)^2}} \\ \sin\theta_\mu^\varrho & = \frac{\gamma_R k_\mu^\varrho} {\sqrt{(\gamma_Z - k_\mu^\varrho)^2 + (\gamma_R k_\mu^\varrho)^2}}. 
\end{aligned}$$ Note that the spin textures of the eigenstates are all crownlike, as in the Rashba SOC-only case: the effective magnetic field for each eigenstate has radial and $z$-directional components whose relative strength is determined by the angle $\theta_\mu^\varrho$. However, in this case, the angles $\theta_\mu^\varrho$ are all different, which may lead to complicated (energy-dependent) spin precession along the ring. On the reversal of the Zeeman splitting, [Eqs. (\[eq:k\])]{} and (\[eq:tiltangle\]) guarantee the following relations: $$\begin{aligned} \label{eq:ksym} k_\mu^\varrho(\gamma_Z) = -k_\mu^{\bar\varrho}(-\gamma_Z) \quad\text{and}\quad \theta_\mu^\varrho(\gamma_Z) = \theta_\mu^{\bar\varrho}(-\gamma_Z) + \mu\pi.\end{aligned}$$ Both the parameters $\gamma_Z$ and $f$ are proportional to the magnetic field $B$, and their ratio is fixed to $$\begin{aligned} \label{eq:ratio} \frac{\gamma_Z}{f} = g^* \frac{m^*}{m},\end{aligned}$$ where $m$ is the electron mass in vacuum. In solids, the effective mass of electrons can be much smaller than its vacuum value, so the dimensionless flux $f$ can sweep through successive integers with only a negligible change in $\gamma_Z$. In order to build the lead-ring scattering matrices, [Eq. (\[eq:S\])]{}, one needs to construct the appropriate matrices ${{\mathcal{U}}}^\rho(\phi)$ and ${{\mathcal{V}}}^\varrho(\phi)$.
By using the above eigenstates and the velocity operator $$\begin{aligned} v_\phi = \frac{\hbar}{m\rho_0} \left(- i{\frac{\partial^{}}{\partial{\phi}^{}}} - f\right) + \frac{\alpha}{\hbar} (\sigma_x \cos\phi + \sigma_y \sin\phi),\end{aligned}$$ the matrices for the $n$-type semiconductor ring are found to be $$\begin{aligned} {{\mathcal{U}}}^\rho(\phi) & = \begin{bmatrix} e^{-i\phi/2} \cos\frac{\theta_+^\varrho}{2} & -e^{-i\phi/2} \sin\frac{\theta_-^\varrho}{2} \\ e^{+i\phi/2} \sin\frac{\theta_+^\varrho}{2} & e^{+i\phi/2} \cos\frac{\theta_-^\varrho}{2} \end{bmatrix} \\ {{\mathcal{V}}}^\rho(\phi) & = \varrho \frac{\hbar}{m\rho_0} \left( \begin{bmatrix} k_+^\varrho e^{-i\phi/2} \cos\frac{\theta_+^\varrho}{2} & - k_-^\varrho e^{-i\phi/2} \sin\frac{\theta_-^\varrho}{2} \\ k_+^\varrho e^{+i\phi/2} \sin\frac{\theta_+^\varrho}{2} & k_-^\varrho e^{+i\phi/2} \cos\frac{\theta_-^\varrho}{2} \end{bmatrix} + \frac{1}{2\cos\theta_R} \begin{bmatrix} - e^{-i\phi/2} \cos\frac{2\theta_R-\theta_+^\varrho}{2} & - e^{-i\phi/2} \sin\frac{2\theta_R-\theta_-^\varrho}{2} \\ - e^{+i\phi/2} \sin\frac{2\theta_R-\theta_+^\varrho}{2} & e^{+i\phi/2} \cos\frac{2\theta_R-\theta_-^\varrho}{2} \end{bmatrix} \right) \end{aligned}$$ with the Rashba angle $\theta_R$ defined via $$\begin{aligned} \cos\theta_R \equiv - \frac{1}{\sqrt{1 + \gamma_R^2}} \quad\text{and}\quad \sin\theta_R \equiv \frac{\gamma_R}{\sqrt{1 + \gamma_R^2}}.\end{aligned}$$ These matrices enter into [Eq. (\[eq:Z\])]{} and determine the lead-arm scattering matrices in [Eq. (\[eq:S\])]{} once the injection and detection spin axes are fixed through ${{\mathcal{U}}}_\ell$. For a closed ring, the single-valued condition quantizes the ring levels: $$\begin{aligned} \label{eq:ringlevel} n = k_\mu^\varrho(E) + f - \frac12,\end{aligned}$$ where $n$ is any integer. 
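Numerically, the four wave numbers at a given energy are conveniently obtained from the quartic that results from squaring [Eq. (\[eq:k\])]{}; the branch index $\mu$ is then recovered from the sign of the square-root term. A minimal sketch (function name ours), with all energies in units of $E_0$:

```python
import numpy as np

def ring_wavenumbers(E, gamma_R, gamma_Z):
    """Solve E = k^2 + mu*sqrt((gamma_Z - k)^2 + (gamma_R*k)^2) + 1/4
    for the wave numbers k_mu^rho.

    Squaring the dispersion gives the quartic
      k^4 - (2A + 1 + gamma_R^2) k^2 + 2*gamma_Z*k + (A^2 - gamma_Z^2) = 0
    with A = E - 1/4, and mu = sign(A - k^2) undoes the squaring.
    Returns a list of (mu, k) pairs sorted by k."""
    A = E - 0.25
    roots = np.roots([1.0, 0.0, -(2.0 * A + 1.0 + gamma_R**2),
                      2.0 * gamma_Z, A**2 - gamma_Z**2])
    ks = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
    return [(+1 if A - k**2 > 0.0 else -1, k) for k in ks]

# For E well above the upper branch bottom, all four roots are real:
modes = ring_wavenumbers(2.0, 0.1, 0.2)
```

Each pair $(\mu, k)$ returned this way satisfies the original dispersion relation, which makes the squaring step easy to validate.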
Lead-Arm Scattering Matrix -------------------------- ![(color online) Reflection amplitudes as functions of $U_b(\le E)$ for different values of $\gamma_Z$: 0 (solid), $\gamma_R/2$ (dotted), $\gamma_R$ (dashed), and $2\gamma_R$ (dot-dashed). Here we set $\epsilon = 1/4$, $\gamma_R = 0.1$, and $E = 1.02\times E_+(\gamma_R=0.1,\gamma_Z=0.2)$. The spin polarization in the lead is set to be along the $x$ axis: $(\vartheta_{\rm L},\varphi_{\rm L}) = (\pi/2,0)$. The arrows indicate the trend with increasing $\gamma_Z$.[]{data-label="fig:2"}](Fig2){width="6cm"} In this section we examine the matrix elements of the lead-arm $S$-matrix, [Eq. (\[eq:S\])]{}, in the presence of both the Rashba SOC and Zeeman terms. For later use, the matrix elements of the $S$-matrix for the left junction ($\ell = \rm L$) are denoted by $$\begin{aligned} {{\mathcal{S}}}_{\rm L,11} = \begin{bmatrix} r_{++} & r_{+-} \\ r_{-+} & r_{--} \end{bmatrix} \ \text{and}\quad {{\mathcal{S}}}_{\rm L,21} = \begin{bmatrix} t_{u++} & t_{u+-} \\ t_{u-+} & t_{u--} \\ t_{d++} & t_{d+-} \\ t_{d-+} & t_{d--} \end{bmatrix}.\end{aligned}$$ First, we focus on the spin-flip scattering taking place on the lead side. [Figure \[fig:2\]]{} shows the dependence of the reflection amplitudes $|r_{\mu\mu'}|^2$ on $U_b$ for different values of $\gamma_Z$ with $\gamma_R$ fixed at a finite value. In the absence of the Zeeman splitting $(\gamma_Z = 0)$, we obtain $|r_{++}|^2 = |r_{--}|^2$ and $|r_{+-}|^2 = |r_{-+}|^2 = 0$, as expected. We numerically confirmed that this holds regardless of the polarization axis $(\vartheta_\ell, \varphi_\ell)$, the Rashba SOC strength $\gamma_R$, and the junction parameters $\epsilon$ and $U_b$. That is, no spin-flip reflection takes place when only the Rashba SOC exists. In this case the role of $U_b$ is simply to renormalize $\epsilon$ \[see [Eq. (\[eq:e\])]{}\] as displayed in [Fig.
\[fig:2\]]{}(a) and (b): The perfect transmission can happen at some values of $U_b$ even though $\epsilon = 1/4 < 1/2$ is used. ![(color online) Contour plot of spin-flip reflection amplitude $|r_{+-}|^2$ as a function of $E$ and $\vartheta_{\rm L}$. Here we set $\epsilon = 1/2$, $U_b = 0$, $\gamma_R = 0.1$, $\gamma_Z = 2\gamma_R$, $\varphi_{\rm L} = 0$. The energy ranges from $E_+(\gamma_R=0.1,\gamma_Z=0.2)$ to $1.02 E_+(\gamma_R=0.1,\gamma_Z=0.2)$.[]{data-label="fig:3"}](Fig3){width="7cm"} ![(color online) Transmission amplitudes $|t_{u\mu+}|^2$ and $|t_{d\mu+}|^2$ as functions of $U_b$ for spin $+$ injection from the lead. Values of parameters and plot styles are the same as in [Fig. \[fig:2\]]{} except $E = 1.05\times E_+(\gamma_R=0.1,\gamma_Z=0.2)$.[]{data-label="fig:4"}](Fig4){width="8.5cm"} On the other hand, the spin-conserving feature of the reflection is no longer valid as soon as the Zeeman splitting is switched on. [Figure \[fig:2\]]{}(c) clearly shows that the spin-flip reflection occurs for finite values of $\gamma_Z$ and its amplitude, $|r_{+-}|^2 = |r_{-+}|^2$, increases with $\gamma_Z$. The spin-flip reflection depends sensitively on the incident energy $E$ and the polarization axis $(\vartheta_\ell,\varphi_\ell)$ as well as on $U_b$, as can be seen in [Fig. \[fig:3\]]{}. It modulates with the spin polarization axis in the lead and, more importantly, decreases rapidly with increasing $E$. Although the amplitude of spin-flip scattering can be considerable close to the band bottom, $E_+$, it becomes negligibly small when the incident energy $E$ is well above the band bottom. This explains why the previous works [@Yi1997apr; @Frustaglia2001nov; @Hentschel2004apr] did not notice the breakdown of current conservation with their incorrect $S$-matrix: unless the energy is close to the band bottom, the spin-flip scattering makes only a small contribution to the total current.
However, its presence, though small, is important to fulfill both the current conservation and the correct matching of the wave function. [Figure \[fig:4\]]{} displays the transmission amplitudes $|t_{u\mu+}|^2$ and $|t_{d\mu+}|^2$ as functions of $U_b$ for spin $\mu=+$ injection from the lead. In the absence of the Zeeman splitting, $|t_{u++}|^2 = |t_{d-+}|^2$ and $|t_{u-+}|^2 = |t_{d++}|^2$ hold no matter what values the other parameters take. Similar relations can be found for spin $-$ injection as well. This is because the eigenstates $\varphi_+^+(\phi)$ and $\varphi_+^-(\phi)$ make time-reversal pairs with $\varphi_-^-(\phi)$ and $\varphi_-^+(\phi)$, respectively. However, the introduction of a finite Zeeman splitting breaks the time-reversal symmetry of the system, and the balance between the transmission coefficients is lost. The transmission amplitudes for different $\mu$ and $\varrho$ behave differently with increasing $\gamma_Z$ because the group velocities $v_\mu^\varrho$ are all different and the spin overlaps between the injected wave and the eigenstates also become different from each other. Note that the transmission amplitudes are not necessarily smaller than unity since it is the current, not the tunneling coefficient, that satisfies the unitarity condition. ![(color online) Charge currents $J_{\rm L}$, $J_{\rm U}$, and $J_{\rm D}$ as functions of $U_b$ with respect to a unit spin $+$ polarized current $(v_0 = 1)$ from the lead. Values of parameters and plot styles are the same as in [Fig.
\[fig:4\]]{}.[]{data-label="fig:5"}](Fig5){width="7cm"} ![(color online) Contour plots of effective control parameters $\epsilon_+$ \[(a)\] and $\epsilon_-$ \[(b)\] as functions of $\epsilon$ and $U_b$ for $\gamma_R = 0.1$, $\gamma_Z = 2\gamma_R$, $E = 1.05\times E_+(\gamma_R=0.1,\gamma_Z=0.2)$, and $(\vartheta_{\rm L},\varphi_{\rm L}) = (\pi/2,0)$.[]{data-label="fig:6"}](Fig6a "fig:"){width="5.5cm"}\ ![(color online) Contour plots of effective control parameters $\epsilon_+$ \[(a)\] and $\epsilon_-$ \[(b)\] as functions of $\epsilon$ and $U_b$ for $\gamma_R = 0.1$, $\gamma_Z = 2\gamma_R$, $E = 1.05\times E_+(\gamma_R=0.1,\gamma_Z=0.2)$, and $(\vartheta_{\rm L},\varphi_{\rm L}) = (\pi/2,0)$.[]{data-label="fig:6"}](Fig6b "fig:"){width="5.5cm"}\ As proposed in our formalism, the charge current conservation, $J_{\rm L} + J_{\rm U} - J_{\rm D} = 0$ is well satisfied as shown in [Fig. \[fig:5\]]{}. Interestingly, the time-reversal breaking and its consequences on the transmission amplitudes do not invalidate the symmetric scattering to two arms imposed on the raw $S$-matrix, [Eq. (\[eq:S0\])]{}. As can be seen from [Fig. \[fig:5\]]{}, the normalized currents in both arms are observed to always satisfy $J_{\rm U} = - J_{\rm D}$. One would guess that the imbalances in the transmission amplitudes \[see [Fig. \[fig:4\]]{}\] and non-orthogonality of the eigenstates lead to the asymmetry between the scatterings at upper and lower buffer-arm interfaces. However, our calculations show that the symmetric property of the junction remains untouched based on the fact that the raw $S$-matrix is symmetric and the upper and lower arms are identical. Finally, we extract the effective spin-dependent control parameters $\epsilon_\mu$ from $$\begin{aligned} \epsilon_\mu \equiv \frac{1 - \sum_{\mu'} |r_{\mu'\mu}|^2}{2}\end{aligned}$$ as a function of $\epsilon$ and $U_b$ in [Fig. \[fig:6\]]{}. 
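The extraction above amounts to summing the reflection probabilities for each injected spin; a short sketch (helper name ours, not from the paper):

```python
import numpy as np

def effective_eps(r_block):
    """Effective control parameters (eps_+, eps_-) from the 2x2
    reflection block S_L11 = [[r_pp, r_pm], [r_mp, r_mm]],
    where the column index labels the injected spin mu."""
    r = np.asarray(r_block)
    # eps_mu = (1 - sum_{mu'} |r_{mu' mu}|^2) / 2, summed over outgoing spins
    return 0.5 * (1.0 - (np.abs(r) ** 2).sum(axis=0))

# Spin-diagonal example: |r_++| = 0.6, |r_--| = 0.8, no spin flip.
eps_plus, eps_minus = effective_eps([[0.6, 0.0], [0.0, 0.8]])
```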
As expected, $\epsilon_\mu$ depends sensitively on $U_b$ and is spin-dependent: $\epsilon_+ \ne \epsilon_-$. Moreover, it also depends on ring properties such as the strengths of the Rashba SOC and the Zeeman term, so that, in contrast to conventional scattering theory, the scattering at a junction is not determined solely by the junction itself but is affected by the arm properties as well. Aharonov-Bohm Interferometry ---------------------------- In this section we investigate the charge and spin transport through the Aharonov-Bohm type interferometer in the presence of both the Rashba SOC and Zeeman terms. We divide the study into two regimes: the weak- and strong-coupling limits. In the weak-coupling regime, where the effective control parameters $\epsilon_\mu$ are small, the transport features the quantized levels in the ring, while in the strong-coupling regime the interference between the eigenstates is important. ### Weak-Coupling Limit ![(color online) Contour plot of charge conductance $G$ in unit of $e^2/h$ as a function of $f$ and $\gamma_Z$ in the weak coupling limit with $\epsilon = 0.15$ and $U_b = 0$. Here we have used $\gamma_R = 0.4$ and $E = 2 E_+(\gamma_R=0.4,\gamma_Z=0.8)$. The white lines follow the linear relation between $f$ and $\gamma_Z$: $\gamma_Z = 0.1\times f$.[]{data-label="fig:7"}](Fig7){width="7.5cm"} ![(color online) (a) Charge conductance $G$, (b,c) spin-conserving transmission amplitudes $T_{++}$, $T_{--}$ (dotted lines) and spin-flip transmission amplitudes $T_{-+}$, $T_{+-}$ (solid lines), and (d) current polarization as functions of $f$ along the white lines in [Fig. \[fig:7\]]{} with $\gamma_Z = 0.1\times f$. Here the polarization axes of the two leads are chosen to align with the positive $x$ axis: $(\vartheta_\ell,\varphi_\ell) = (\pi/2,0)$. Values of other parameters are the same as in [Fig.
\[fig:7\]]{}.[]{data-label="fig:8"}](Fig8){width="8cm"} [Figure \[fig:7\]]{} shows a typical dependence of the charge conductance $G$ on $f$ and $\gamma_Z$ in the weak-coupling limit with $\epsilon = 0.15$ and $U_b = 0$. The high transmission (the bright lines) occurs when the quantization condition, [Eq. (\[eq:ringlevel\])]{}, is satisfied. Here the resonant tunneling via the quantized ring levels boosts the transmission. This boosting is not affected by the choice of the spin polarization axis in the leads: exactly the same charge conductance is obtained by taking the spin polarization axis along the $z$ axis instead of the $x$ axis used in [Fig. \[fig:7\]]{}. The conductance plot is symmetric with respect to the point $(f,\gamma_Z)=(0,0)$, which is attributed to the relations in [Eq. (\[eq:ksym\])]{}. In addition, the resonance lines exhibit an anti-crossing-like behavior, which is absent in the quantized levels themselves, [Eq. (\[eq:ringlevel\])]{}. The anti-crossing behavior originates from the Fano-like anti-resonance between two degenerate ring states whose spin polarizations are nearly parallel, leading to a large overlap between their wavefunctions. In this case, the injected state with any spin polarization has almost the same overlap with each of the degenerate ring states, resulting in destructive interference between them in the transmitted state. This happens mostly when the time-reversal pair states $(\varphi_+^+,\varphi_-^-)$ or $(\varphi_-^+,\varphi_+^-)$ cross, as seen in [Fig. \[fig:7\]]{}, and less frequently when the counter-propagating pair states $(\varphi_+^+,\varphi_+^-)$ or $(\varphi_-^+,\varphi_-^-)$ do. For the pairs $(\varphi_+^+, \varphi_-^+)$ or $(\varphi_+^-, \varphi_-^-)$, the spin polarizations are almost orthogonal to each other, so the transport through each state is almost independent of that through the other, and their transmission amplitudes are simply additive. In [Fig.
\[fig:8\]]{}(a) the charge conductance is calculated as a function of the external magnetic field $B$, or the normalized flux $f$, by taking into account the linear relation, [Eq. (\[eq:ratio\])]{}, between $\gamma_Z$ and $f$ with the ratio $g^* m^*/m = 0.1$, which is indicated by the white lines in [Fig. \[fig:7\]]{}. The charge conductance clearly exhibits four (or three) peaks as the magnetic flux is increased by one flux quantum $\Phi_0$. The accidental degeneracy in the ring levels enhances the conductance further, while it still remains smaller than the two-channel maximum value $2e^2/h$. The fluctuations in the peak heights are mainly due to the variation of the spin polarization axis of the ring eigenstates at the junctions. Each ring eigenstate, having the crownlike spin texture, brings about spin-flip transport, as shown in [Fig. \[fig:8\]]{}(b) and (c). While the peaks in the spin-flip transmission amplitudes (solid lines) are located at the same positions as those in the charge conductance, they alternate between $T_{+-}$ and $T_{-+}$: the $\mu=+$ levels give rise to the enhancement of $T_{-+}$ and the $\mu=-$ levels to that of $T_{+-}$. This level dependence is easily understood from the fact that the spin polarization of the $\mu=+/-$ level has an inward/outward radial component. Since the tilt angle $\theta_\mu^\varrho$ varies between 0 and $\pi$, however, the spin-flip amplitudes also fluctuate. In addition, each level also makes a comparable contribution to the spin-conserving transmissions, $T_{++} = T_{--}$ (dotted lines), which follow the behavior of the charge conductance. These spin-dependent transmissions enable an unpolarized input current to generate a spin-polarized output current. As seen in [Fig. \[fig:8\]]{}(d), the current polarization $P$ exhibits peaks and valleys whenever the spin-flip transmission is enhanced. However, since the spin blocking or spin flipping occurs only partially, its magnitude is usually much smaller than 1/2.
![(color online) (a,b) Spin-conserving transmission amplitudes $T_{++}$, $T_{--}$ (dotted lines) and spin-flip transmission amplitudes $T_{-+}$, $T_{+-}$ (solid lines), and (c) current polarization as functions of $f$ with the relation $\gamma_Z = 0.1\times f$. The condition $k^+_+ = \gamma_Z$ is exactly satisfied at point 1, and the energy $E$ is given by [Eq. (\[eq:E2\])]{} with respect to the solutions of [Eq. (\[eq:ringlevel2\])]{} for $n=6$. Here we have used $\epsilon = 0.1$, $U_b = 0$ and $\gamma_R = 0.4$. The red arrows indicate the points (1,2,3) where the spin switch is close to its maximum.[]{data-label="fig:9"}](Fig9){width="8cm"} In order to achieve complete spin polarization or spin flipping, the lead spin axis should be set to align with the (energy-dependent) spin polarization of the level at the junction. However, arbitrary tuning of the spin polarization of the lead is not easy to implement. Instead, one can adjust the spin polarization of the ring level to the predefined spin axis of the lead by tuning the external magnetic field. The formulas for the tilt angle, [Eq. (\[eq:tiltangle\])]{}, show that a special adjustment, $k_\mu^\varrho = \gamma_Z$, yields $\theta_\mu^\varrho = \pm\pi/2$, setting the spin polarization axis of the arm state at the junctions along the $x$ direction. The adjustment requires the energy $$\begin{aligned} \label{eq:E2} \frac{E}{E_0} = \gamma_Z^2 + |\gamma_Z \gamma_R| + \frac14\end{aligned}$$ (here $\mu = +$ is chosen) and the quantization condition $$\begin{aligned} \label{eq:ringlevel2} n = \gamma_Z + f - \frac12.\end{aligned}$$ From [Eq. (\[eq:ringlevel2\])]{}, together with [Eq. (\[eq:ratio\])]{}, the candidate values of the magnetic field $B$, or the normalized Zeeman splitting $\gamma_Z$, are obtained, and the energy is then determined through [Eq. (\[eq:E2\])]{}. [Figure \[fig:9\]]{} displays the variation of the transmission amplitudes with $n=6$ in [Eq. (\[eq:ringlevel2\])]{}.
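Since [Eq. (\[eq:ringlevel2\])]{} and the fixed ratio of [Eq. (\[eq:ratio\])]{} are both linear in $f$, the operating point of the spin switch follows in closed form; a sketch (function name ours) using the values $n=6$, $\gamma_R=0.4$, and $g^*m^*/m=0.1$ of [Fig. \[fig:9\]]{}:

```python
def spin_switch_point(n, gamma_R, ratio):
    """Operating point of the conditional spin switch.

    Solves n = gamma_Z + f - 1/2 together with gamma_Z = ratio * f
    (ratio = g* m*/m), then evaluates E/E_0 from Eq. (eq:E2)."""
    f = (n + 0.5) / (1.0 + ratio)
    gamma_Z = ratio * f
    E = gamma_Z**2 + abs(gamma_Z * gamma_R) + 0.25
    return f, gamma_Z, E

# Candidate flux f_1, Zeeman splitting, and injection energy for n = 6:
f1, gZ, E = spin_switch_point(6, 0.4, 0.1)
```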
At point 1 $(f = f_1)$, the two conditions, [Eqs. (\[eq:E2\])]{} and (\[eq:ringlevel2\]), are exactly satisfied with $k_+^+ = \gamma_Z$, so that $T_{+-}$ is almost at its maximum and the other amplitudes are negligible. Hence, a *conditional spin switch* is realized: spin $+$ is completely blocked while spin $-$ is completely flipped. At the same time, the maximal current polarization shown in [Fig. \[fig:9\]]{}(c) indicates that it can also work as a perfect spin polarizer for unpolarized injection. The opposite spin switch, which flips spin $+$ to $-$, can be implemented by reversing the direction of the external magnetic field so that the two conditions are satisfied with $k_+^- = \gamma_Z < 0$. Note that the behavior as a perfect spin switch or spin polarizer appears at $f \approx f_1 \pm 1$ (points 2 and 3) as well. This is due to the small ratio $g^* m^*/m = 0.1$ used in the calculations: $\gamma_Z$ does not change much over a few periods of $f$, so that the conditions, [Eqs. (\[eq:E2\])]{} and (\[eq:ringlevel2\]), are approximately satisfied at several values of $f$. The spin flip occurring at the junction discussed in the previous section would spoil the spin-switch efficiency by inducing tunneling into the other spin branch, and the spin tunneling cannot be determined only by the spin texture of the levels in the ring. However, we numerically confirmed that the observed spin-switch functionality is immune to variations of $\epsilon$ and $U_b$ as long as the effective control parameters $\epsilon_\mu$ are small enough. In fact, the spin flip is very weak if the injection energy is well above the band bottom $E_+$ \[see [Fig. \[fig:3\]]{}\]. This is the case for [Eq. (\[eq:E2\])]{} as long as $\gamma_R$ is large enough. One can then safely use the usual analysis of spin transport based on the spin precession in the ring with no spin flip at the junctions.
### Strong-Coupling Limit ![(color online) Contour plots of charge conductance $G$ in unit of $e^2/h$ as a function of $f$ and $\epsilon$ for (a) $\gamma_R = \gamma_Z = 0.4$ and $E = 2 E_+(\gamma_R=0.4,\gamma_Z=0.8)$ (refer to [Fig. \[fig:7\]]{}) and (b,c,d) $\gamma_R = 0.6$ and $E = 2 E_+(\gamma_R=0.6,\gamma_Z=1.3)$ with $\gamma_Z = 0$ \[(b)\], 0.7 \[(c)\], and 1 \[(d)\]. Here we have used $U_b = 0$ and the color scale is the same as in [Fig. \[fig:7\]]{}.[]{data-label="fig:10"}](Fig10){width="8.5cm"} ![(color online) Contour plot of charge conductance $G$ in unit of $e^2/h$ as a function of $f$ and $E/E_0$ in the strong-coupling limit $(\epsilon=1/2)$. We have used $U_b = 0$ and $\gamma_R = 0.4$, and the Zeeman splitting $\gamma_Z$ increases linearly with $f$: $\gamma_Z = 0.1\times f$. The color scale is the same as in [Fig. \[fig:7\]]{}.[]{data-label="fig:11"}](Fig11){width="8cm"} [Figure \[fig:10\]]{} shows the evolution of the charge conductance $G$ as the lead-ring junction becomes more transparent. The resonance feature due to the ring levels, though smeared out with increasing $\epsilon$, is still visible up to $\epsilon \sim 0.4$. For larger values of $\epsilon\gtrsim0.4$, the conductance peak positions no longer follow the quantization condition, [Eq. (\[eq:ringlevel\])]{}; instead, every four consecutive peaks in a period of $f$ merge into a single one located close to $f = n\pi$. In addition, a dip is formed between them. The dip appears between the time-reversal pair states if they are in succession, as can be seen in [Fig. \[fig:10\]]{} (a), (c), and (d). Interestingly, the anti-crossing-like behavior can be intensified as $\epsilon$ increases, as seen in [Fig. \[fig:10\]]{} (c), if the pair states are close to each other in the weak-coupling limit. In this case the transparent junction enhances the destructive interference between the two resonant levels.
The dip can also be formed in other places if the time-reversal pair is not in succession \[see [Fig. \[fig:10\]]{} (b)\]. In this case, the dip is less prominent, implying that the destructive interference is not strong enough. ![(color online) Contour plot of (a) spin-conserving transmission amplitudes $T_{++} = T_{--}$ and (b,c) spin-flip transmission amplitudes $T_{-+}$ \[(b)\] and $T_{+-}$ \[(c)\] as functions of $f$ and $E/E_0$ in the strong-coupling limit $(\epsilon=1/2)$. Here the polarization axes of the two leads are chosen to align with the positive $x$ axis: $(\vartheta_\ell,\varphi_\ell) = (\pi/2,0)$. Values of other parameters are the same as in [Fig. \[fig:11\]]{}.[]{data-label="fig:12"}](Fig12){width="8cm"} The charge transport in the strong-coupling limit $(\epsilon \sim 1/2)$, as seen in [Fig. \[fig:11\]]{}, clearly exhibits the well-known AB oscillations as the magnetic flux is varied. In addition, the Zeeman splitting $\gamma_Z$, increasing linearly with $f$, superposes line-shaped patterns upon the AB oscillations along which the conductance is suppressed. This suppression is due to a localization effect in the ring. For simplicity, consider the Rashba-free system. The analytical expression for the spin-dependent transmission amplitude is then available: $$\begin{aligned} T_\mu = \frac{4\epsilon_\mu^{\prime2} \cos^2\pi f \sin^2\pi \widetilde{k}_\mu} {\left| \epsilon'_\mu e^{2\pi i\widetilde{k}_\mu} - \cos 2\pi\widetilde{k}_\mu + \left(\frac{1-p_\mu}{2}\right)^2 + \left(\frac{1+p_\mu}{2}\right)^2 \cos2\pi f \right|^2}\end{aligned}$$ with $\epsilon'_\mu$ given by [Eq. (\[eq:e\])]{}, $p_\mu = \sqrt{1 - 2\epsilon'_\mu}$ and $\widetilde{k}_\mu = \sqrt{E/E_0 - \mu\gamma_Z}$. The transmission vanishes not only when $f = n + 1/2$ but also when $\widetilde{k}_\mu = n$, where $n$ is an integer. The latter condition means that the wave in the ring forms a standing wave, so that the state is localized and does not contribute to the transport.
Hence the conductance suppression happens at $E/E_0 = n^2 + \mu\gamma_Z$, making spin-dependent dark lines in the charge conductance \[see [Fig. \[fig:11\]]{}\]. The Rashba SOC, present in our system but rather small, makes a perturbative coupling between the spin-$\up$ and $\down$ states, inducing an anti-crossing of dark lines that would otherwise be degenerate. Finally, one can notice that in the lower right corner of [Fig. \[fig:11\]]{} (under the line $E/E_0 = 1 + \gamma_Z$) the charge conductance is strongly suppressed; the maximum is reduced by half, reaching $e^2/h$, not $2e^2/h$. This is because in this region $E < E_+$, so that only the spin-$-$ channel is open. The spin-$+$ channel exists only in evanescent waves, whose contribution decreases exponentially with $E_+-E$. ![(color online) (a) Charge conductance $G$, (b) spin-conserving transmission amplitudes $T_{++}$ (solid line), $T_{--}$ (dotted line), (c) spin-flip transmission amplitudes $T_{-+}$ (solid line), $T_{+-}$ (dotted line), and (d) current polarization as functions of $f$ along the $E = 9E_0$ line in [Fig. \[fig:12\]]{}. Here the polarization axes of the two leads are chosen to align with the positive $x$ axis: $(\vartheta_\ell,\varphi_\ell) = (\pi/2,0)$. Values of other parameters are the same as in [Fig. \[fig:12\]]{}.[]{data-label="fig:13"}](Fig13){width="8cm"} The spin transport in the strong-coupling limit is examined in [Fig. \[fig:12\]]{}. Similarly to the charge conductance, the spin-dependent transmissions feature the AB oscillations and the localization-induced dark-line patterns. In addition, they also exhibit a global modulation of the height of the AB peaks. Interestingly, the modulation patterns show opposite trends for the spin-conserving transmissions ($T_{++}$ and $T_{--}$) and the spin-flip transmissions ($T_{+-}$ and $T_{-+}$): when the spin-conserving transmissions are strong the spin-flip transmissions are weak, and vice versa. This opposing behavior is clearly displayed in [Fig.
\[fig:13\]]{}, where the charge and spin transmissions are calculated at a given injection energy, $E = 9E_0$. This global modulation of the spin-dependent transmission is related to the variation of $\gamma_Z$ with $f$: $\gamma_Z = 0.1\times f$ is used here. As a consequence, the non-adiabatic geometric phase connected to the Rashba SOC and the Zeeman splitting varies gradually and changes the interference between the ring modes, resulting in the modulation of the spin-dependent transmission. With the total charge transmission largely unchanged, a decrease of the spin-conserving transmission is then accompanied by an enhancement of the spin-flip transmission. Hence, in the parameter regime where the spin-conserving transmissions are negligible, an *unconditional spin switch* is implemented: the injected spin $+$ is switched to spin $-$ and vice versa. As can be seen in [Fig. \[fig:12\]]{} and [Fig. \[fig:13\]]{}, the parameter regime in which the system acts as a good spin switch is quite wide: the working condition encloses several periods of $f$ and a wide range of energies. This is attributed to the slow variation of the geometric phase with $f$. Finally, this system can also behave as a good spin polarizer for unpolarized current injection, as seen in [Fig. \[fig:13\]]{}(d). Since the maxima of $T_{-+}$ and $T_{+-}$ are not synchronized, the current polarization oscillates strongly between -0.4 and 0.4. The polarization of the spin current can then be easily tuned by changing the magnetic flux by half a flux quantum $\Phi_0/2$. Discussion and Conclusion\[sec:discussion\] =========================================== We have proposed a general scattering-matrix formalism that naturally guarantees charge conservation through a quantum ring with arbitrary spin-dependent interactions. To this end, we insert artificial SOC-free buffers in the vicinity of every junction and solve the system Hamiltonian in a standard way.
The original problem is recovered by shrinking the size of the buffers to zero, while the effect of the buffers still remains. It is found that as long as the ring has nonorthogonal spin textures, spin-flip scattering can occur even if the junction itself is nonmagnetic. In the case of an $n$-type semiconductor with both the Rashba SOC and the Zeeman splitting, the finite spin-flip scattering and the conservation of the charge current are numerically confirmed. In addition, it is found that the interplay of the AB and AC effects, in the presence of the Zeeman splitting, enables the ring interferometer to act as a conditional/unconditional spin switch in the weak/strong coupling limit. It should be noted that our formalism is not restricted to the structure of the AB interferometer used in this paper. The technique of inserting artificial buffers and shrinking them to zero can be applied to any network of semiconductors with arbitrary SOC. As stated above, the merit of our formalism is that charge conservation at the junctions is guaranteed as long as the interfaces between the buffers and the spin-dependent regions are treated correctly. While in our study we focus on the simplest scattering matrix by minimizing the number of physical parameters for the buffers, the scattering matrix can be generalized further by introducing some spin-dependent coupling into the buffers in a controlled way. The extended form of the scattering matrix may give a hint of the general structure of the scattering matrix connecting arbitrary spin-dependent channels subject to a single constraint: charge current conservation. It would be interesting to find the general form of the scattering matrix based on nothing other than the conservation law, without leaning on a specific model such as the buffers. This work is supported by grants from the Kyung Hee University Research Fund (KHU-20090742). The solution, [Eq. (\[eq:evec\])]{}, is valid only when the energy $E$ is greater than or equal to the bottom of the upper spin branch, $E_+$. Otherwise, the evanescent waves should be taken into account, or all four eigenstates belong to the lower spin branch.
--- abstract: 'A 1-ended finitely presented group has semistable fundamental group at $\infty$ if it acts geometrically on some (equivalently any) simply connected and locally finite complex $X$ with the property that any two proper rays in $X$ are properly homotopic. If $G$ has semistable fundamental group at $\infty$ then one can unambiguously define the fundamental group at $\infty$ for $G$. The problem, asking if all finitely presented groups have semistable fundamental group at $\infty$, has been studied for over 40 years. If $G$ is an ascending HNN extension of a finitely presented group then indeed $G$ has semistable fundamental group at $\infty$, but since the early 1980’s it has been suggested that the finitely presented groups that are ascending HNN extensions of [*finitely generated*]{} groups may include a group with non-semistable fundamental group at $\infty$. Ascending HNN extensions naturally break into two classes: those with bounded depth and those with unbounded depth. Our main theorem shows that bounded depth finitely presented ascending HNN extensions of finitely generated groups have semistable fundamental group at $\infty$. Semistability is equivalent to two weaker asymptotic conditions on the group holding simultaneously. We show one of these conditions holds for all ascending HNN extensions, regardless of depth. We give a technique for constructing ascending HNN extensions with unbounded depth. This work focuses attention on a class of groups that may contain a group with non-semistable fundamental group at $\infty$.'
author: - Michael Mihalik bibliography: - 'paper.bib' title: 'Bounded Depth Ascending HNN Extensions and $\pi_1$-Semistability at $\infty$' --- Introduction {#Intro} ============ If $H$ is a group, and $\phi:H\to H$ is a monomorphism, then the notation $\langle t,H:t^{-1}ht=\phi(h)\rangle$ stands for a presentation of a group $G$ with generators $\{t\}\cup H$ and relation set $\{t^{-1} ht=\phi( h)\hbox{ for all } h\in H\}$ union all relations for $H$. The group $G$ is usually denoted $H\ast_{\phi}$ and called an [*ascending HNN extension*]{} with [*base*]{} $H$ and [*stable letter*]{} $t$. By Britton’s lemma the obvious map of $H$ into $G$ is an isomorphism onto its image. If $F(\mathcal A)$ is the free group on the set $\mathcal A$, $\phi:\mathcal A\to F(\mathcal A)$ is a function and $\mathcal R$ is a set of $\mathcal A$-words, then the group $G$ with presentation $$\mathcal P=\langle t,\mathcal A:\mathcal R,t^{-1}at=\phi(a) \hbox { for all } a\in \mathcal A\rangle$$ is an ascending HNN extension of $A$, the subgroup of $G$ generated by $\mathcal A$. It is important to note that $\langle \mathcal A:\mathcal R\rangle$ need not be a presentation for $A$. For each integer $n>0$ and $r\in \mathcal R$, $\phi^n(r)$ may not be in the normal closure of $\mathcal R$ in $F(\mathcal A)$, but certainly $\phi^n(r)$ is a relator of $A$. In fact, when $\mathcal A$ is finite, one would rarely expect $A$ to be finitely presented. The relations $t^{-1}at=\phi(a)$ are called [*conjugation relations*]{}. Semistability of the fundamental group at $\infty$ for a finitely presented group is a geometric notion defined in $\S$\[ss\]. If a finitely presented 1-ended group $G$ has semistable fundamental group at $\infty$ then the fundamental group at $\infty$ of $G$ is independent of base ray. It is unknown if all finitely presented groups are semistable at $\infty$. 
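The observation that each $\phi^n(r)$ is a relator of $A$ (since $\phi^n(r)=t^{-n}rt^n$ in $G$ by the conjugation relations) can be made concrete with a small computation. The following sketch is illustrative only: the endomorphism `phi` and the relator `r` below are toy choices, not taken from the paper.

```python
def free_reduce(w):
    """Freely reduce a word; lowercase = generator, uppercase = its inverse."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return ''.join(out)

def apply_phi(w, phi):
    """Extend phi from the generators to all words of F(A)."""
    pieces = []
    for ch in w:
        img = phi[ch.lower()]
        # the image of an inverse letter is the inverse of the image
        pieces.append(img if ch.islower() else img[::-1].swapcase())
    return free_reduce(''.join(pieces))

# Toy endomorphism of F(a,b) and a toy relator r = a^2 b^{-2} (assumptions).
phi = {'a': 'ab', 'b': 'ba'}
relators = ['aaBB']
for n in range(3):
    relators.append(apply_phi(relators[-1], phi))
print(relators)   # r, phi(r), phi^2(r), phi^3(r): an infinite family of relators
```

Each iterate is again an $\mathcal A$-word that dies in $A$, which is exactly why $\langle \mathcal A:\mathcal R\rangle$ need not present $A$.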
To date, the strongest result in the theory of semistability and simple connectivity at $\infty$ for ascending HNN extensions is the following: [**(M. Mihalik [@HNN1])**]{} \[MM\] Suppose $H$ is a finitely presented group, $\phi:H\to H$ is a monomorphism, and $G=\langle t,H:t^{-1}ht=\phi(h)\rangle$ is the resulting HNN extension. Then $G$ is 1-ended and semistable at $\infty$. If, additionally, $H$ is 1-ended, then $G$ is simply connected at $\infty$. The line of proof used for this result fails when $H$ is only finitely generated, and it has been suggested since the 1980’s that a promising place to search for a group with non-semistable fundamental group at $\infty$ is among the finitely presented ascending HNN extensions with finitely generated base. More specifically, A. Ol’shanskii and M. Sapir [@OS1] and [@OS2] have constructed a finitely generated infinite torsion group $\bar {\mathcal H}$ and a finitely presented ascending HNN extension $\mathcal G$ of $\bar{\mathcal H}$, which has been suggested as a possible group with non-semistable fundamental group at $\infty$. In $\S$\[HNNcomb\], we show that the collection of finitely presented ascending HNN extensions of finitely generated groups is naturally divided into two classes: those with what is called [*bounded depth*]{} and those with [*infinite/unbounded depth*]{}. If the finitely generated base is finitely presented, then the resulting ascending HNN extension has bounded depth. The Ol’shanskii-Sapir group $\mathcal G$ has bounded depth and is semistable at $\infty$ by our main theorem. \[mainbd\] Suppose $G$ is a finitely presented ascending HNN extension of a finitely generated group $A$ and $G$ has bounded depth. Then $G$ has semistable fundamental group at $\infty$. Semistable fundamental group at $\infty$ for [*finitely generated*]{} groups was defined in the mid-1980’s ([@M4]). While we are not concerned with that notion here, the following result (Theorem 4, [@M4]) is connected to the ideas in this paper.
Suppose $G$ is an ascending HNN extension of a finitely generated 1-ended group $A$. If $A$ is semistable at $\infty$, then $G$ is semistable at $\infty$. To prove Theorem \[mainbd\] we use the main theorem of [@GGM1], which implies that a finitely presented group $G$ has semistable fundamental group at $\infty$ if and only if two (somewhat orthogonal) weaker semistability conditions hold for $G$. The rest of the paper is organized as follows. In $\S$\[ss\], we define semistability at $\infty$ for spaces and groups, and list a number of equivalent formulations of this notion. Two weaker notions, the semistability of a finitely generated subgroup $J$ in an overgroup $G$ and the co-semistability of $J$ in $G$, are defined. In $\S$\[basess\] we prove that if $A$ is an infinite finitely generated base group of a finitely presented ascending HNN extension $G$ and $t$ is the stable letter, then for any $N\geq 0$, $t^NAt^{-N}$ is semistable at $\infty$ in $G$ (regardless of depth). By the main theorem of [@GGM1] this reduces the proof of our main theorem to showing that $G$ satisfies the second semistability condition of [@GGM1]. In $\S$\[HNNcomb\] we review the combinatorial group theory of ascending HNN groups and define what it means for such a group to have bounded depth. Examples of Grigorchuk and of Ol’shanskii-Sapir of ascending HNN extensions with bounded depth are reviewed, and a method for constructing ascending HNN extensions with unbounded depth is given. In $\S$\[Smain\] the bulk of the proof of our main theorem is given. We show that if $G$ is an ascending HNN extension of a finitely generated group $A$, $\mathcal P$ is a finite HNN presentation with bounded depth for $G$, and $X$ is the Cayley 2-complex for $\mathcal P$, then for each compact subset $C$ of $X$, there is an integer $N(C)\geq 0$ such that $t^NAt^{-N}$ is co-semistable at $\infty$ in $X$ with respect to $C$.
We also prove a result (Theorem \[FP\]) that considers the case when $A$ is finitely presented and connects this case to several papers already in the literature. When $A$ is finitely presented and $C$ is compact in $X$, we show there is an integer $N(C)\geq 0$ and a compact set $Q(C)$ containing $C$ such that loops in $X-(t^{N}At^{-N}) Q$ are homotopically trivial in $X-(t^{N}At^{-N}) Q$. The basics of semistability at $\infty$ for groups {#ss} ================================================== Suppose $K$ is a locally finite connected CW complex. A [*ray*]{} in $K$ is a map $r:[0,\infty)\to K$. The space $K$ has [*semistable fundamental group at $\infty$*]{} if any two proper rays in $K$ converging to the same end are properly homotopic. If $C_0, C_1,\ldots $ is a collection of compact subsets of a 1-ended locally finite complex $K$ such that $C_i$ is a subset of the interior of $C_{i+1}$ and $\cup_{i=0}^\infty C_i=K$, and $r:[0,\infty)\to K$ is proper, then $\pi_1^\infty (K,r)$ is the inverse limit of the inverse system of groups: $$\pi_1(K-C_0,r)\leftarrow \pi_1(K-C_1,r)\leftarrow \cdots$$ This inverse system is pro-isomorphic to an inverse system of groups with epimorphic bonding maps if and only if $K$ has semistable fundamental group at $\infty$. When $K$ is 1-ended with semistable fundamental group at $\infty$, $\pi_1^\infty (K,r)$ is independent of proper base ray $r$. If for any compact set $C$ in $K$ there is a compact set $D$ in $K$ such that loops in $K-D$ are homotopically trivial in $K-C$ (equivalently the above inverse sequence of groups is pro-trivial), then $K$ is [*simply connected at $\infty$*]{}. There are a number of equivalent forms of semistability which are collected as Theorem 3.2 of [@CM2]. \[ssequiv\] [**(G. Conner and M. Mihalik [@CM2])**]{} Suppose $K$ is a locally finite, connected and 1-ended CW-complex. Then the following are equivalent: 1. $K$ has semistable fundamental group at $\infty$. 2.
For any proper ray $r:[0,\infty )\to K$ and compact set $C$, there is a compact set $D$ such that for any third compact set $E$ and loop $\alpha$ based on $r$ and with image in $K-D$, $\alpha$ is homotopic $rel\{r\}$ to a loop in $K-E$, by a homotopy with image in $K-C$. 3. For any compact set $C$ there is a compact set $D$ such that if $r$ and $s$ are proper rays based at $v$ and with image in $K-D$, then $r$ and $s$ are properly homotopic $rel\{v\}$, by a proper homotopy in $K-C$. If $K$ is simply connected, then a fourth equivalent condition can be added to this list: 4\. Proper rays $r$ and $s$ based at $v$ are properly homotopic $rel\{v\}$. If $G$ is a finitely presented group and $Y$ is a finite complex with $\pi_1(Y)=G$, then $G$ has [*semistable (respectively simply connected)*]{} fundamental group at $\infty$ if the universal cover of $Y$ has semistable (respectively simply connected) fundamental group at $\infty$. This definition only depends on the group $G$. In [@GGM1] we consider finitely generated groups acting (perhaps not co-compactly) as covering transformations on 1-ended CW complexes $X$ and we say what it means for such a group to be semistable at $\infty$ in $X$ with respect to a given compact subset of $X$. In this paper we need only consider a simpler notion. Suppose $A$ is a finitely generated infinite subgroup of a finitely presented 1-ended group $G$. Say $\mathcal A\cup \mathcal S$ is a finite generating set of $G$, where $\mathcal A$ generates $A$. Let $X$ be the Cayley 2-complex for some finite presentation $\mathcal P$ (with generating set $\mathcal A\cup \mathcal S$) of $G$. So $X$ is the simply connected 2-dimensional complex with 1-skeleton equal to the Cayley graph of $G$ with respect to $\mathcal A\cup \mathcal S$. The vertex set of $X$ is $G$ and each edge of $X$ is labeled by an element of $\mathcal A\cup \mathcal S$.
For each vertex $v$ of $X$ and relation $r$ of $\mathcal P$ there is a 2-cell with boundary equal to the edge path loop at $v$ with edge labels spelling the word $r$. Let $\ast$ be the identity vertex of $X$. Let $\Lambda(A, \mathcal A)\subset X$ be the Cayley graph of $A$ with respect to $\mathcal A$. If $g\in G$ and $q$ is an edge path in $g\Lambda$, then $q$ is called an $\mathcal A$-[*path*]{} in $X$. Note that $q$ is an $\mathcal A$-path if and only if each edge of $q$ is labeled by an element of $\mathcal A$. If $g\in G$ and $C$ is compact in $X$ then we say $gAg^{-1}$ is [*semistable at $\infty$ in*]{} $X$ (or in $G$) [*with respect to $C$*]{} if there is a compact set $D(C)\subset X$ such that if $r$ and $s$ are two proper edge path rays in $g\Lambda(A,\mathcal A)-D$ based at the same vertex $v\in gA$ then $r$ and $s$ are properly homotopic $rel\{v\}$ by a proper homotopy in $X-C$. This definition is equivalent to the one of [@GGM1]. If $gAg^{-1}$ is semistable at $\infty$ with respect to every compact subset of $X$, then we say $gAg^{-1}$ is semistable at $\infty$ in $X$ (or in $G$). If $A$ is 1-ended and semistable at $\infty$, then $gAg^{-1}$ is always semistable at $\infty$ in $X$ ($G$). In $\S$\[basess\] we prove: \[strongss\] If $G$ is a finitely presented ascending HNN extension of a finitely generated infinite group $A$ and $t$ is the stable letter, then for all $N\geq 0$, $t^NAt^{-N}$ is semistable at $\infty$ in $G$. The main theorem of [@GGM1] is significantly more general than Theorem \[GGM\]. In [@GGM1], the main result does not require an overgroup $G$ acting cocompactly on $Y$, only that $Y$ be 1-ended and for each compact subset $C$ of $Y$, the existence of a finitely generated group $J$ acting as covering transformations on $Y$ and satisfying conditions 1) and 2) below. The notion of a group $J$ being co-semistable at $\infty$ in a space is a bit technical and we define this afterwards. \[GGM\] [**(R. Geoghegan, C. Guilbault and M.
Mihalik [@GGM1])**]{} Suppose $G$ is a 1-ended finitely presented group acting cocompactly on a simply connected locally finite CW-complex $Y$. If for each compact set $C\subset Y$ there is an infinite finitely generated subgroup $J$ of $G$ such that 1\) $J$ is semistable at $\infty$ in $Y$ with respect to $C$ and 2\) $J$ is co-semistable at $\infty$ in $Y$ with respect to $C$, then $Y$ (and hence $G$) has semistable fundamental group at $\infty$. The converse of Theorem \[GGM\] is rather straightforward. In fact, if $Y$ (equivalently $G$) has semistable fundamental group at $\infty$, $C$ is any compact subset of $Y$, and $J$ is any infinite finitely generated subgroup of $G$, then conditions 1) and 2) hold for $J$ and $C$. Interestingly, our proof of the main theorem of this paper relies on selecting different groups $J$ for different compact sets $C$ satisfying 1) and 2). We apply Theorem \[GGM\] when $G$ is an ascending HNN extension of a finitely generated group $A$, and $G$ acts cocompactly on $Y$, the Cayley 2-complex of $G$ with respect to some finite HNN presentation $\mathcal P$ (see $\S$\[Intro\]). In our situation, all of the subgroups $J$ of Theorem \[GGM\] will have the form $t^NAt^{-N}$ for some $N\geq 0$. Proposition \[strongss\] resolves part 1) of Theorem \[GGM\] for all compact sets. All that remains to be shown is that for each compact set $C$ in $X$ there is an integer $N(C)\geq 0$ such that $t^NAt^{-N}$ is co-semistable at $\infty$ in $Y$ with respect to $C$. We now define what that means. Suppose $J$ is an infinite finitely generated group acting as covering transformations on the 1-ended, simply connected and locally finite CW-complex $Y$. A subset $S$ of $Y$ is [*bounded*]{} in $Y$ if $S$ is contained in a compact subset of $Y$. Otherwise $S$ is [*unbounded*]{} in $Y$. Let $q:Y\to J\backslash Y$ be the quotient map.
If $K$ is a subset of $Y$, and there is a compact subset $C_1$ of $Y$ such that $K\subset JC_1$ (equivalently $q(K)$ has image in a compact set), then $K$ is a $J$-[*bounded*]{} subset of $Y$. Otherwise $K$ is a $J$-[*unbounded*]{} subset of $Y$. If $r:[0,\infty)\to Y$ is a proper edge path ray and $qr$ has image in a compact subset of $J\backslash Y$ then $r$ is said to be $J$-[*bounded*]{}. Equivalently, $r$ is a $J$-bounded proper edge path ray in $Y$ if and only if $r$ has image in $J C_1$ for some compact set $C_1\subset Y$. Let $\ast$ be a base vertex in $Y$. When $r$ is $J$-bounded there is an integer $M$ (depending only on $C_1$ and fixed terms) such that each vertex of $r$ is (using edge path distance) within $M$ of a vertex of $J \ast\subset Y$. We say $J$ is [*co-semistable at $\infty$ in $Y$ with respect to the compact subset $C$ of $Y$*]{} if there is a compact subcomplex $C_1$ of $Y$ such that for each $J$-unbounded component $U$ of $Y-(JC_1)$, and any $J$-bounded proper ray $r$ in $U$, “loops in $U$ based on $r$ can be properly pushed to infinity along $r$, avoiding $C$”. More specifically: For any loop $\alpha:[0,1]\to U$ with $\alpha(0)=\alpha(1)=r(0)$ there is a proper homotopy $H:[0,1]\times [0,\infty)\to Y-C$ such that $H(t,0)=\alpha(t)$ for all $t\in [0,1]$ and $H(0,s)=H(1,s)=r(s)$ for all $s\in [0,\infty)$. Base group semistability in an ascending HNN extension {#basess} ====================================================== In this section we prove three lemmas that imply Proposition \[strongss\]. This shows that an infinite finitely generated base group is always semistable at $\infty$ in an ascending HNN extension (regardless of bounded or unbounded depth).
Begin with a finite presentation for a group $G$ which is an ascending HNN extension with base group a finitely generated group $A$ with finite set of generators $\mathcal A$: $$\mathcal P=\langle t, \mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for all }a\in \mathcal A\rangle$$ Here $\mathcal R$ is a finite subset of the free group $F(\mathcal A)$. Consider the homomorphism $P_0:G\to \mathbb Z$ that kills the normal closure of $A$. If $g\in G$ and $P_0(g)=N$, we say $g$ is in [*level*]{} $N$. Let $X$ be the Cayley 2-complex for the presentation $\mathcal P$ of $G$. Then $P_0$ can be extended to $P:X\to \mathbb R$ by taking each 2-cell corresponding to an element of $\mathcal R$ to $P_0(v)$ for any vertex $v$ of the cell, and if $D$ is a 2-cell corresponding to a conjugation relation $t^{-1}at=\phi(a)$ for $a\in \mathcal A$, then $P$ maps $D$ to the interval $[N,N+1]$ (where the edge of $D$ corresponding to $a\in \mathcal A$ is mapped by $P_0$ to $N$ and those corresponding to $\phi(a)$ are mapped to $N+1$), in the obvious way. \[push\] Let $e:[0,1]\to X$ be an edge in $X$ with $e(0)=v$, $e(1)=w$ and label $a\in \mathcal A$. Let $r_v$ and $r_w$ be the edge path rays at $v$ and $w$ (respectively) each of whose edges is labeled $t$. There is a proper map $H_e:[0,1]\times [0,\infty)\to X$ such that $H_e(t,0)=e(t)$, $H_e(0,t)=r_v(t)$, $H_e(1,t)=r_w(t)$ and $P(H_e([0,1]\times [N,N+1]))\subset [N,N+1]$. On $[0,1]\times [0,1]$ define $H_e$ to have image the 2-cell at $v$ with boundary label $at\phi(a^{-1})t^{-1}$. Iterate to define $H_e$ as in Figure 1. Note that if $\phi(a)$ has length $L$ then the image of $H_e$ on $[0,1]\times [1,2]$ consists of $L$ conjugation relation 2-cells (each of which is mapped by $P$ to $[1,2]$). ![image](HNNF8) Figure 1 To see that $H_e$ is proper, let $C$ be compact in $X$. Then $P(C)\subset [-N,N]$ for some integer $N\geq 0$. But then $H_e^{-1} (C)\subset [0,1]\times [0,N]$.
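The ladder construction in Lemma \[push\] replaces the label $a$ at height $0$ by $\phi(a)$ at height $1$, $\phi^2(a)$ at height $2$, and so on, and the properness arguments rest on the fact that these words grow at a controlled rate: if $L$ bounds the lengths of the generator images, then $|\phi^K(a)|\leq L^K$. A minimal sketch of this growth check, with an assumed toy substitution `phi` (not from the paper):

```python
# Toy substitution phi (an assumption); L bounds the generator-image lengths.
phi = {'a': 'aab', 'b': 'ab'}
L = max(len(v) for v in phi.values())    # here L = 3

def apply_phi(w):
    # Substitute phi letter by letter; inverses are not needed for this check.
    return ''.join(phi[ch] for ch in w)

w, lengths = 'a', []
for K in range(8):
    lengths.append(len(w))
    assert len(w) <= L ** K              # the bound |phi^K(a)| <= L^K
    w = apply_phi(w)
print(lengths)
```

For this particular `phi` the actual lengths grow like a linear recurrence well below the crude exponential bound, which is all the properness arguments require.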
Recall $\Lambda$ is the Cayley graph of $A$ with respect to $\mathcal A$ and we assume $\ast\in \Lambda\subset X$ where $\ast$ is the identity vertex. \[straight\] Suppose $C$ is compact in $X$. There are only finitely many $\mathcal A$-edges $e$ in $\Lambda$ such that the image of $H_e$ (see Lemma \[push\]) intersects $C$. If $v\in A$, let $r_v$ be the proper edge path ray at $v$, each of whose edges is labeled $t$. If $e$ is an edge of $\Lambda$ with initial point $v$, let $H_e$ be the proper homotopy of Lemma \[push\]. For any integers $S>R\geq 0$, $P(H_e([0,1]\times [R,S]))\subset [R,S]$. Say that $P(C)\subset [-N,N]$ for $N\geq 0$. Then for any edge $e$ of $\Lambda$, $$H_e([0,1]\times [N+1,\infty))\cap C=\emptyset$$ (since $P(C)\subset [-N,N]$ and $P H_e([0,1]\times [N+1,\infty))\subset [N+1,\infty)$). Let $L$ be the length of the longest word in $\{\phi(a_1),\ldots, \phi(a_n)\}$. So for any integer $K\geq 0$, the length of the $\mathcal A$-word $H_e([0,1]\times \{K\})$ is $\leq L^K$ (if $e$ has label $a\in\mathcal A$, then $H_e([0,1]\times \{K\})$ has label $\phi^K(a)$). For any edge $e$ of $\Lambda$ with initial vertex $v$, $$H_e([0,1]\times [0,N])\subset St^{L^N+N}(v).$$ There are only finitely many vertices $v$ of $\Lambda$ such that $St^{L^N+N}(v)\cap C\ne \emptyset$ and so there are only finitely many edges $e$ of $\Lambda$ such that the image of $H_e$ intersects $C$. \[string\]Suppose $s=(s_0,s_1,\ldots )$ is a proper edge path ray in $\Lambda\subset X$. If $v$ is the initial point of $s$ and $r_v$ is the edge path ray at $v$ each of whose edges is labeled $t$, then there is a proper homotopy $H_s:[0,\infty)\times [0,\infty)\to X$ of $s$ to $r_v$ $rel\{v\}$ defined so that $H_s$ restricted to $[N,N+1]\times [0,\infty)$ is $H_{s_N}$ (i.e. $H_s(N+x,y)=H_{s_N}(x,y)$ for all $(x,y)\in [0,1]\times [0,\infty)$). Since $H(0,y)=r_v(y)$ and $H(x,0)=s(x)$, $H$ is a homotopy of $r_v$ to $s$ $rel\{v\}$. It remains to show that $H$ is proper.
If $C$ is compact in $X$, then by Lemma \[straight\] there are only finitely many edges $e$ of $s$ such that the image of $H_e$ intersects $C$. Choose $N$ such that for all $n>N$, $H_{s_n}$ avoids $C$. Then $H_s^{-1}(C)=\cup_{i=0}^NH_{s_i}^{-1}(C)$. This last set is a finite union of compact sets since each $H_{s_i}$ is proper. [**(of Proposition \[strongss\])**]{} We show that for any integer $N\geq 0$, the group $t^NAt^{-N}$ is semistable at $\infty$ in $X$ ($G$). Let $C$ be compact in $X$. If $v\in A$, let $r_v$ be the proper edge path ray at $v$, each of whose edges is labeled $t$. If $e$ is an edge of $\Lambda$ with initial point $v$, let $H_e$ be the proper homotopy of Lemma \[push\]. By Lemma \[straight\] there are only finitely many edges $e$ of $\Lambda$ such that the image of $H_e$ intersects $t^{-N}C$. Choose $D$ compact such that $D$ contains $t^{-N}C$ and all of these edges. If $s$ and $s'$ are proper $\mathcal A$-rays at $v\in \Lambda-D$ then the proper homotopies $H_s$ and $H_{s'}$ of Lemma \[string\] both avoid $t^{-N}C$ so that both $s$ and $s'$ are properly homotopic $rel \{v\}$ to $r_v$ by homotopies in $X-t^{-N}C$. Combining $H_s$ and $H_{s'}$, we see that $s$ is properly homotopic $rel\{v\}$ to $s'$ by a homotopy $H$ in $X-t^{-N}C$. Now $t^NH$ is a proper homotopy $rel\{t^Nv\}$ of $t^Ns$ to $t^Ns'$ in $X-C$ and $t^NAt^{-N}$ is semistable at $\infty$ in $X$. Ascending HNN extension combinatorics {#HNNcomb} ===================================== Suppose $\mathcal A$ is a finite set, $\phi:F(\mathcal A)\to F(\mathcal A)$ is a homomorphism of the free group, $\mathcal R$ is a finite set of words in $F(\mathcal A)$ and $G$ is the (finitely presented) ascending HNN extension with the following HNN presentation: $$(\ast) \ \ \ \ \ \ \ \ \ \ \ \ \ \mathcal P=\langle t, \mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for all }a\in \mathcal A\rangle$$ The base group of this HNN extension is $A$, the subgroup of $G$ generated by $\mathcal A$.
In this paper, we are only interested in the case when $\mathcal A$ is finite. In order to define what it means for an ascending HNN extension to have bounded depth, we must first understand $ker(p)$ where $p$ is the homomorphism $p:F(\mathcal A)\to A$ (defined by $p(a)=a$ for $a\in \mathcal A$). Certainly $ker(p)$ contains $N_0(\mathcal R,\phi)\equiv N( \cup _{i=0}^\infty \phi^i (\mathcal R))$, where $N( \cup _{i=0}^\infty \phi^i (\mathcal R))$ is the normal closure of $ \cup _{i=0}^\infty \phi^i (\mathcal R)$ in $F(\mathcal A)$. But it may be that for some word $w\in F(\mathcal A)$ and some integer $m$, $\phi^m(w)\in N_0(\mathcal R,\phi)$, and $w\not \in N_0(\mathcal R,\phi)$. Then $w\in ker(p)$. Consider the normal subgroup of $F(\mathcal A)$: $$N^{\infty}(\mathcal R,\phi)\equiv \cup _{i=0}^{\infty} \phi^{-i}(N_0(\mathcal R,\phi))\triangleleft F(\mathcal A).$$ It is well known to experts that $\phi^{-i}(N_0(\mathcal R,\phi))< \phi^{-i-1}(N_0(\mathcal R,\phi))$ (see theorem \[rel\]) so that $N^{\infty}(\mathcal R,\phi)$ is an ascending union of normal subgroups of $F(\mathcal A)$ and that $N^{\infty}(\mathcal R,\phi)$ is the kernel of $p$, so $$A=\langle \mathcal A:N^{\infty}(\mathcal R,\phi)\rangle$$ If there is an integer $B$ such that $N^\infty(\mathcal R,\phi)=\cup _{i=0}^B\phi^{-i}(N_0(\mathcal R,\phi))$ then the presentation $\mathcal P$ of $G$ has [*bounded depth*]{}. Our main theorem shows that if $\mathcal P$ has bounded depth, then $G$ is semistable at $\infty$ (Theorem \[mainbd\]). It is not always the case that such ascending HNN extensions have bounded depth. (See Theorem \[Osin\].) As in $\S$\[basess\], $P_0:G\to \mathbb Z$ is the homomorphism that kills the normal closure of $A$. If $X$ is the Cayley 2-complex for the presentation $\mathcal P$ of $G$ given in $(\ast)$ (with vertex set $G$), then $P_0$ extends to $P:X\to \mathbb R$. If $g\in G$ and $P_0(g)=N$, $g$ is in [*level*]{} $N$. 
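The ascending chain $N_0(\mathcal R,\phi)\subset\phi^{-1}(N_0(\mathcal R,\phi))\subset\cdots$ and the bounded depth condition can be illustrated in a deliberately simplified abelianized toy model. Everything below is an assumption for illustration only, not the situation of the paper: abelianize $F(a)$ to $\mathbb Z$, so every subgroup is some $d\mathbb Z$, let $\phi(a)=a^2$ induce multiplication by $2$, and take $\mathcal R=\{a^6\}$, so $N_0$ corresponds to $6\mathbb Z$.

```python
from math import gcd

# Abelianized toy model (an illustrative assumption): F(a) -> Z, the subgroup
# dZ is represented by the integer d, and phi(a) = a^2 induces n -> 2n.
def preimage(d, q):
    """Return d' with d'Z = preimage of dZ under multiplication by q."""
    return d // gcd(d, q)

q = 2                      # phi = multiplication by 2
d0 = 6                     # N_0 = <6*2^i : i >= 0> = 6Z, coming from R = {a^6}
chain = [d0]
while True:
    nxt = preimage(chain[-1], q)
    if nxt == chain[-1]:   # the chain phi^{-i}(N_0) has stabilized
        break
    chain.append(nxt)
print(chain)               # 6Z < 3Z, then stable: bounded depth with B = 1
```

In this toy the chain stabilizes after one step, the analogue of a presentation with bounded depth; unbounded depth corresponds to a chain that never stabilizes. In the free (nonabelian) setting of the paper no such easy normal form for the subgroups is available, which is what makes the depth question subtle.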
\[loopkill\]An edge path loop in level $L$ of $X$, whose labeling defines an element of $\cup _{i=0}^B\phi^{-i}(N_0(\mathcal R,\phi))$, is homotopically trivial by a combinatorial homotopy $H$ such that $P(H)$ has image in $(-\infty,L+B]$. Note that if $\alpha$ is an edge path loop in level $L$ labeled by an element of $N(\mathcal R)$ (the normal closure of $\mathcal R$ in $F(\mathcal A)$) then $\alpha$ can be killed by a homotopy in level $L$. If $\alpha$ has initial vertex $v$ in level $L$ and labeling $\phi(r)$ for $r\in N(\mathcal R)$, then using only conjugation relations, $\alpha$ is homotopic to an edge path loop at $v$ with labeling $(t^{-1},\beta, t)$ where $\beta$ has labeling $r$ and image in level $L-1$. Since $\beta$ is homotopically trivial in level $L-1$, the loop $\alpha$ can be killed by a homotopy $H$ such that $P(H)$ has image in $[L-1,L]$. This homotopy only uses the homotopy that kills $\beta$ in level $L-1$ and the conjugation relation 2-cells connecting $\alpha$ and $\beta$. If $\alpha$ has label in $\phi^{-1}(N(\mathcal R))$ (so $\phi(\alpha)=r\in N(\mathcal R)$) then $\alpha$ can be killed by a homotopy $H$ such that $P(H)$ has image in $[L,L+1]$. If $A$ is finitely generated and the image of $\phi:A\to A$ has finite index in $A$, then $A$ is “commensurated” in $G$ and $G$ is semistable at $\infty$ (see Corollary 4.9 of [@CM2]). For $\mathcal A$ finite, the group $G=\langle t, \mathcal A: \mathcal R', t^{-1}at=\phi(a) \hbox{ for } a\in \mathcal A\rangle$ (with $\mathcal R' \subset F(\mathcal A)$) is an ascending HNN extension with [*bounded depth $D$ and root $\mathcal R$*]{} if the kernel of the homomorphism $p:F(\mathcal A)\to A$ (defined by $p(a)=a$ for all $a\in \mathcal A$) is $\phi^{-D}(N_0(\mathcal R,\phi))\equiv \phi^{-D}(N(\cup _{i=0}^\infty \phi^i (\mathcal R)))$ for some finite set of words $\mathcal R$ in $F(\mathcal A)$.
In this case, $G$ has finite presentation: $\langle t, \mathcal A:\mathcal R,t^{-1}at=\phi(a) \hbox{ for all } a \in \mathcal A\rangle$. \[G\] R. Grigorchuk ([@GR1] and [@GR2]) constructed a finitely generated infinite torsion group $G$ of intermediate growth having solvable word problem. He also showed that $G$ was the base group of a finitely presented ascending HNN extension (which is the first example of a finitely presented cyclic extension of an infinite torsion group). I. Lysënok [@L] produced the following recursive presentation of $G$: $$G\equiv \langle a,c,d:\sigma^n(a^2), \sigma^n((ad)^4), \sigma^n((adacac)^4), n\geq 0\rangle$$ where $\sigma (a)=aca, \sigma (c)=cd$ and $\sigma (d)=c$. It can be shown that the ascending HNN extension $E$ with presentation: $$\langle a,c,d,t: a^2=(ad)^4=(adacac)^4=1, t^{-1}at=aca, t^{-1}ct=dc, t^{-1}dt=c\rangle$$ has base group $G$ generated by $\{a,c,d\}$ and $E$ has bounded depth with root $\{a^2,c^2, d^2, (ad)^4, (adacac)^4\}$. The group $E$ was the first example of a finitely presented amenable but not elementary amenable group. In $\S$5 of [@M6], M. Mihalik shows that $E$ is simply connected at $\infty$. The notion of a finitely generated group being simply connected at $\infty$ is introduced in [@M6], and the group $G$ is shown to be simply connected at $\infty$. \[OS\] A. Ol’shanskii and M. Sapir [@OS1] and [@OS2] construct a finitely presented ascending HNN extension $\mathcal G$, where the base group $\bar{\mathcal H}$ is a finitely generated infinite torsion group. In contrast to Grigorchuk’s group (Example \[G\]), the base group has finite exponent, and $\mathcal G$ is not amenable (see Theorem 1.1 of [@OS1]). The group $\mathcal G$ has been suggested as a possible non-semistable at $\infty$ group, but it is clear from the equations (5)-(8) in $\S$1.2 of [@OS1] that $\mathcal G$ has an ascending HNN presentation with depth one, and so by our main theorem is semistable at $\infty$. We give a brief summary.
A finite set of words $\mathcal R$ is determined in $F_C=\langle c_1,\ldots, c_m\rangle$, a free group of rank $m$. A monomorphism $\phi:F_C\to F_C$ is defined and $\mathcal R'$ is defined to be $\cup_{i=1}^{\infty}\{\phi^i(r):r\in \mathcal R\}$. The base group of their ascending HNN extension has presentation $$\bar {\mathcal H}=\langle c_1,\ldots, c_m: \mathcal R\cup \mathcal V\cup \mathcal R'\rangle$$ where $\mathcal V$ is the set of elements $u^n$ for all $u\in F_C$ (and $n$ a fixed large odd number). In particular, $\bar{\mathcal H}$ is an infinite torsion group. A finitely presented ascending HNN extension of $\bar{\mathcal H}$ has the infinite presentation $$\mathcal G=\langle t, c_1,\ldots, c_m: t^{-1}c_it=\phi(c_i), \mathcal R\cup \mathcal R'\cup \mathcal V\rangle$$ (this follows from equation (7) of [@OS1]). Clearly the relations $\mathcal R'$ are a consequence of $\mathcal R$ and the conjugation relations, and so can be removed. It is then argued that each relation $v^n$ of $\mathcal V$ is of the form $\phi^{-1} (v')$ where $v'$ is a consequence of $\mathcal R$ and the conjugation relations. In particular, the above presentation of $\mathcal G$ can be reduced to the presentation $$\mathcal G=\langle t, c_1,\ldots, c_m: t^{-1}c_it=\phi(c_i), \mathcal R\rangle$$ and this presentation has depth 1. It seems unlikely that $\mathcal G$ has an ascending HNN presentation with depth 0. One might ask whether, for every integer $N>0$, there is a finitely presented ascending HNN group $\mathcal G_N$ with an ascending HNN presentation of depth $N$ but no such presentation of depth $N-1$. \[rel\] Suppose $G$ is the ascending HNN extension with finite presentation: $$\mathcal P=\langle t,\mathcal A: \mathcal R, t^{-1}at=\phi(a) \hbox{ for all } a\in \mathcal A\rangle$$ where $\phi:F(\mathcal A)\to F(\mathcal A)$ is a (finite rank) free group homomorphism. 
Then $A$, the subgroup of $G$ generated by $\mathcal A$, has presentation: $$A=\langle \mathcal A:N^{\infty}(\mathcal R,\phi)\equiv \cup_{i=0}^\infty\phi^{-i}(N(\cup _{j=0}^\infty \phi^j(\mathcal R)))\rangle.$$ Furthermore, we have the relations: 1. $\phi^{-i}(N(\cup _{j=0}^\infty \phi^j(\mathcal R)))\subset \phi^{-(i+1)}(N(\cup _{j=0}^\infty \phi^j(\mathcal R))) \hbox{ for all }i\geq 0,\hbox{ and}$ 2. $\phi(N^{\infty} (\mathcal R,\phi))\subset N^\infty(\mathcal R,\phi)=\phi^{-1}(N^\infty(\mathcal R,\phi))$ Note that $$\phi(N(\cup _{j=0}^\infty \phi^j(\mathcal R)))\subset N(\cup _{j=1}^\infty \phi^j(\mathcal R))\subset N(\cup _{j=0}^\infty \phi^j(\mathcal R))\hbox{ so that}$$ $$N(\cup _{j=0}^\infty \phi^j(\mathcal R))\subset \phi^{-1}( N(\cup _{j=0}^\infty \phi^j(\mathcal R)))$$ and so relation 1) follows. To simplify notation, let $N^\infty=N^\infty(\mathcal R,\phi)$ and $N_i=\phi^{-i}(N(\cup _{j=0}^\infty \phi^j(\mathcal R)))$ for $i\geq 0$, so that $N^\infty=\cup_{i=0}^\infty N_i$ and by 1), $N_i\subset N_{i+1}=\phi^{-1}(N_i)$. Suppose $a\in \phi^{-1}(N^{\infty})$. Then $\phi(a)\in N^{\infty}$ and so $\phi(a)\in N_i$ for some $i\geq 0$. Then $a\in \phi^{-1}(N_i)=N_{i+1}\subset N^{\infty}$ and we have shown that $\phi^{-1}(N^\infty)\subset N^\infty$. Next suppose $a\in N^\infty$. Then for some $i\geq 0$, $a\in N_i$. By 1), $a\in N_{i+1}=\phi^{-1}(N_i)\subset \phi^{-1}(N^\infty)$. We have shown that $N^\infty(\mathcal R,\phi)\subset \phi^{-1}(N^\infty(\mathcal R,\phi))$. Combining, we have $N^\infty= \phi^{-1}(N^\infty)$ and relation 2) follows. Let $A_1$ be the group with presentation $\langle \mathcal A:N^{\infty}(\mathcal R,\phi)\rangle$. To finish the theorem we must show that $A=A_1$. Let $p_1:F(\mathcal A)\to A_1$ (determined by $p_1(a)=a$ for all $a\in \mathcal A$) be the quotient homomorphism. By 2), the map $\phi_1:A_1\to A_1$ that extends the map $\phi_1(p_1(a))=p_1(\phi(a))$ for all $a\in \mathcal A$ is a homomorphism. 
This gives a commutative diagram: $$F(\mathcal A){\buildrel \phi\over \longrightarrow}F(\mathcal A)$$ $$\downarrow p_1\ \ \ \ \ \ \downarrow p_1$$ $$A_1\ \ \ {\buildrel \phi_1\over \longrightarrow}\ \ A_1$$ Next we show that $\phi_1$ is a monomorphism. Suppose $w_1\in ker(\phi_1)$. Let $w\in F(\mathcal A)$ be such that $p_1(w)=w_1$. Then $p_1(\phi(w))=1$ and so $\phi(w)\in ker(p_1)=N^\infty$ and $w\in \phi^{-1}(N^\infty) = N^\infty$. Then $w_1=p_1(w)=1\in A_1$ and $\phi_1$ is a monomorphism. Consider the ascending HNN extension: $$A_1\ast_\phi=\langle t, \mathcal A: N^\infty(\mathcal R,\phi), t^{-1}at=\phi(a) \hbox{ for all } a\in \mathcal A\rangle$$ with base group $A_1$. Since each relation in $N^\infty(\mathcal R,\phi)$ is a consequence of $\mathcal R$ and the conjugation relations, this group also has presentation $\mathcal P$. By Britton’s lemma $A=A_1$. Suppose $G$ has finite presentation $\langle t, \mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for } a\in \mathcal A\rangle$. Here $\phi:F(\mathcal A)\to F(\mathcal A)$ is a homomorphism. Let $N_0\equiv N(\cup _{j=0}^\infty \phi^j(\mathcal R))\triangleleft F(\mathcal A)$, $N_i\equiv \phi^{-i}(N_0)$ and let $A$ be the subgroup of $G$ generated by $\mathcal A$, so that $G$ is the ascending HNN extension with base $A$ and stable letter $t$. Let $p:F(\mathcal A)\to A$ be the homomorphism extending the map taking $a$ to $a$ for all $a\in \mathcal A$. It seems that there is some potential to find a finitely presented group that is not semistable at $\infty$ if one could find a finitely presented ascending HNN extension $\langle t, \mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for } a\in \mathcal A\rangle $ such that the ascending chain of normal subgroups $N_k$ of $F(\mathcal A)$ does not stabilize. The following approach gives a general method of constructing infinite depth ascending HNN presentations. 
In particular, when $A_0$ is a non-Hopfian group and $\phi_0:A_0\to A_0$ is an epimorphism with non-trivial kernel, then there is a corresponding ascending HNN extension with infinite depth. \[Osin\] Suppose the group $A_0$ has finite presentation $\langle \mathcal A:\mathcal R\rangle$ and $\phi_0:A_0\to A_0$ is a homomorphism with non-trivial kernel $K_0$ such that the following diagram (with $F({\mathcal A})$ the free group on $\mathcal A$ and $q(a)=a$ for $a\in \mathcal A$) commutes: $$F(\mathcal A){\buildrel \phi\over \longrightarrow}F(\mathcal A)$$ $$\ \ \downarrow q\ \ \ \ \ \ \downarrow q$$ $$\ \ A_0\ \ {\buildrel \phi_0\over \longrightarrow}\ A_0$$ If the ascending sequence $\{K_i=\phi_0^{-i}(K_0)=ker(\phi_0^{i+1})\}$ of normal subgroups of $A_0$ does not stabilize (in particular when $\phi_0$ is an epimorphism), then the group $G$ with ascending HNN presentation $$\mathcal P\equiv \langle t,\mathcal A:\mathcal R, t^{-1}at=\phi(a) \hbox{ for all }a\in \mathcal A\rangle$$ has unbounded depth. First observe that if $\phi_0$ is an epimorphism, and $k\in K_0-1$, then there is $k_n$ such that $\phi_0^n(k_n)=k$. In particular, $k_n\in ker(\phi_0^{n+1})-ker(\phi_0^n)$. Note that $ker(q)=N(\mathcal R)\triangleleft F(\mathcal A)$. 
If $r\in N(\mathcal R)$, then $q(\phi(r))=1$ and so $\phi(N(\mathcal R))\subset N(\mathcal R)$ and (retaining the notation of Theorem \[rel\]) $$N_0=N(\cup_{i=0}^\infty\phi^i(\mathcal R))=N(\mathcal R)=ker(q).$$ For the subgroup $A$ of $G$ determined by $\mathcal A$ there is a commutative diagram: $$F(\mathcal A){\buildrel \phi\over \longrightarrow}F(\mathcal A)$$ $$\downarrow p\ \ \ \ \ \ \downarrow p$$ $$A\ \ \ {\buildrel \phi_1\over \longrightarrow}\ \ A$$ Observe that $A$ is a quotient of $A_0$, where the element $q(a)$ is mapped to $p(a)$ for all $a\in\mathcal A$, and the following diagram commutes: $$A_0\ \ {\buildrel \phi_0\over \longrightarrow}\ \ A_0$$ $$\ \ \ \downarrow q_0\ \ \ \ \ \ \downarrow q_0$$ $$A\ \ \ {\buildrel \phi_1\over \longrightarrow}\ \ A$$ $(\ast)$ If $\phi_0$ is an epimorphism, then since $q_0$ is an epimorphism, $\phi_1$ is also an epimorphism. In any case, $G =A\ast_{\phi_1}$ and when $\phi_0$ is an epimorphism, $\phi_1$ is an isomorphism. Let $N_i=\phi^{-i}(N_0)\triangleleft F(\mathcal A)$. By Theorem \[rel\].1, $N_{i-1}\leq N_i$. For $n\geq 1$ we show $N_{n+1}\ne N_{n}$ when $K_n\ne K_{n-1}$, so that $\mathcal P$ has unbounded depth when $\{K_i\}$ does not stabilize. Choose $a_n\in K_n-K_{n-1}$, so that $\phi_0^{n}(a_n)\ne 1$ and $\phi_0^{n+1}(a_n)=1$. Choose $\bar a_n\in F(\mathcal A)$ such that $q(\bar a_n)=a_n$. Then $$q(\phi^{n}(\bar a_n))=\phi_0^{n}q(\bar a_n)=\phi_0^{n}(a_n)\ne 1$$ so $\phi^{n}(\bar a_n)\not\in N_0=ker(q)$ and $\bar a_n\not\in N_{n}$. But, $$q\phi^{n+1}(\bar a_n)=\phi_0(q(\phi^{n}(\bar a_n)))=\phi_0(\phi_0^{n}(a_n))=\phi_0^{n+1}(a_n)=1$$ so $\phi^{n+1}(\bar a_n)\in ker(q)=N_0$ and $\bar a_n\in N_{n+1}-N_{n}$. When $A_0$ is non-Hopfian and $\phi_0$ maps $A_0$ onto $A_0$ with non-trivial kernel, Theorem \[Osin\] produces a corresponding ascending HNN extension with unbounded depth. 
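The kind of free-group computation used in the Baumslag–Solitar example that follows can be checked mechanically. Below is a minimal Python sketch (an illustration only, not part of the paper): words in $F(\{a,b\})$ are strings with capital letters denoting inverses, and $\phi$ is the substitution $a\mapsto a^2$, $b\mapsto b$. It verifies, as freely reduced words, the identity $\phi^i([b^{-i}ab^i,a])=[b^{-i}a^{2^i}b^i,a^{2^i}]$.

```python
def freely_reduce(w):
    """Cancel adjacent inverse pairs (e.g. 'aA', 'Bb') with a stack."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return "".join(out)

def inv(w):
    """Inverse of a word: reverse it and invert each letter."""
    return w[::-1].swapcase()

def comm(x, y):
    """Commutator [x, y] = x y x^{-1} y^{-1}, freely reduced."""
    return freely_reduce(x + y + inv(x) + inv(y))

PHI = {"a": "aa", "A": "AA", "b": "b", "B": "B"}

def phi(w):
    """The endomorphism of F({a,b}) with a -> a^2, b -> b."""
    return freely_reduce("".join(PHI[x] for x in w))

i = 3
lhs = comm("B" * i + "a" + "b" * i, "a")   # [b^{-i} a b^i, a]
for _ in range(i):
    lhs = phi(lhs)                          # apply phi i times
rhs = comm("B" * i + "a" * 2**i + "b" * i, "a" * 2**i)
assert lhs == rhs   # phi^i([b^{-i}ab^i, a]) = [b^{-i}a^{2^i}b^i, a^{2^i}]
```

The further step in the example, rewriting $b^{-i}a^{2^i}b^i$ to $a^{3^i}$, uses the Baumslag–Solitar relation $b^{-1}a^2b=a^3$ and so takes place in the quotient, not in the free group; the sketch only checks the free-group part.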
Let $A_0=BS(2,3)=\langle a,b:b^{-1}a^2b=a^3\rangle$ and define $\phi:F(\{a,b\})\to F(\{a,b\})$ by $a\mapsto a^2$ and $b\mapsto b$. Observe that $\phi^i([b^{-i}ab^i,a])=[b^{-i}a^{2^i}b^i,a^{2^i}]\approx [a^{3^i},a^{2^i}]=1$, so that $[b^{-i}ab^i,a]\in N_i$. If $[b^{-i}ab^i,a]\in N_{i-1}$ then $\phi^{i-1}([b^{-i}ab^i,a])\in N_0$ where $N_0=N(b^{-1}a^2ba^{-3})\triangleleft F(\{a,b\})$. But $$\phi^{i-1}([b^{-i}ab^i,a])=[b^{-i}a^{2^{i-1}}b^{i}, a^{2^{i-1}}]\approx [b^{-1}a^{3^{i-1}}b,a^{2^{i-1}}]$$ a reduced word of syllable length $8$ in (the HNN extension) $\langle a,b:b^{-1}a^2b=a^3\rangle$. In particular, the following ascending HNN extension presentation with stable letter $t$ and base group generated by $\{a,b\}$ has infinite depth: $$\langle t,a,b:b^{-1}a^2b=a^3, t^{-1}at=a^2, t^{-1}bt=b\rangle.$$ Since $\phi_1$ is an isomorphism (see $(\ast)$), the subgroup $A=\langle a,b\rangle$ is normal in $G$ and the main theorem of M. Mihalik’s paper [@M1] implies $G$ is semistable at $\infty$. So this particular approach cannot yield a non-semistable at $\infty$ ascending HNN extension of unbounded depth when $\phi_0$ is an epimorphism. The remainder of this section is of general interest in understanding presentations of ascending HNN extensions, but not important to the proof of our main theorem. \[R2\] Consider a homomorphism $\phi: F(\mathcal A)\to F(\mathcal A)$ for $\mathcal A$ finite, where $\phi$ has non-trivial kernel. One might wonder whether it is possible to have such a homomorphism so that (even with $\mathcal R=\emptyset$) the presentation $\langle t, \mathcal A: t^{-1}at=\phi(a) \hbox{ for } a\in \mathcal A\rangle$ does not have finite depth. That is, is it possible that the ascending collection of normal subgroups of $F(\mathcal A) $ defined by $N_k=\langle \cup _{i=1}^kker(\phi^{i})\rangle$ does not stabilize? The answer is no. Consider the sequence $F(\mathcal A)\to \phi(F(\mathcal A))\to \phi^2(F(\mathcal A))\to \cdots$ of epimorphisms where each map is $\phi$. 
For $i>0$, $\phi^i(F(\mathcal A))$ is a free group of rank $\leq rank(\phi^{i-1}(F(\mathcal A)))$. So, for some integer $m\geq 0$, $rank (\phi^m(F(\mathcal A)))=rank (\phi^{m+1}(F(\mathcal A)))$. As finitely generated free groups are Hopfian, the epimorphism $\phi:\phi^m(F(\mathcal A))\to \phi^{m+1}(F(\mathcal A))$ is an isomorphism and $ker(\phi^m)=ker(\phi^{m+1})$. Next we show that any homomorphism $\phi :F(\mathcal A)\to F(\mathcal A)$ defining an ascending HNN extension can be replaced by a monomorphism. Suppose $\mathcal A$ is a finite set, $\mathcal R$ is a finite subset of the free group $F(\mathcal A)$ and $\phi:F(\mathcal A)\to F(\mathcal A)$ is a homomorphism. Then there is a finite set $\mathcal B$, a finite set $\mathcal R'\subset F(\mathcal B)$, a monomorphism $\phi':F(\mathcal B)\to F(\mathcal B)$ and an isomorphism of ascending HNN extensions: $$\langle t,\mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for }a\in \mathcal A\rangle{\buildrel \rho\over \longrightarrow } \langle t,\mathcal B: \mathcal R', t^{-1}bt=\phi'(b) \hbox{ for } b\in \mathcal B\rangle$$ Furthermore, if $$q_{\mathcal A}:F(\mathcal A\cup \{t\})\to \langle t,\mathcal A: \mathcal R, t^{-1}at=\phi(a)\hbox{ for }a\in \mathcal A\rangle\hbox{ and}$$ $$q_{\mathcal B}:F(\mathcal B\cup \{t\})\to \langle t,\mathcal B: \mathcal R', t^{-1}bt=\phi'(b) \hbox{ for } b\in \mathcal B\rangle$$ are the natural projections, then there is an epimorphism $$\rho':F(\mathcal A\cup \{t\})\to F(\mathcal B\cup \{t\})\hbox{ such that:}$$ 1\) $\rho'(t)=t$ 2\) $q_{\mathcal B}\circ \rho'=\rho\circ q_{\mathcal A}$ and 3\) $\rho'(\mathcal R)=\mathcal R'$, (for $N_G(\mathcal R)$ the normal closure of $\mathcal R$ in $G$) $\ \ \ \rho'(N_{F(\mathcal A)}(\mathcal R))= N_{F(\mathcal B)}(\mathcal R')$ and $\rho'(N_{F(\mathcal A\cup \{t\})}(\mathcal R))= N_{F(\mathcal B\cup \{t\})}(\mathcal R')$ In particular, the following diagram commutes: $$F(\mathcal A\cup \{t\}) {\buildrel \rho'\over \longrightarrow} F(\mathcal B\cup 
\{t\})$$ $$\downarrow q_{\mathcal A} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \downarrow q_{\mathcal B}$$ $$\langle t,\mathcal A: \mathcal R, t^{-1}at=\phi(a)\rangle {\buildrel \rho\over \longrightarrow} \langle t,\mathcal B: \mathcal R', t^{-1}bt=\phi'(b)\rangle$$ (Basically $\rho$ is conjugation by $t^m$ for some $m\geq 0$.) Since free groups are Hopfian, there is an integer $m\geq 0$ such that $\phi:\phi^m(F(\mathcal A))\to \phi^{m+1}(F(\mathcal A))$ is an isomorphism (see Remark \[R2\]). Let $\mathcal B$ be a finite set of free generators for $\phi^m(F(\mathcal A))$ (so $F(\mathcal B)\equiv \phi^m(F(\mathcal A))$) and let $\phi':F(\mathcal B)\to F(\mathcal B)$ be defined so that $\phi'(b)$ is a $\mathcal B$-word for $\phi(b)$ for each $b\in \mathcal B$. Note that $\phi'$ is a monomorphism, since $\phi:\phi^m(F(\mathcal A))\to\phi^{m+1}(F(\mathcal A))<F(\mathcal B)$ is a monomorphism. Define $\rho':F(\mathcal A\cup \{t\})\to F(\mathcal B\cup \{t\})$ such that $\rho'(t)=t$ and $\rho'(a)=\phi^m(a)$ for all $a\in \mathcal A$. Note that $\rho'$ is an epimorphism. Let $\mathcal R'=\phi^m(\mathcal R)$ (written as $\mathcal B$-words); then 3) holds. Since $\rho'$ maps each relator of $\langle t,\mathcal A: \mathcal R, t^{-1}at=\phi(a)\rangle$ to a relator of $\langle t,\mathcal B: \mathcal R', t^{-1}bt=\phi'(b)\rangle$, the homomorphism $\rho$ can be defined so that 2) holds. Since $\rho'$ is an epimorphism, $\rho$ is an epimorphism. To show $\rho$ is an isomorphism, it remains to show that if $w\in ker(\rho q_{\mathcal A})$ then $w\in ker (q_{\mathcal A})$ (i.e. $\rho$ is a monomorphism). First observe that the exponent sum of $t$ in $w$ is zero. Next observe that $w\in ker(\rho q_{\mathcal A})$ (respectively $w\in ker(q_{\mathcal A})$) iff $t^{-j}wt^j\in ker(\rho q_{\mathcal A})$ (respectively $t^{-j}wt^j\in ker( q_{\mathcal A})$) for every integer $j\geq 0$. 
Select a positive integer $j$ such that any initial segment of $t^{-j}wt^j$ has $t$-exponent sum $\leq 0$. In $F(\mathcal A\cup\{t\})$, $w=(t^{-n_1}w_1t^{n_1})\cdots (t^{-n_s}w_st^{n_s})$ where $n_i\geq 0$ and each $w_i\in F(\mathcal A)$. Let $\bar w \equiv \phi^{n_1}(w_1)\cdots \phi^{n_s}(w_s)(\in F(\mathcal A))$. Now, $q_{\mathcal A}(w)=q_{\mathcal A}(\bar w)$ and $\bar w\in ker(q_{\mathcal B}\rho')$. Note that $\rho'(\bar w)=\phi^m(\bar w)\in ker(q_{\mathcal B}) (<F(\mathcal B))$. By Theorem \[rel\], $\phi^m(\bar w)\in (\phi')^{-k}(N(\cup_{i=0}^{\infty}(\phi')^i(\mathcal R')))$ for some integer $k\geq 0$. By 3) we have $\phi^m(\bar w)\in \phi^{-k}(N(\cup_{i=0}^{\infty}\phi^i(\phi^m(\mathcal R))))$ and so $\bar w\in \phi^{-k-m}(N(\cup_{i=m}^{\infty}\phi^i(\mathcal R)))$. By Theorem \[rel\], $\bar w$ (and hence $w$) is an element of $ker (q_{\mathcal A})$. Bounded Depth HNN extensions are semistable at $\infty$ {#Smain} ======================================================== The group $G$ is an ascending HNN extension of a finitely generated group $A$ and $G$ has bounded depth. We use the notation of $\S$\[basess\]. Let $\mathcal A=\{a_1,\ldots , a_n\}$ be a finite generating set for $A$ and $$\mathcal P\equiv \langle t,\mathcal A:\mathcal R,t^{-1}at=\phi(a) \hbox { for all } a\in \mathcal A\rangle$$ a finite presentation for $G$, where each element of $\mathcal R$ is an $\mathcal A$-word. Let $X$ be the Cayley 2-complex for this presentation, and $\Lambda$ be the Cayley graph of $A$ with generating set $\mathcal A$. We assume $\ast\in \Lambda\subset X$ where $\ast$ is the identity vertex for $X$. We must show condition (2) of Theorem \[GGM\] is satisfied for each compact set $C$ in $X$. We will show that there is an integer $N(C)\geq 0$ (defined in Lemma \[below\]) such that $t^NAt^{-N}$ is co-semistable at $\infty$ in $X$ with respect to $C$. 
This requires that we find a compact set $D(C)$ such that loops in $X-(t^NAt^{-N} ) D(C)$ can be pushed to infinity by proper homotopies in $X-C$. In every instance $D(C)$ will have the form $t^{N(C)}\{\ast, t^{-1},\ldots ,t^{-M}\}$ for some integer $M$ that depends on $C$ and the depth of the presentation $\mathcal P$ for $G$. \[stcoax\] In the case that $A$ is finitely presented, it is interesting to note that our proof will show that for our choice of $D(C)$, each loop in $X-(t^{N}At^{-N})D$ is homotopically trivial in $X-(t^{N}At^{-N}) D$ (see Theorem \[FP\]). This sort of behavior is related to the main theorems of [@W92], [@GGM16] and [@GG12], and is called [*strongly coaxial*]{} when $A$ is infinite cyclic. Recall that $P:X\to \mathbb R$ is such that for each vertex $v\in G\subset X$, $P(v)$ is the exponent sum of $t$ in $v$ and we say $v$ is in [*level*]{} $P(v)$. The next lemma is a direct consequence of the normal form for elements of $G$ (each element $g\in G$ has the form $t^nat^{-m}$ for some $n, m\geq 0$ and $a\in A$). \[below\] Suppose $C$ is a finite subcomplex of $X$. For each vertex $v\in C$, write $$v=t^{n(v)}a_vt^{-m(v)}\hbox{ for }a_v\in A\hbox{ and }n(v), m(v)\geq 0, \ and$$ $$N(C)=max\{n(v):v\in C\}\hbox{ and }M(v,C)=N(C)-n(v)+m(v)(\geq 0).$$ Then $vt^{M(v,C)}\in t^{N(C)}A$. Note that by definition, $N(C)-M(v,C)=n(v)-m(v)=P(v)$. For $v\in C$, $$v=t^{N(C)}(t^{n(v)-N(C)}a_vt^{N(C)-n(v)})t^{-M(v,C)}.$$ If $a'_v=t^{n(v)-N(C)}a_vt^{N(C)-n(v)}(\in \phi^{N(C)-n(v)}(A)<A)$ then $vt^{M(v,C)}=t^{N(C)}a'_v$. Geometrically this says that for each vertex $v$ of $C$, the edge path at $v$ with each edge labeled $t$ and of length $M(v,C)$ ends in $t^{N(C)}A$. \[below2\] Suppose $C$ is a finite subcomplex of $X$. 
Let $$M(C)=max\{M(v,C): v\hbox{ is a vertex of } C\}.$$ Then for each vertex $v\in C$ $$v\in (t^{N(C)}At^{-N(C)})(t^{N(C)}\{\ast, t^{-1},\ldots ,t^{-M(C)}\}) \ \hbox{and}$$ for positive integers $M,N$ and $w\in (t^{N}At^{-N})(t^{N}\{\ast, t^{-1},\ldots ,t^{-M}\})$ we have $$wA\subset (t^{N}At^{-N})(t^{N}\{\ast, t^{-1},\ldots ,t^{-M}\}).$$ The first conclusion follows from Lemma \[below\]. Note that $w=t^{N}at^{-m}$ for some $a\in A$ and $m\in \{0,\ldots, M\}$. Then $wA\subset t^{N}a(t^{-m}At^m)t^{-m}$ and as $t^{-m}At^m\subset A$: $$wA\subset t^{N}At^{-m}\subset (t^{N}At^{-N})(t^{N}\{\ast, t^{-1},\ldots ,t^{-M}\}).$$ For integers $N,M\geq 0$ define $ D(N,M)\equiv t^NA\{\ast,t^{-1},\ldots, t^{-M}\}$. If $C$ is compact in $X$ and $B$ is the bounded depth of our ascending HNN presentation $\mathcal P$, we will use the set $D(N(C),M(C)+B+1)$ to play the role of the compact set $D$ in $X$ and $t^{N(C)}At^{-N(C)}$ to play the role of $J$ when applying Theorem \[GGM\]. First we must understand the set $(t^NAt^{-N})D(N,M)=t^NA\{\ast, t^{-1},\ldots, t^{-M}\}$ and a few geometric definitions will help. If $v,w\in G$, we say the coset $wA$ is [*$n$ levels directly below*]{} $vA$ if there is an edge path of length $n$ with each edge labeled $t$ from a vertex of $wA$ to a vertex of $vA$. Note that if $wA$ is $n$ levels directly below $vA$ then for every vertex $u$ of $wA$, the edge path at $u$ of length $n$ and with each edge labeled $t$ ends in $vA$. We say $vA$ is [*$n$ levels directly above*]{} $wA$. Any coset $wA$ has exactly one coset $n(\geq 0)$ levels directly above it, but the cosets one level directly below $vA$ are in 1-1 correspondence with the cosets of $A$ in $G$. This means \[directly\] The set $D(N,M)=t^NA\{\ast,t^{-1},\ldots, t^{-M}\}$ is the union of cosets $vA$ that are $n$ levels directly below $t^NA$ for $n\in\{0,1,\ldots, M\}$. 
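The level function $P$ and the normal form $t^nat^{-m}$ that underlie these definitions can be made concrete in the simplest ascending HNN extension $BS(1,2)=\langle t,a:t^{-1}at=a^2\rangle$. The following Python sketch is an illustration only (not the paper's machinery); the rewriting rules $at=ta^2$ and $t^{-1}a=a^2t^{-1}$ both come from the single conjugation relation. Words are strings with 'T' denoting $t^{-1}$ and 'A' denoting $a^{-1}$.

```python
# Hedged sketch: normal forms t^n w t^{-m} in BS(1,2) = <t, a : t^{-1}at = a^2>.
def normal_form(word):
    """Push t's left and t^{-1}'s right using the conjugation relation,
    freely cancelling along the way, until no rule applies."""
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w) - 1):
            x, y = w[i], w[i + 1]
            if (x, y) in (("t", "T"), ("T", "t"), ("a", "A"), ("A", "a")):
                del w[i:i + 2]          # free cancellation
                changed = True
                break
            if x in "aA" and y == "t":  # a t = t a^2 : replace x t by t x x
                w[i:i + 2] = ["t", x, x]
                changed = True
                break
            if x == "T" and y in "aA":  # t^{-1} a = a^2 t^{-1}
                w[i:i + 2] = [y, y, "T"]
                changed = True
                break
    return "".join(w)

def level(word):
    """P(v): the exponent sum of t in a word representing the vertex v."""
    return word.count("t") - word.count("T")

print(normal_form("ataT"))  # a t a t^{-1} = t a^3 t^{-1}: prints 'taaaT'
print(level("taaaT"))       # prints 0
```

For instance, `normal_form("Tat")` returns `"aa"`, reflecting that conjugation by the stable letter realizes $\phi$; and moving a vertex along an edge path labeled $t^k$ raises its level by $k$, which is the "directly below/above" relation on cosets described above.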
[**Note.**]{} In order to avoid confusion we may use the notation $H\cdot E$ instead of $HE$ when $H$ is a subgroup of $G$ and $E$ a subset of $X$. Let $Q(M)=\{\ast, t^{-1}, t^{-2},\ldots, t^{-M}\}$ ($M\geq 0$) and notice that the next lemma says that it is easy to check if a vertex $v$ of $X$ is in either $A \cdot Q(M)$, $K_0$ (a special component of $X-A\cdot Q(M)$) or a component of $X-A \cdot Q(M)$ other than $K_0$. If $v$ is in a level $>0$ then $v\in K_0$. If $v$ is in level $0$ through $-M$ then $v$ is in $A\cdot Q$ if the edge path from $v$ to level $0$, with each edge labeled $t$, ends in $A$ (i.e. $vA$ is $-P(v)$ levels directly below A); and $v$ is in $K_0$ otherwise. If $v$ is in a level $<-M$, then $v$ is in $K_0$ if the edge path from $v$ to level $0$, with each edge labeled $t$, does not end in $A$; and otherwise, $v$ belongs to a component of $X-A\cdot Q$ other than $K_0$. Note that $t^n\in K_0$ for all $n>0$, so that under the quotient of $X$ by $A$, the image of $K_0$ is not contained in a compact set. If $v\in K$ where $K$ is a component of $X-A\cdot Q$ other than $K_0$ then $vt^n\in K$ for all $n<0$, so under the quotient of $X$ by $A$, the image of $K$ is not contained in a compact set. Our terminology for this is that $K$ and $K_0$ are $A$-unbounded components of $X-A\cdot Q$. \[nbhd\] Let $Q(M)=\{\ast, t^{-1}, t^{-2},\ldots, t^{-M}\}$ for $M\geq 0$. Then 1\) $A\cdot Q(M)$ is the set of all vertices $v\in X$ such that $P(v)\in \{-M,\ldots, 0\}$ and $vt^{-P(v)}\in A$. Furthermore, if $v\in A\cdot Q(M)$ then $vA\subset A\cdot Q(M)$. 2\) $X-A\cdot Q(M)$ has an $A$-unbounded component $K_0$ with stabilizer $A$ and the vertex $v$ of $X-A\cdot Q(M)$ is in $K_0$ if and only if either $P(v)\geq -M$ or both $P(v)<-M$ and $vt^{-P(v)}\not\in A$, 3\) if $K$ is any component of $X-A\cdot Q(M)$ other than $K_0$, then $K$ is $A$-unbounded, and if $v$ is a vertex of $K$, then $P(v)<-M$ and $vt^{-P(v)}\in A$. 
Part 1): This part follows directly from Lemma \[directly\] (with $N=0$). Part 2): Let $K_0$ be the component of $X-A\cdot Q$ that contains the vertex $t$. Let $v$ be a vertex of $X$; then by normal forms, $v=t^lat^{-m}$ where $a\in A$ and $l,m\geq 0$. If $P(v)>0$, then $l>m$ and the normal form for $v$ defines an edge path from $t$ to $v$ in levels 1 and above, and hence avoiding $A\cdot Q$. So if $P(v)>0$, then $v\in K_0$. Note that $P(at)=1$ for all $a\in A$, so that $A$ stabilizes $K_0$. Suppose $v\in X-A\cdot Q$ and $P(v)\in \{-M,\ldots, 0\}$; then by part 1), $vt^{-P(v)}\not\in A$ and no point of the edge path beginning at $v$ with labeling $t^{-P(v)}$ is a point of $A\cdot Q$. Since $P(vt^{-P(v)+1})=1$, the edge path at $v$ with labeling $t^{-P(v)+1}$ avoids $A\cdot Q$ and ends at a point of $K_0$. So if $v\in X-A\cdot Q$ and $P(v)\in \{-M,\ldots,0\}$ then $v\in K_0$. Suppose $v\in X-A\cdot Q$ and $P(v)<-M$. Note that $P(vt^{-P(v)})=0$. If $vt^{-P(v)}\not \in A$, then we have already shown that $vt^{-P(v)}\in K_0$, and by part 1), no point of the path with labeling $t^{-P(v)}$ at $v$ intersects $A\cdot Q$. Hence $v\in K_0$. For the converse, suppose $v\in K_0$ and $P(v)<-M$. We must show $vt^{-P(v)}\not\in A$. Let $\alpha$ be an edge path in $X-A\cdot Q$ from $t$ to $v$. Let $\beta$ be a tail of $\alpha$ where $w$, the initial point of $\beta$, is the last point of $\alpha$ with $P(w)=-M$. The first edge of $\beta$ is labeled $t^{-1}$. Note that conjugation relations allow us to move each $A$-edge of $\beta$ up to level $-M$, so there is an edge path from $w$ to $v$ labeled $(x_1,\ldots, x_i, t^{-k})$ where $k> 0$ and each $x_j\in \{a_1,\ldots, a_n\}^{\pm 1}$. Hence $vt^k\in wA$ and $P(vt^k)=-M$. By Part 1), $w\not\in A\cdot Q$ implies $wA \cap A\cdot Q=\emptyset$, so $vt^k\not\in A\cdot Q$. Again by Part 1), $vt^kt^{-P(vt^k)}\not\in A$. Then $vt^{-P(v)}=vt^kt^{-P(v)-k}=vt^kt^{-P(vt^k)}\not\in A$. This completes part 2). 
Part 3): If $v\in K\ne K_0$ then by Part 2), $P(v)<-M$ and $vt^{-P(v)}\in A$. We need a slightly stronger version of Lemma \[nbhd\]. Recall that $Q(M)=\{\ast, t^{-1}, t^{-2},\ldots, t^{-M}\}$. Then $$t^NA\cdot Q(M)=t^NAt^{-N} (t^N(Q(M))).$$ Observe that for any integer $m\geq 0$ the stabilizer of $t^m\Lambda$ is $t^mAt^{-m}$. \[nbhd2\] Let $M,N\geq 0$ be integers: 1\) The set $t^NA\cdot Q(M)(=D(N,M))$ consists of the vertices $v\in X$ such that $P(v)\in \{N,N-1,\ldots,N-M\}$ and $vt^{N-P(v)}\in t^NA$. Furthermore, if $v\in t^NA\cdot Q(M)$ then $vA\subset t^NA\cdot Q(M)$. 2\) Let $K_0$ be the component of $X-A\cdot Q(M)$ described by part 2 of Lemma \[nbhd\]. Then $t^NK_0$ is a $(t^NAt^{-N})$-unbounded component of $X-t^NA\cdot Q(M)$ with stabilizer $t^NAt^{-N}$, and the vertex $v$ of $X-t^NA\cdot Q(M)$ is in $t^NK_0$ if and only if either $P(v)\geq N-M$ or both $P(v)<N-M$ and $vt^{N-P(v)}\not\in t^NA$, 3\) if $K$ is any component of $X-A\cdot Q(M)$ other than $K_0$, then $t^NK$ is a $(t^NAt^{-N})$-unbounded component of $X-t^NA\cdot Q(M)$, and if $v$ is a vertex of $t^NK$, then $P(v)<N-M$ and $vt^{N-P(v)}\in t^NA$. Part 1): If $v\in t^NA\cdot Q(M)$, then $P(v)\in \{N,N-1,\ldots, N-M\}$. Note that $P(t^{-N}v)=-N+P(v)\in \{-M,\ldots, 0\}$. Lemma \[nbhd\] implies $t^{-N}v\in A\cdot Q$ if and only if $t^{-N}vt^{-P(t^{-N}v)}\in A$ if and only if $vt^{N-P(v)}\in t^NA$. Furthermore if $v\in t^NA\cdot Q(M)$ then $t^{-N}v\in A\cdot Q(M)$ and by Lemma \[nbhd\], $t^{-N}vA\subset A\cdot Q(M)$ so that $vA\subset t^NA\cdot Q(M)$. Part 2): By Lemma \[nbhd\], $t^NK_0$ is a component of $X-t^N(A\cdot Q(M))$. Since $t\in K_0$, $t^{N+1}\in t^NK_0$, and so the proper ray at $t^{N+1}$ with all edge labels $t$ belongs to $t^NK_0$. In particular, $t^NK_0$ is $t^NAt^{-N}$-unbounded. Since $A$ stabilizes $A\cdot Q(M)$, $t^NAt^{-N}$ stabilizes $t^NA\cdot Q(M)$. 
The vertex $v$ of $X$ belongs to $t^NK_0$ if and only if $t^{-N}v\in K_0$, which (by Lemma \[nbhd\]) holds if and only if $P(t^{-N}v)\geq -M$ or both $P(t^{-N}v)< -M$ and $t^{-N}vt^{-P(t^{-N}v)}\not\in A$, if and only if $P(v)\geq N-M$ or both $P(v)<N-M$ and $vt^{N-P(v)}\not\in t^NA$. Part 3): Suppose $v$ is a vertex of $t^NK$; then $t^{-N}v\in K$. By Lemma \[nbhd\], $P(t^{-N}v)<-M$ (so $P(v)<N-M$) and $t^{-N}vt^{-P(t^{-N}v)}\in A$ (so $vt^{N-P(v)}\in t^NA$). Geometrically, the only difference between Lemma \[nbhd2\] and Lemma \[nbhd\] is that in order to check whether a vertex $v$ in a level of $X$ less than $N$ belongs to $t^NA\cdot Q(M)$, to $t^NK_0$, or to $t^NK$ for $K$ a component of $X-A\cdot Q(M)$ different from $K_0$, one simply checks whether the edge path at $v$, with each edge labeled $t$ and ending in level $N$, ends in $t^NA$ or not. It is also important to observe the following remark. \[coset\] For any integers $M,N\geq 0$, the set $t^NA\cdot Q(M)(=D(N,M))$ and any component of $X-D(N,M)$ are each a union of cosets $vA$. \[up\] Suppose $M,N\geq 0$ are integers and $v$ is a vertex of the component $t^NK_0$ of $X-t^NA\cdot Q(M)$. Then for any integer $n\geq 0$, $(vt^nA)\cap t^NA\cdot Q(M)=\emptyset$. By 1) of Lemma \[nbhd2\], it suffices to show that $vt^n\not\in t^NA\cdot Q(M)$. But this follows directly from parts 1) and 2) of Lemma \[nbhd2\]. $(\ast )$ From this point on we assume the presentation $\mathcal P$ has bounded depth $B\geq 0$. \[killD\] If $\alpha$ is an edge path loop in $X$ and $im(P(\alpha))\subset (-\infty,L]$, then $\alpha$ is homotopically trivial by a homotopy $H$ such that $im(P(H))\subset (-\infty, L+B]$. Using only conjugation 2-cells, $\alpha$ is homotopic (by a homotopy $H_1$) to an edge path loop $\beta$, each of whose vertices is in level $L$. In particular, each edge of $\beta$ is labeled by an element of $\mathcal A$ and $im(P(H_1))\subset (-\infty, L]$. 
The word $w$ determined by the edge labeling of $\beta$ is in the kernel of the epimorphism $p:F(\mathcal A)\to A$. So $w\in\cup_{i=0}^{B}\phi^{-i}(N_0(\mathcal R,\phi))$. By Remark \[loopkill\], the loop $\beta$ (and hence $\alpha$) is homotopically trivial by a homotopy $H$ such that $im(P(H))\subset (-\infty, L+B]$. \[hup\] Suppose $M, N\geq 0$ are integers, $\alpha$ is a loop in $X-t^NA\cdot Q(M)$ and $B$ is the bounded depth of the presentation $\mathcal P$. 1\) If $\alpha$ has image in a component of $X-t^NA\cdot Q(M)$ other than $t^NK_0$, then $\alpha$ is homotopically trivial by a homotopy $H$ such that $P(H)$ has image in $(-\infty, B+N-M]$, 2\) if $\alpha$ has image in $t^NK_0$, $v$ is a vertex of $\alpha$ and $r_v$ is the proper edge path ray at $v$ with each edge labeled $t$, then there is a proper homotopy $H:[0,\infty)\times [0,1]\to t^NK_0$ where $H(x,0)=H(x,1)=r_v(x)$, and $H(0,y)=\alpha (y)$. Part 1): By 3) of Lemma \[nbhd2\], $im(P(\alpha))\subset (-\infty, N-M]$. Lemma \[killD\] finishes part 1). Part 2): Let $H$ be the homotopy that strings together the homotopies $H_e$ of Lemma \[push\] for each $\mathcal A$-edge $e$ of $\alpha$. The image of $H$ avoids $t^NA\cdot Q(M)$ by Lemma \[up\] and so is in $t^NK_0$. The homotopy $H$ is proper since it is a combination of finitely many proper homotopies. By Lemma \[below2\], if $C$ is a compact subset of $X$, there are integers $M(C)$ and $N(C)$ such that $C\subset t^{N(C)}A\cdot Q(M(C))$. \[FP\] Suppose $G$ is an ascending HNN extension of the finitely presented group $A$ and $X$ is the Cayley 2-complex for the HNN presentation with stable letter $t$ and base $A$ (with a finite presentation of $A$ as a sub-presentation). If $M, N\geq 0$ are integers and $\alpha$ is a loop in $X-t^NA\cdot Q(M)$ $(=X-t^NAt^{-N}\cdot (t^NQ(M)))$ then $\alpha$ is homotopically trivial in $X-t^NA\cdot Q(M)$. We present the case where $N=0$ as all others are completely analogous. 
Let $\Lambda$ be the Cayley 2-complex for $A$, determined by the presentation of $A$ within our HNN presentation of $G$. If $K$ is a component of $X-A\cdot Q$ other than $K_0$ and $\alpha$ is an edge path loop in $K$, then each vertex $v$ of $\alpha$ is such that $P(v)<-M$. Using conjugation relations, $\alpha$ is homotopic in $K$ to an $A$-loop $\alpha_1$ in level $-M-1$. Then $\alpha_1$ lies in a copy of $\Lambda$ in level $-M-1$ and so is homotopically trivial in level $-M-1$. If $\alpha$ is an edge path loop in $K_0$, then by Lemma \[up\], conjugation relations can be used to show that $\alpha$ is homotopic to a loop $\alpha_1$ in a single level and this homotopy avoids $A\cdot Q$. Lemma \[up\] also implies that $\alpha_1$ is in a copy of $\Lambda$ that avoids $A\cdot Q$. As $\alpha_1$ is homotopically trivial in that copy of $\Lambda$, $\alpha_1$ (and hence $\alpha$) is homotopically trivial in $X-A\cdot Q$. Suppose $M,N\geq 0$ are integers and $s$ is a proper edge path ray in $X-t^NA\cdot Q(M)$ with initial vertex $v\in t^NK_0$. If $q$ is the quotient of $X$ by the action of $t^NAt^{-N}$ and $qs$ has image in a compact subset of $(t^NAt^{-N})\backslash X$ (so $s$ is $t^NAt^{-N}$-bounded), then there is an integer $K\geq 0$ such that each vertex of $s$ is within edge path distance $K$ of $t^NA$, and $Ps$ has image in the closed interval $[N-K, N+K]$. \[corner\] Suppose $M,N\geq 0$ are integers, $s$ is a proper edge path ray in the $t^NK_0$ component of $X-t^NA\cdot Q(M)$ and $s(0) =v$. Let $r_v$ be the proper edge path ray at $v$, each of whose edges is labeled $t$. If $Ps$ has image in a closed interval then $s$ is properly homotopic to $r_v$ by a homotopy with image in $t^NK_0$. Assume that the image of $Ps$ is $[L,L']$. By Lemma \[up\], one can use conjugation relations to slide each $A$-edge of $s$ along $t$-edges to level $L'$, by a homotopy with image in $t^NK_0$. 
So $s$ is properly homotopic to $s'$, the resulting proper ray which (after removing any backtracking edges $(t,t^{-1})$ or $(t^{-1},t)$) is a proper $\mathcal A$-ray. Let $r'$ be the proper edge path ray at the initial point of $s'$ with all edges labeled $t$ (so $r'$ is a sub-ray of $r_v$). Let $H$ be the proper homotopy of $s'$ to $r'$ defined in Lemma \[string\]. By Lemma \[up\], $H$ has image in $t^NK_0$. [**(of Theorem \[mainbd\])**]{} Let $X$ be the Cayley 2-complex of $\mathcal P$. By Proposition \[strongss\], $t^NAt^{-N}$ is semistable at $\infty$ in $X$ for all $N\geq 0$, and in $\S$\[ss\] we reduced the proof of Theorem \[mainbd\] to showing that for each compact set $C$ in $X$ there is an integer $N\geq 0$ such that $t^NAt^{-N}$ is co-semistable at $\infty$ in $X$ with respect to $C$. That means: For any finite subcomplex $C$ of $X$ there is an integer $N\geq 0$ and a compact set $D$ such that for any proper $t^NAt^{-N}$-bounded ray $s$ in $X-t^NAt^{-N}D$ and loop $\alpha$ in $X-t^NAt^{-N}D$ such that $\alpha(0)=s(0)$, there is a proper homotopy $H:[0,1]\times [0,\infty)\to X-C$ such that $H(0,t)=H(1,t)=s(t)$ and $H(x,0)=\alpha(x)$. Start with a finite subcomplex $C$ of $X$. The integer $N(C)\geq 0$ will play the part of $N$. Recall that $B$ is the bounded depth of the presentation $\mathcal P$. Let $$D=t^{N(C)} Q(M(C)+B+1).$$ Recall $Q(M)=\{\ast, t^{-1},\ldots ,t^{-M}\}$. By Lemma \[below2\], for each vertex $v\in C$: $$vA\subset t^{N(C)}At^{-N(C)}(t^{N(C)}Q(M(C)))$$ $$\subset t^{N(C)}At^{-N(C)}(t^{N(C)}Q(M(C)+B+1))=$$ $$t^{N(C)}At^{-N(C)}D=t^{N(C)}A\cdot Q(M(C)+B+1).$$ If $v\in C$, then $v\in t^{N(C)}A\cdot Q(M(C))$ so that $P(v)\in [N(C)-M(C),N(C)]$. Suppose $\alpha$ is a loop in $X-t^{N(C)}At^{-N(C)}D$. Then $\alpha$ is either in $t^{N(C)} K_0$, where $K_0$ is the special component of $X-A\cdot Q(M(C)+B+1)$ (described in part 2) of Lemma \[nbhd\]), or $\alpha$ is in $t^{N(C)}K$ for some component $K$ of $X-A\cdot Q(M(C)+B+1)$ other than $K_0$. 
If $\alpha$ belongs to $t^{N(C)}K$, then by part 1) of Lemma \[hup\], $\alpha$ is homotopically trivial by a homotopy $H$ such that $$im(P(H))\subset (-\infty, B+N(C)-(M(C)+B+1)]=(-\infty, N(C)-M(C)-1].$$ Since $P(C)\subset [N(C)-M(C),N(C)]$, the homotopy $H$ kills $\alpha$ in $X-C$ (actually in $X-A\cdot C$). If $\alpha$ is in $t^{N(C)}K_0$, and $s$ is a $t^{N(C)}Dt^{-N(C)}$-bounded proper ray in $t^{N(C)}K_0$ such that $\alpha(0)=s(0)$, then by Lemma \[corner\], $s$ is properly homotopic (rel$\{s(0)\}$) to $r$ the proper edge path ray at $s(0)$, each of whose edges is labeled $t$, by a homotopy with image in $t^{N(C)}K_0\subset X-C$. Combining the homotopy of $r$ to $s$ with one given by part 2) of Lemma \[hup\] (also in $t^{N(C)}K_0$) completes the proof. Michael Mihalik Department of Mathematics, Vanderbilt University, Nashville, TN 37240 email: michael.l.mihalik@vanderbilt.edu
--- author: - Adrian Flanagan - Were Oyomno - Alexander Grigorievskiy - Kuan Eeik Tan - 'Suleiman A. Khan' - 'Muhammad Ammad-ud-din' bibliography: - 'ref.bib' title: 'Federated Multi-view Matrix Factorization for Personalized Recommendations' ---
--- author: - Marc Bonino bibliography: - '/home2/bonino/biblio.bib' title: 'A topological version of the Poincaré-Birkhoff theorem with two fixed points' --- \[section\] \[thme\][Proposition]{} \[thme\][Lemma]{} \[thme\][Remark]{} \[thme\][Definition]{} \[thme\][Corollary]{} \[thme\][Question]{} Laboratoire Analyse, Géométrie et Applications (LAGA)\ CNRS UMR 7539\ Université Paris 13\ 99 Avenue J.B. Clément\ 93430 Villetaneuse (France)\ e-mail: **Abstract.** The main result of this paper gives a topological property satisfied by any homeomorphism of the annulus ${\mathbb{A}}={\mathbb{S}^1}\times [-1,1]$ isotopic to the identity and with at most one fixed point. This generalizes the classical Poincaré-Birkhoff theorem because this property certainly does not hold for an area preserving homeomorphism $h$ of ${\mathbb{A}}$ with the usual boundary twist condition. We also have two corollaries of this result. The first one shows in particular that the boundary twist assumption may be weakened by demanding that the homeomorphism $h$ has a lift $H$ to the strip ${\widetilde{\mathbb{A}}}= {\mathbb{R}}\times [-1,1]$ possessing both a forward orbit unbounded on the right and a forward orbit unbounded on the left. As a second corollary we get a new proof of a version of the Conley-Zehnder theorem in ${\mathbb{A}}$: if a homeomorphism of ${\mathbb{A}}$ isotopic to the identity preserves the area and has mean rotation zero, then it possesses two fixed points.  \ **MSC 2000:** 37E30 37C25 Introduction {#section1} ============ Preliminaries {#section2} ============= Statement of the main results {#section3} ============================= Proof of Theorem \[t2\] {#section4} ======================= Relationship with previous works {#section5} ================================  \ **Acknowledgements.** I would like to thank Sylvain Crovisier for several conversations and for bringing to my attention Proposition 5.2 of [@Beguin/Crovisier/LeRoux:2006]. 
I also thank the referee for her/his careful reading of the manuscript.
--- author: - '<span style="font-variant:small-caps;">Mihailo Stojnic [^1]</span>' bibliography: - 'gscompyxRefs.bib' title: Fully bilinear generic and lifted random processes comparisons --- [**Abstract**]{} In our companion paper [@Stojnicgscomp16] we introduce a collection of fairly powerful statistical comparison results. They relate to a general comparison concept and an upgrade of it that we call the lifting procedure. Here we provide a different generic principle (which we call fully bilinear) that in certain cases turns out to be stronger than the corresponding one from [@Stojnicgscomp16]. Moreover, we also show how the principle that we introduce here can be pushed through the lifting machinery of [@Stojnicgscomp16]. Finally, as was the case in [@Stojnicgscomp16], here we also show how the well-known Slepian’s max and Gordon’s minmax comparison principles can be obtained as special cases of the mechanisms that we present here. We also create their lifted upgrades which happen to be stronger than the corresponding ones in [@Stojnicgscomp16]. A fairly large collection of results obtained through numerical experiments is also provided. It is observed that these results are in excellent agreement with what the theory predicts. [**Index Terms: Random processes; comparison principles, lifting**]{}. Introduction {#sec:back} ============ The main topic of this paper is random process comparisons. This topic has been studied for quite some time and many excellent results were obtained in various directions over the last half a century. In our view the major highlights that have found a large spectrum of applications are the Slepian’s max [@Slep62] and the Gordon’s minmax [@Gordon85] principles (see also [@Sudakov71; @Fernique74; @Fernique75; @Kahane86]). The list of applications in various fields is of course pretty much endless. 
As comparison principles are also the main topic of our companion paper [@Stojnicgscomp16] we will refrain from further detailing their importance and the history of their development (more on this can be found in e.g. [@Adler90; @Lifshits85; @LedTal91; @Tal05]). Instead, we here single out that, through studying the performance characterizations of many hard random optimization problems, we in recent years also fairly often utilized the comparison principles as the main probabilistic foundation (see, e.g. [@StojnicISIT2010binary; @StojnicCSetam09; @StojnicUpper10; @StojnicCSetamBlock09; @StojnicICASSP10knownsupp] and references therein). In fact, not only were our techniques strong enough to handle many of these problems, they also turned out to be capable of doing it on an ultimate precision level. On the other hand, some of the results that we initially created in e.g. [@StojnicISIT2010binary; @StojnicCSetam09; @StojnicUpper10; @StojnicCSetamBlock09; @StojnicICASSP10knownsupp], we later on managed to substantially upgrade (more on this can be found in, e.g. [@StojnicLiftStrSec13; @StojnicMoreSophHopBnds10; @StojnicRicBnds13] and references therein). The foundational blocks of these upgrades were actually rooted in core upgrades in the underlying random processes’ comparisons. As it will be rather clear on quite a few occasions throughout the paper, we view the Slepian’s max [@Slep62] and the Gordon’s minmax [@Gordon85] comparison principles as two of the most influential results not only in the comparison theory but pretty much in a large section of the general probability theory. Both of them are derived basically starting almost from the axioms and with very minimal prior knowledge (a fairly short line of work, e.g. [@Schlafli858; @Placket54; @Chover61], precedes Slepian’s on the one hand and almost nothing besides Slepian’s work precedes the direction of Gordon’s work on the other hand). 
In this paper we will deal with generic comparison principles that will not directly relate to the extrema of the random processes as, to a large degree, Slepian’s and Gordon’s works do. However, we will also show how easily a set of particularly useful forms of both of these classical achievements can be deduced from what we will present here. In our companion paper [@Stojnicgscomp16] we also introduce a generic comparison principle that can be simplified in certain scenarios to include the above-mentioned classical max and minmax forms. The mechanism that we introduce here is conceptually different and in certain cases of particular interest (such as dealing with the extrema of the random processes) it will produce a stronger set of results than those presented in [@Stojnicgscomp16]. Nonetheless, quite a few observations made in [@Stojnicgscomp16] will turn out to be of use here as well and we will try to follow the style of the presentation given in [@Stojnicgscomp16] so that all the similarities and differences are easier to see. Along the same lines and following in the footsteps of [@Stojnicgscomp16], we will split the presentation into two main parts: 1) the first part where we will discuss a generic comparison principle (to which we will refer as fully bilinear) and its connections with the well-known Slepian’s max and Gordon’s minmax principles; and 2) the second part where we will discuss a way to upgrade these generic methods through a lifting procedure similar to the one that we consider in [@Stojnicgscomp16]. 
A bilinear comparison form {#sec:gencon} ========================== We start with two given sets, say set $\calX=\{\x^{(1)},\x^{(2)},\dots,\x^{(l)}\}$, where $\x^{(i)}\in \mR^n,1\leq i\leq l$, and set $\calY=\{\y^{(1)},\y^{(2)},\dots,\y^{(l)}\}$, where $\y^{(i)}\in \mR^m,1\leq i\leq l$, and consider the following function $$\begin{aligned} \label{eq:genanal1} f(G,u^{(4)},\calX,\calY,\beta,s)= \frac{1}{\beta|s|\sqrt{n}} \log\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp^{s}\rp,\end{aligned}$$ where $s$ and $\beta>0$ are real parameters. Similarly to what we did in our companion paper [@Stojnicgscomp16], we will study this function in a random medium. Namely, we will consider $(m\times n)$ dimensional matrices $G\in \mR^{m\times n}$ with i.i.d. standard normal components. Moreover, we will assume that $u^{(4)}$ is also a standard normal random variable but independent of $G$. In such a random medium (and especially if the dimensions of $G$ are large) the expected value of the above function is usually its most relevant value. Let this expected value be $\xi(\calX,\calY,\beta,s)$. 
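As an editorial illustration (not part of the original exposition): since $f(G,u^{(4)},\calX,\calY,\beta,s)$ in (\[eq:genanal1\]) is an explicit finite sum, both it and a Monte Carlo estimate of its expectation $\xi(\calX,\calY,\beta,s)$ are straightforward to evaluate numerically. The sketch below assumes the columns of the arrays `X` and `Y` hold the elements of $\calX$ and $\calY$; the function names are ours.

```python
import numpy as np

def f_val(G, u4, X, Y, beta, s):
    """Evaluate f(G, u^(4), X, Y, beta, s).

    X is (n, l) with columns x^(i1); Y is (m, l) with columns y^(i2)."""
    n = X.shape[0]
    # inner[i2, i1] = (y^(i2))^T G x^(i1) + ||x^(i1)||_2 ||y^(i2)||_2 u^(4)
    inner = Y.T @ G @ X + np.outer(np.linalg.norm(Y, axis=0),
                                   np.linalg.norm(X, axis=0)) * u4
    # per i1: log of the inner sum over i2; then raise to s and sum over i1
    inner_log = np.log(np.exp(beta * inner).sum(axis=0))
    return np.log(np.exp(s * inner_log).sum()) / (beta * abs(s) * np.sqrt(n))

def xi_mc(X, Y, beta, s, trials=2000, rng=None):
    """Monte Carlo estimate of xi = E_{G, u^(4)} f(G, u^(4), X, Y, beta, s)."""
    rng = np.random.default_rng(rng)
    m, n = Y.shape[0], X.shape[0]
    return np.mean([f_val(rng.standard_normal((m, n)), rng.standard_normal(),
                          X, Y, beta, s) for _ in range(trials)])
```

For large exponents one would replace the explicit `log`/`exp` pair by a numerically stable log-sum-exp; the plain form above suffices for small illustrative instances.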
Then we set $$\begin{aligned} \label{eq:genanal2} \xi(\calX,\calY,\beta,s) & \triangleq & \mE_{G,u^{(4)}} f(G,u^{(4)},\calX,\calY,\beta,s) \nonumber \\ & = & \mE_{G,u^{(4)}}\frac{1}{\beta|s|\sqrt{n}} \log\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp^{s}\rp.\end{aligned}$$ Following in the footsteps of [@Stojnicgscomp16], we will consider the following interpolating function $\psi(\cdot)$ as an object convenient for studying properties of $\xi(\calX,\calY,\beta,s)$ $$\begin{gathered} \label{eq:genanal3} \psi(\calX,\calY,\beta,s,t) = \mE_{G,u^{(4)},\u^{(2)},\h} \frac{1}{\beta|s|\sqrt{n}} \\ \times \log\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp \sqrt{t}(\y^{(i_2)})^T G\x^{(i_1)}+\sqrt{1-t}\|\x^{(i_1)}\|_2 (\y^{(i_2)})^T\u^{(2)}+\sqrt{t}\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)} +\sqrt{1-t}\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp} \rp^{s}\rp.\end{gathered}$$ In (\[eq:genanal3\]), $\u^{(2)}$ and $\h$ are $m$ and $n$ dimensional vectors of i.i.d. standard normals, respectively; they are assumed to be independent of each other and of $G$ and $u^{(4)}$ ($\mE$ denotes the expectation with respect to any randomness under the expectation; sometimes $\mE$ will have a subscript to emphasize the underlying randomness). Clearly, $\xi(\calX,\calY,\beta,s)=\psi(\calX,\calY,\beta,s,1)$ and given that $\psi(\calX,\calY,\beta,s,0)$ is typically easier to study than $\psi(\calX,\calY,\beta,s,1)$ we will try to connect $\psi(\calX,\calY,\beta,s,1)$ to $\psi(\calX,\calY,\beta,s,0)$ as a way of connecting $\xi(\calX,\calY,\beta,s)$ to $\psi(\calX,\calY,\beta,s,0)$. 
We will find it convenient below to set $$\begin{aligned} \label{eq:genanal4} \u^{(i_1,1)} & = & \frac{G\x^{(i_1)}}{\|\x^{(i_1)}\|_2} \nonumber \\ \u^{(i_1,3)} & = & \frac{\h^T\x^{(i_1)}}{\|\x^{(i_1)}\|_2}.\end{aligned}$$ Denoting by $G_{j,1:n}$ the $j$-th row of $G$ and by $\u_j^{(i_1,1)}$ the $j$-th component of $\u^{(i_1,1)}$ from (\[eq:genanal4\]) we have $$\begin{aligned} \label{eq:genanal5} \u_j^{(i_1,1)} & = & \frac{G_{j,1:n}\x^{(i_1)}}{\|\x^{(i_1)}\|_2},1\leq j\leq m.\end{aligned}$$ Also, one trivially has for any fixed $i_1$ that the elements of $\u^{(i_1,1)}$, $\u^{(2)}$, and $\u^{(i_1,3)}$ are i.i.d. standard normals. (\[eq:genanal3\]) can then be rewritten as $$\begin{gathered} \label{eq:genanal6} \psi(\calX,\calY,\beta,s,t) = \mE_{G,u^{(4)},\u^{(2)},\h} \frac{1}{\beta|s|\sqrt{n}} \\ \times \log\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta_{i_1} \lp \sqrt{t}(\y^{(i_2)})^T \u^{(i_1,1)}+\sqrt{1-t} (\y^{(i_2)})^T\u^{(2)} +\sqrt{t}\|\y^{(i_2)}\|_2 u^{(4)}+\sqrt{1-t}\|\y^{(i_2)}\|_2\u^{(i_1,3)}\rp} \rp^{s}\rp,\end{gathered}$$ where $\beta_{i_1}=\beta\|\x^{(i_1)}\|_2$. 
To facilitate the exposition we also set $$\begin{aligned} \label{eq:genanal7} B^{(i_1,i_2)} & \triangleq & \sqrt{t}(\y^{(i_2)})^T\u^{(i_1,1)}+\sqrt{1-t} (\y^{(i_2)})^T\u^{(2)} \nonumber \\ A^{(i_1,i_2)} & \triangleq & e^{\beta_{i_1}(B^{(i_1,i_2)}+\sqrt{t}\|\y^{(i_2)}\|_2 u^{(4)}+\sqrt{1-t}\|\y^{(i_2)}\|_2\u^{(i_1,3)})}\nonumber \\ C^{(i_1)} & \triangleq & \sum_{i_2=1}^{l}A^{(i_1,i_2)}\nonumber \\ Z & \triangleq & \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta_{i_1} \lp \sqrt{t}(\y^{(i_2)})^T \u^{(i_1,1)}+\sqrt{1-t} (\y^{(i_2)})^T\u^{(2)} +\sqrt{t}\|\y^{(i_2)}\|_2 u^{(4)}+\sqrt{1-t}\|\y^{(i_2)}\|_2\u^{(i_1,3)}\rp} \rp^{s}\nonumber \\ & = & \sum_{i_1=1}^{l} \lp \sum_{i_2=1}^{l} A^{(i_1,i_2)}\rp^s =\sum_{i_1=1}^{l} (C^{(i_1)})^s.\end{aligned}$$ It is now relatively easy to see that (\[eq:genanal6\]) and (\[eq:genanal7\]) give $$\begin{aligned} \label{eq:genanal8} \psi(\calX,\calY,\beta,s,t) & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{1}{\beta|s|\sqrt{n}} \log(Z).\end{aligned}$$ Our main object of study below will be the properties of $\psi(\calX,\calY,\beta,s,t)$. In particular we will study its monotonicity and show that $\psi(\calX,\calY,\beta,s,t)$ is a non-increasing (basically decreasing) function of $t$. 
We start with the analysis of its derivative $$\begin{aligned} \label{eq:genanal9} \frac{d\psi(\calX,\calY,\beta,s,t)}{dt} & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{1}{\beta|s|\sqrt{n}} \frac{d\log Z}{dt}\nonumber \\ & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{1}{Z\beta|s|\sqrt{n}} \frac{d\lp \sum_{i_1=1}^{l} \lp \sum_{i_2=1}^{l} A^{(i_1,i_2)}\rp^s \rp }{dt}\nonumber \\ & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{s}{Z\beta|s|\sqrt{n}} \sum_{i_1=1}^{l} (C^{(i_1)})^{s-1} \nonumber \\ & & \times \sum_{i_2=1}^{l}\beta_{i_1}A^{(i_1,i_2)}\lp \frac{dB^{(i_1,i_2)}}{dt}+\frac{\|\y^{(i_2)}\|_2 u^{(4)}}{2\sqrt{t}}-\frac{\|\y^{(i_2)}\|_2 \u^{(i_1,3)}}{2\sqrt{1-t}}\rp.\end{aligned}$$ Utilizing (\[eq:genanal7\]) we find $$\label{eq:genanal10} \frac{dB^{(i_1,i_2)}}{dt} = \frac{d\lp\sqrt{t}(\y^{(i_2)})^T\u^{(i_1,1)}+\sqrt{1-t} (\y^{(i_2)})^T\u^{(2)}\rp}{dt}= \sum_{j=1}^{m}\lp \frac{\y_j^{(i_2)}\u_j^{(i_1,1)}}{2\sqrt{t}}-\frac{\y_j^{(i_2)}\u_j^{(2)}}{2\sqrt{1-t}}\rp.$$ Combining (\[eq:genanal9\]) and (\[eq:genanal10\]) we obtain $$\begin{aligned} \label{eq:genanal11} \frac{d\psi(\calX,\calY,\beta,s,t)}{dt} & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{1}{\beta|s|\sqrt{n}} \frac{d\log Z}{dt}\nonumber \\ & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{1}{Z\beta|s|\sqrt{n}} \frac{d\lp \sum_{i_1=1}^{l} \lp \sum_{i_2=1}^{l} A^{(i_1,i_2)}\rp^s \rp }{dt}\nonumber \\ & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{s}{Z\beta|s|\sqrt{n}} \sum_{i_1=1}^{l} (C^{(i_1)})^{s-1} \nonumber \\ & & \times \sum_{i_2=1}^{l}\beta_{i_1}A^{(i_1,i_2)}\lp \sum_{j=1}^{m}\lp \frac{\y_j^{(i_2)}\u_j^{(i_1,1)}}{2\sqrt{t}}-\frac{\y_j^{(i_2)}\u_j^{(2)}}{2\sqrt{1-t}}\rp+\frac{\|\y^{(i_2)}\|_2 u^{(4)}}{2\sqrt{t}}-\frac{\|\y^{(i_2)}\|_2 \u^{(i_1,3)}}{2\sqrt{1-t}}\rp.\nonumber \\\end{aligned}$$ Each of the terms in the above sum we will handle separately. 
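Each of the terms above is evaluated with the same device, Gaussian integration by parts: for jointly zero-mean Gaussian coordinates $u_1,\dots,u_k$ and a smooth function $F$, $\mE u_iF(u)=\sum_j \mE(u_iu_j)\,\mE\frac{\partial F}{\partial u_j}$. The following numeric sanity check of this identity is our own illustration (the test function $F(u,v)=e^{au+bv}$ and all constants are arbitrary choices, not from the paper):

```python
import numpy as np

# Gaussian integration by parts for a correlated standard normal pair (u, v):
# E[u F(u, v)] = E[u^2] E[dF/du] + E[uv] E[dF/dv].
# With F(u, v) = exp(a*u + b*v): dF/du = a*F and dF/dv = b*F,
# so the right-hand side is (a + b*rho) * E[F].
rng = np.random.default_rng(0)
rho, a, b, N = 0.6, 0.3, 0.5, 400_000
z1, z2 = rng.standard_normal(N), rng.standard_normal(N)
u = z1
v = rho * z1 + np.sqrt(1 - rho**2) * z2   # E[uv] = rho, both standard normal
F = np.exp(a * u + b * v)
lhs = np.mean(u * F)                      # Monte Carlo E[u F(u, v)]
rhs = (a + b * rho) * np.mean(F)          # (1*a + rho*b) * E[F]
# lhs and rhs agree up to Monte Carlo error
```

The derivations below apply exactly this identity, with $F$ taken to be the ratios $\frac{(C^{(i_1)})^{s-1}A^{(i_1,i_2)}(\cdot)}{Z}$ and the covariances given by the normalized inner products of the $\x^{(i_1)}$.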
To do so and to facilitate the presentation as much as possible we will try to parallel what was done in [@Stojnicgscomp16]. The calculations though will be substantially different. Computing $\frac{d\psi(\calX,\calY,\beta,s,t)}{dt}$ {#sec:compderivative} --------------------------------------------------- As mentioned above, we will separately handle all the terms appearing in (\[eq:genanal11\]). ### Finding $\mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(i_1,1)}\y_j^{(i_2)}}{Z}$ {#sec:hand1} We start with the following standard utilization of the Gaussian integration by parts. $$\begin{gathered} \label{eq:genanal12} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(i_1,1)}\y_j^{(i_2)}}{Z} = \mE (\sum_{p_1=1,p_1\neq i_1}^{l} \mE (\u_j^{(i_1,1)}\u_j^{(p_1,1)})\frac{d}{d\u_j^{(p_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp \\ +\mE (\u_j^{(i_1,1)}\u_j^{(i_1,1)})\frac{d}{d\u_j^{(i_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp).\end{gathered}$$ Clearly, $\mE (\u_j^{(i_1,1)}\u_j^{(p_1,1)})=\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2}$ and we also have $$\begin{gathered} \label{eq:genanal13} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(i_1,1)}\y_j^{(i_2)}}{Z} = \mE (\sum_{p_1=1,p_1\neq i_1}^{l} \frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2}\frac{d}{d\u_j^{(p_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp \\ +\frac{(\x^{(i_1)})^T\x^{(i_1)}}{\|\x^{(i_1)}\|_2\|\x^{(i_1)}\|_2}\frac{d}{d\u_j^{(i_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp ).\end{gathered}$$ For $p_1\neq i_1$ we obtain the following $$\label{eq:genanal14} \frac{d}{d\u_j^{(p_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp=(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}\frac{d}{d\u_j^{(p_1,1)}}\lp \frac{1}{Z}\rp=-\frac{(C^{(i_1)})^{s-1} 
A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2}\frac{dZ}{d\u_j^{(p_1,1)}}.$$ Now we also have $$\begin{gathered} \label{eq:genanal14a} \frac{dZ}{d\u_j^{(p_1,1)}}=\frac{d\sum_{i_1=1}^{l} (C^{(i_1)})^s}{d\u_j^{(p_1,1)}} =s\sum_{i_1=1}^{l} (C^{(i_1)})^{s-1}\frac{d(C^{(i_1)})}{d\u_j^{(p_1,1)}}\\ =s\sum_{i_1=1}^{l} (C^{(i_1)})^{s-1}\sum_{i_2=1}^{l}\frac{d(A^{(i_1,i_2)})}{d\u_j^{(p_1,1)}} =s (C^{(p_1)})^{s-1}\sum_{i_2=1}^{l}\frac{d(A^{(p_1,i_2)})}{d\u_j^{(p_1,1)}}.\end{gathered}$$ Moreover, from (\[eq:genanal7\]) we have $$\label{eq:genanal15} \frac{d B^{(p_1,i_2)}}{d\u_j^{(p_1,1)}} = \y_j^{(i_2)}\sqrt{t},$$ and then $$\label{eq:genanal16} \frac{d(A^{(p_1,i_2)})}{d\u_j^{(p_1,1)}}=\beta_{p_1}A^{(p_1,i_2)}\frac{d(B^{(p_1,i_2)})}{d\u_j^{(p_1,1)}} =\beta_{p_1}A^{(p_1,i_2)}\y_j^{(i_2)}\sqrt{t}.$$ Combining (\[eq:genanal14\]), (\[eq:genanal14a\]), and (\[eq:genanal16\]) we obtain $$\begin{aligned} \label{eq:genanal17} \frac{d}{d\u_j^{(p_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp & = & -\frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(p_1,p_2)})}{d\u_j^{(p_1,1)}}\nonumber \\ & = & -\frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{t}.\end{aligned}$$ For $p_1=i_1$ we have $$\begin{aligned} \label{eq:genanal18} \frac{d}{d\u_j^{(i_1,1)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp & = & \frac{\y_j^{(i_2)}}{Z}\frac{d}{d\u_j^{(i_1,1)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp-\frac{ (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} \frac{dZ}{d\u_j^{(i_1,1)}}.\nonumber \\\end{aligned}$$ From (\[eq:genanal14a\]) and (\[eq:genanal17\]) we have $$\begin{gathered} \label{eq:genanal18a} \frac{dZ}{d\u_j^{(i_1,1)}}=\frac{d\sum_{i_1=1}^{l} (C^{(i_1)})^s}{d\u_j^{(i_1,1)}} =s (C^{(i_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(i_1,p_2)})}{d\u_j^{(i_1,1)}}=s (C^{(i_1)})^{s-1}\sum_{p_2=1}^{l} \beta_{i_1}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{t}.\end{gathered}$$ 
Also, $$\begin{aligned} \label{eq:genanal18b} \frac{d}{d\u_j^{(i_1,1)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp & = & (C^{(i_1)})^{s-1} \frac{dA^{(i_1,i_2)} }{d\u_j^{(i_1,1)}}+ A^{(i_1,i_2)}\frac{d(C^{(i_1)})^{s-1}}{d\u_j^{(i_1,1)}} \nonumber \\ & = & (C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{t}.\nonumber \\\end{aligned}$$ A combination of (\[eq:genanal13\]), (\[eq:genanal17\]), (\[eq:genanal18\]), (\[eq:genanal18a\]), and (\[eq:genanal18b\]) gives $$\begin{gathered} \label{eq:genanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(i_1,1)}\y_j^{(i_2)}}{Z} \\ = \mE \lp \frac{\y_j^{(i_2)}}{Z}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{t}\rp \rp \\ - \mE \lp\sum_{p_1=1}^{l} \frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{t}\rp.\end{gathered}$$ ### Finding $\mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(2)}\y_j^{(i_2)}}{Z}$ {#sec:hand2} We start with the following standard utilization of the Gaussian integration by parts. 
$$\begin{gathered} \label{eq:genAanal12} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(2)}\y_j^{(i_2)}}{Z} = \mE(\mE (\u_j^{(2)}\u_j^{(2)})\frac{d}{d\u_j^{(2)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp).\end{gathered}$$ Obviously $\mE (\u_j^{(2)}\u_j^{(2)})=1$ and we also have $$\begin{aligned} \label{eq:genAanal18} \frac{d}{d\u_j^{(2)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z}\rp & = & \frac{\y_j^{(i_2)}}{Z}\frac{d}{d\u_j^{(2)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp-\frac{ (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} \frac{dZ}{d\u_j^{(2)}}.\nonumber \\\end{aligned}$$ Moreover, since $\u^{(2)}$ appears in every term of $Z$, we find $$\begin{gathered} \label{eq:genAanal18a} \frac{dZ}{d\u_j^{(2)}}=\frac{d\sum_{p_1=1}^{l} (C^{(p_1)})^s}{d\u_j^{(2)}} =s \sum_{p_1=1}^{l}(C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(p_1,p_2)})}{d\u_j^{(2)}}=s \sum_{p_1=1}^{l}(C^{(p_1)})^{s-1}\sum_{p_2=1}^{l} \beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}.\end{gathered}$$ It is not that hard to obtain the following as well $$\begin{gathered} \label{eq:genAanal18b} \frac{d}{d\u_j^{(2)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp = (C^{(i_1)})^{s-1} \frac{dA^{(i_1,i_2)} }{d\u_j^{(2)}}+ A^{(i_1,i_2)}\frac{d(C^{(i_1)})^{s-1}}{d\u_j^{(2)}} \\ = (C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{1-t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}.\end{gathered}$$ Combining (\[eq:genAanal12\]), (\[eq:genAanal18\]), (\[eq:genAanal18a\]), and (\[eq:genAanal18b\]) we have $$\begin{gathered} \label{eq:genAanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(2)}\y_j^{(i_2)}}{Z} \\ = \mE \lp\frac{\y_j^{(i_2)}}{Z}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{1-t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}\rp \rp \\ - \mE \lp\sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s 
(C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}\rp.\end{gathered}$$ ### Finding $\mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u^{(i_1,3)}}{Z}$ {#sec:hand3} We closely follow what we presented above and start with the following utilization of the Gaussian integration by parts $$\begin{gathered} \label{eq:genBanal12} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u^{(i_1,3)}}{Z} =\mE(\sum_{p_1=1,p_1\neq i_1}^{l}\mE (\u^{(i_1,3)}\u^{(p_1,3)})\frac{d}{d\u^{(p_1,3)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp)\\ +\mE(\mE (\u^{(i_1,3)}\u^{(i_1,3)})\frac{d}{d\u^{(i_1,3)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp).\end{gathered}$$ Clearly, $\mE (\u^{(i_1,3)}\u^{(p_1,3)})=\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2}$ and for $p_1\neq i_1$ we obtain the following $$\label{eq:genBanal14} \frac{d}{d\u^{(p_1,3)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp=(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\frac{d}{d\u^{(p_1,3)}}\lp \frac{1}{Z}\rp=-\frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2}\frac{dZ}{d\u^{(p_1,3)}}.$$ Following (\[eq:genanal14a\]) we also have $$\label{eq:genBanal14a} \frac{dZ}{d\u^{(p_1,3)}}=\frac{d\sum_{i_1=1}^{l} (C^{(i_1)})^s}{d\u^{(p_1,3)}} =s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(p_1,p_2)})}{d\u^{(p_1,3)}}.$$ From (\[eq:genanal7\]) we find $$\label{eq:genBanal16} \frac{d(A^{(p_1,p_2)})}{d\u^{(p_1,3)}} =\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}.$$ Combining (\[eq:genBanal14\]), (\[eq:genBanal14a\]), and (\[eq:genBanal16\]) we obtain $$\begin{aligned} \label{eq:genBanal17} \frac{d}{d\u^{(p_1,3)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp & = & -\frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(p_1,p_2)})}{d\u^{(p_1,3)}}\nonumber \\ & = & -\frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s 
(C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}.\end{aligned}$$ Also, we easily have $\mE (\u^{(i_1,3)}\u^{(i_1,3)})=1$ and $$\begin{aligned} \label{eq:genBanal18} \frac{d}{d\u^{(i_1,3)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp & = & \frac{1}{Z}\frac{d}{d\u^{(i_1,3)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp-\frac{ (C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} \frac{dZ}{d\u^{(i_1,3)}}.\nonumber \\\end{aligned}$$ Moreover, $$\begin{gathered} \label{eq:genBanal18a} \frac{dZ}{d\u^{(i_1,3)}}=\frac{d\sum_{p_1=1}^{l} (C^{(p_1)})^s}{d\u^{(i_1,3)}} =s (C^{(i_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(i_1,p_2)})}{d\u^{(i_1,3)}}=s (C^{(i_1)})^{s-1}\sum_{p_2=1}^{l} \beta_{i_1}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}.\end{gathered}$$ Similarly to what was done in (\[eq:genanal18b\]) we find $$\begin{aligned} \label{eq:genBanal18b} \frac{d}{d\u^{(i_1,3)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp & = & (C^{(i_1)})^{s-1} \frac{dA^{(i_1,i_2)} }{d\u^{(i_1,3)}}+ A^{(i_1,i_2)}\frac{d(C^{(i_1)})^{s-1}}{d\u^{(i_1,3)}} \nonumber \\ & = & (C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{1-t}\nonumber \\ & & +(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}.\nonumber \\\end{aligned}$$ Combining (\[eq:genBanal12\]), (\[eq:genBanal18\]), (\[eq:genBanal18a\]), and (\[eq:genBanal18b\]) we find $$\begin{gathered} \label{eq:genBanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u^{(i_1,3)}}{Z} \\ = \mE \lp\frac{1}{Z}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{1-t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}\rp \rp \\ - \mE \lp\sum_{p_1=1}^{l}\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}\rp.\end{gathered}$$ ### Finding 
$\mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}u^{(4)}}{Z}$ {#sec:hand4} We again closely follow what we presented above and start with the following utilization of the Gaussian integration by parts $$\label{eq:genCanal12} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}u^{(4)}}{Z} =\mE(\mE (u^{(4)}u^{(4)})\frac{d}{du^{(4)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp).$$ Clearly, $\mE (u^{(4)}u^{(4)})=1$. Further, we have $$\begin{aligned} \label{eq:genCanal18} \frac{d}{du^{(4)}}\lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z}\rp & = & \frac{1}{Z}\frac{d}{d u^{(4)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp-\frac{ (C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} \frac{dZ}{d u^{(4)}}.\nonumber \\\end{aligned}$$ Similarly to (\[eq:genBanal18a\]) we find $$\label{eq:genCanal18a} \frac{dZ}{du^{(4)}}=\frac{d\sum_{p_1=1}^{l} (C^{(p_1)})^s}{du^{(4)}} =s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\frac{d(A^{(p_1,p_2)})}{du^{(4)}}=s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l} \beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}.$$ Following closely (\[eq:genBanal18b\]) (and earlier (\[eq:genanal18b\])) we also find $$\begin{gathered} \label{eq:genCanal18b} \frac{d}{du^{(4)}}\lp (C^{(i_1)})^{s-1} A^{(i_1,i_2)}\rp = (C^{(i_1)})^{s-1} \frac{dA^{(i_1,i_2)} }{du^{(4)}}+ A^{(i_1,i_2)}\frac{d(C^{(i_1)})^{s-1}}{d u^{(4)}} \\ = (C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{t} +(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}.\end{gathered}$$ A combination of (\[eq:genCanal12\]), (\[eq:genCanal18\]), (\[eq:genCanal18a\]), and (\[eq:genCanal18b\]) gives $$\begin{gathered} \label{eq:genCanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}u^{(4)}}{Z} \\ = \mE 
\lp\frac{1}{Z}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}A^{(i_1,i_2)}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}\rp \rp \\ - \mE \lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}\rp.\end{gathered}$$ ### Connecting all pieces together {#sec:conalt} Using (\[eq:genanal11\]), (\[eq:genanal19\]), (\[eq:genAanal19\]), (\[eq:genBanal19\]), and (\[eq:genCanal19\]) we obtain $$\begin{aligned} \label{eq:conalt1} \frac{d\psi(\calX,\calY,\beta,s,t)}{dt} = \frac{s}{2\beta|s|\sqrt{n}} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} (-S_1+S_2+S_3-S_4)\end{aligned}$$ where $$\begin{aligned} \label{eq:conalt1a} S_1 & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1}\sum_{j=1}^{m} \lp\sum_{p_1=1}^{l} \frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\rp\nonumber \\ S_2 & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1}\sum_{j=1}^{m} \lp\sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\rp\nonumber \\ S_3 & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1} \|\y^{(i_2)}\|_2 \lp\sum_{p_1=1}^{l}\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\rp\nonumber \\ S_4 & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1} \|\y^{(i_2)}\|_2 \lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^2} s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\rp.\end{aligned}$$ From (\[eq:conalt1a\]) we further have $$\begin{gathered} \label{eq:conalt1b} S_2-S_1 = s\beta^2 \sum_{i_1=1}^{l} \sum_{p_1=1}^{l} 
\frac{(C^{(i_1)})^{s}(C^{(p_1)})^{s}(\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2-(\x^{(i_1)})^T\x^{(p_1)})}{Z^2} \\ \times \lp\sum_{i_2=1}^{l}\sum_{p_2=1}^{l} \frac{A^{(i_1,i_2)}A^{(p_1,p_2)}}{C^{(i_1)}C^{(p_1)}} (\y^{(i_2)})^T\y^{(p_2)}\rp,\end{gathered}$$ and in a similar fashion $$\begin{gathered} \label{eq:conalt1c} S_4-S_3 = s\beta^2 \sum_{i_1=1}^{l} \sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s}(C^{(p_1)})^{s}(\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2-(\x^{(i_1)})^T\x^{(p_1)})}{Z^2} \\ \times \lp\sum_{i_2=1}^{l}\sum_{p_2=1}^{l} \frac{A^{(i_1,i_2)}A^{(p_1,p_2)}}{C^{(i_1)}C^{(p_1)}} \|\y^{(i_2)}\|_2\|\y^{(p_2)}\|_2\rp.\end{gathered}$$ Combining (\[eq:conalt1a\]), (\[eq:conalt1b\]), and (\[eq:conalt1c\]) we finally have $$\begin{gathered} \label{eq:conalt2} \frac{d\psi(\calX,\calY,\beta,s,t)}{dt} = -\frac{s^2\beta}{2|s|\sqrt{n}} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \sum_{i_1=1}^{l} \sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s}(C^{(p_1)})^{s}(\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2-(\x^{(i_1)})^T\x^{(p_1)})}{Z^2} \\ \times \lp\sum_{i_2=1}^{l}\sum_{p_2=1}^{l} \frac{A^{(i_1,i_2)}A^{(p_1,p_2)}}{C^{(i_1)}C^{(p_1)}} (\|\y^{(i_2)}\|_2\|\y^{(p_2)}\|_2-(\y^{(i_2)})^T\y^{(p_2)})\rp.\end{gathered}$$ Now it easily follows that $\frac{d\psi(\calX,\calY,\beta,s,t)}{dt}\leq 0$ and function $\psi(\calX,\calY,\beta,s,t)$ is indeed non-increasing (decreasing) in $t$. We summarize the obtained results in the following theorem. \[thm:thm1\] Let $G\in\mR^{m \times n},u^{(4)}\in\mR^1,\u^{(2)}\in\mR^{m\times 1}$, and $\h\in\mR^{n\times 1}$ all have i.i.d. standard normal components ($G$, $u^{(4)}$, $\u^{(2)}$, and $\h$ are then independent of each other as well). Assume that set $\calX=\{\x^{(1)},\x^{(2)},\dots,\x^{(l)}\}$, where $\x^{(i)}\in \mR^{n},1\leq i\leq l$, and set $\calY=\{\y^{(1)},\y^{(2)},\dots,\y^{(l)}\}$, where $\y^{(i)}\in \mR^{m},1\leq i\leq l$ are given and that $\beta\geq 0$ and $s$ are real numbers. 
One then has that function $\psi(\calX,\calY,\beta,s,t)$ $$\begin{gathered} \label{eq:thm1eq1} \psi(\calX,\calY,\beta,s,t)= \mE_{G,u^{(4)},\u^{(2)},\h} \frac{1}{\beta|s|\sqrt{n}} \\ \times \log\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp \sqrt{t}(\y^{(i_2)})^T G\x^{(i_1)}+\sqrt{1-t}\|\x^{(i_1)}\|_2 (\y^{(i_2)})^T\u^{(2)}+\sqrt{t}\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)} +\sqrt{1-t}\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp} \rp^{s}\rp,\end{gathered}$$ is non-increasing (decreasing) in $t$. Follows from the above presentation. Assume the setup of Theorem \[thm:thm1\]. Then we also have $$\begin{aligned} \label{eq:co1eq1} \psi(\calX,\calY,\beta,s,t)= \psi(\calX,\calY,\beta,s,0)+\int_{0}^{t}\frac{d\psi(\calX,\calY,\beta,s,t)}{dt}dt,\end{aligned}$$ as well as the following comparison principle $$\begin{aligned} \label{eq:co1eq2} \psi(\calX,\calY,\beta,s,0) \geq \psi(\calX,\calY,\beta,s,t)\geq \psi(\calX,\calY,\beta,s,1).\end{aligned}$$ It follows automatically from the above theorem once one notes that $\frac{d\psi(\calX,\calY,\beta,s,t)}{dt}\leq 0$.

Numerical experiments {#sec:genconsim}
---------------------

The theoretical results presented above establish a very powerful tool for dealing with random processes. Below we look at them from a numerical point of view, i.e., through numerical simulations.
For simplicity we chose $m=5$, $n=5$, $l=10$, and selected set $\calX$ as the columns of the following matrix (basically $\calX$ was selected the same way as the corresponding set in [@Stojnicgscomp16]) $$X^{+}=\begin{bmatrix} -0.7998 & 0.1004 & -0.7599 & 0.6616 & 0.5864 & -0.4010 & -0.0148 & -0.8320 & 0.3187 & -0.4861 \\ 0.1760 & 0.0704 & 0.1056 & -0.1369 & -0.6259 & -0.5289 & -0.3740 & 0.3140 & 0.6299 & -0.5494 \\ 0.0806 & -0.9085 & -0.3381 & -0.1970 & -0.1438 & 0.4863 & 0.5832 & 0.0840 & -0.2299 & -0.2647 \\ 0.5487 & -0.3120 & -0.5447 & 0.5673 & 0.4870 & -0.5239 & 0.0407 & -0.2955 & 0.3913 & 0.5113 \\ -0.1476 & 0.2497 & -0.0208 & 0.4276 & 0.0808 & -0.2202 & -0.7198 & 0.3389 & 0.5438 & -0.3611 \end{bmatrix}.$$ One then obviously has $$\label{eq:sim1} \calX^{+}=\{X^{+}_{:,1},X^{+}_{:,2},\dots,X^{+}_{:,l}\}.$$ We recall the observation from [@Stojnicgscomp16] that the set $\calX^{+}$ (and the matrix $X^{+}$) is for all practical purposes randomly chosen (an added scaling makes $\|X^{+}_{:,i_1}\|_2=1,1\leq i_1\leq l$). We also selected set $\calY$ as the columns of the following matrix $$Y^{+}=\begin{bmatrix} -0.4639 & 0.7324 & -0.4828 & 0.0280 & -0.4016 & -0.6764 & 0.6161 & 0.4281 & -0.3831 & 0.0699 \\ 0.0416 & -0.3678 & 0.0144 & -0.4856 & 0.4880 & -0.6861 & 0.1266 & 0.5132 & 0.0350 & -0.0308 \\ -0.6522 & 0.1775 & 0.2449 & -0.2417 & -0.1255 & 0.2355 & 0.0859 & -0.1498 & 0.2410 & -0.7208 \\ -0.5981 & -0.1078 & 0.4879 & -0.3456 & 0.5796 & -0.0856 & 0.6892 & 0.1325 & 0.8628 & -0.1637 \\ -0.0037 & 0.5340 & 0.6846 & 0.7652 & -0.4989 & -0.0946 & -0.3492 & -0.7165 & -0.2225 & -0.6692 \end{bmatrix}.$$ Clearly, $$\label{eq:sim1a} \calY^{+}=\{Y^{+}_{:,1},Y^{+}_{:,2},\dots,Y^{+}_{:,l}\}.$$ Similarly to what was mentioned above for set $\calX^{+}$, the set $\calY^{+}$ (and the matrix $Y^{+}$) is again for all practical purposes randomly chosen (to make everything a bit neater we again scaled all the columns of $Y^{+}$ so that $\|Y^{+}_{:,i_1}\|_2=1,1\leq i_1\leq l$).
The numerical experiments were conducted in a fashion very similar to the one from [@Stojnicgscomp16]. Namely, we simulated derivatives $\frac{d\psi(\calX,\calY,\beta,s,t)}{dt}$ using both (\[eq:genanal11\]) and (\[eq:conalt2\]). We refer to the use of (\[eq:genanal11\]) as the standard interpolation and to the use of (\[eq:conalt2\]) as the computed interpolation. We then computed $\psi(\calX,\calY,\beta,s,t)$ using (\[eq:co1eq1\]). Moreover, we additionally simulated $\psi(\calX,\calY,\beta,s,t)$ using (\[eq:genanal8\]), which we view as a direct way of simulation without any interpolating computations. We set $\beta=3$ and averaged all random quantities over a set of $5e4$ experiments. To parallel the presentation given in [@Stojnicgscomp16] as much as possible, we here also simulated two different scenarios with all other parameters being the same, except that in one of the scenarios $s=1$ and in the other $s=-1$. Figure \[fig:gensplus1xnorm1psi\] and Table \[tab:gensplus1xnorm1psi\] contain the results obtained for $s=1$. Following the standard that we set in [@Stojnicgscomp16], Figure \[fig:gensplus1xnorm1psi\] shows the entire range of $t$ (i.e., it shows the values for $t\in(0,1)$) whereas Table \[tab:gensplus1xnorm1psi\] focuses on several particular values of $t$ and shows concrete values of all key quantities. As both Figure \[fig:gensplus1xnorm1psi\] and Table \[tab:gensplus1xnorm1psi\] show, there is solid agreement among all presented results.
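Since (\[eq:thm1eq1\]) fully specifies the interpolating function, the direct simulation is straightforward to reproduce. The following is a minimal illustrative numpy sketch (our own, not the code behind the tables below; the function name and Monte Carlo setup are assumptions), written for unit-norm columns such as those of $X^{+}$ and $Y^{+}$:

```python
import numpy as np

def psi(X, Y, beta, s, t, trials=20000, rng=None):
    """Monte Carlo estimate of the interpolating function psi(calX, calY, beta, s, t).

    X is n x l (columns form calX), Y is m x l (columns form calY);
    columns are assumed unit-norm, as for X^+ and Y^+ above.
    """
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    m, _ = Y.shape
    st, s1t = np.sqrt(t), np.sqrt(1.0 - t)
    vals = np.empty(trials)
    for k in range(trials):
        G = rng.standard_normal((m, n))
        u4 = rng.standard_normal()
        u2 = rng.standard_normal(m)
        h = rng.standard_normal(n)
        # exponent[i2, i1] = beta * ( sqrt(t) (y^(i2))^T G x^(i1)
        #   + sqrt(1-t) (y^(i2))^T u2 + sqrt(t) u4 + sqrt(1-t) h^T x^(i1) )
        E = beta * (st * (Y.T @ G @ X) + s1t * (Y.T @ u2)[:, None]
                    + st * u4 + s1t * (h @ X)[None, :])
        inner = np.exp(E).sum(axis=0)              # sum over i2, one value per i1
        vals[k] = np.log((inner ** s).sum()) / (beta * abs(s) * np.sqrt(n))
    return vals.mean()
```

Evaluating this at a grid of $t$ values reproduces the qualitative behavior reported here: the estimates decrease monotonically in $t$, in line with Theorem \[thm:thm1\].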
  $ t$         $\frac{d\psi}{dt}$; (\[eq:genanal11\])   $\frac{d\psi}{dt}$; (\[eq:conalt2\])   $\psi$; (\[eq:genanal11\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:conalt2\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:genanal8\])
  ------------ ---------------------------------------- -------------------------------------- ------------------------------------------------ ---------------------------------------------- ---------------------------
  $ 0.1000 $   $ -0.1438 $   $ -0.1384 $   ${\textcolor{blue}{\mathbf{ 1.4514 }}}$   ${\textcolor{blue}{\mathbf{ 1.4511 }}}$   $\mathbf{ 1.4514 }$
  $ 0.2000 $   $ -0.1613 $   $ -0.1574 $   ${\textcolor{blue}{\mathbf{ 1.4365 }}}$   ${\textcolor{blue}{\mathbf{ 1.4361 }}}$   $\mathbf{ 1.4379 }$
  $ 0.3000 $   $ -0.1819 $   $ -0.1794 $   ${\textcolor{blue}{\mathbf{ 1.4193 }}}$   ${\textcolor{blue}{\mathbf{ 1.4190 }}}$   $\mathbf{ 1.4162 }$
  $ 0.4000 $   $ -0.2003 $   $ -0.2019 $   ${\textcolor{blue}{\mathbf{ 1.4002 }}}$   ${\textcolor{blue}{\mathbf{ 1.3997 }}}$   $\mathbf{ 1.3988 }$
  $ 0.5000 $   $ -0.2252 $   $ -0.2269 $   ${\textcolor{blue}{\mathbf{ 1.3784 }}}$   ${\textcolor{blue}{\mathbf{ 1.3781 }}}$   $\mathbf{ 1.3746 }$
  $ 0.6000 $   $ -0.2569 $   $ -0.2554 $   ${\textcolor{blue}{\mathbf{ 1.3540 }}}$   ${\textcolor{blue}{\mathbf{ 1.3537 }}}$   $\mathbf{ 1.3518 }$
  $ 0.7000 $   $ -0.2957 $   $ -0.2934 $   ${\textcolor{blue}{\mathbf{ 1.3263 }}}$   ${\textcolor{blue}{\mathbf{ 1.3259 }}}$   $\mathbf{ 1.3192 }$
  $ 0.8000 $   $ -0.3359 $   $ -0.3452 $   ${\textcolor{blue}{\mathbf{ 1.2942 }}}$   ${\textcolor{blue}{\mathbf{ 1.2936 }}}$   $\mathbf{ 1.2964 }$
  $ 0.9000 $   $ -0.4137 $   $ -0.4164 $   ${\textcolor{blue}{\mathbf{ 1.2558 }}}$   ${\textcolor{blue}{\mathbf{ 1.2552 }}}$   $\mathbf{ 1.2531 }$

  : Simulated results — $m=5$, $n=5$, $l=10$, $\calX=\calX^{+}$, $\calY=\calY^{+}$, $\beta=3$, $s=1$ \[tab:gensplus1xnorm1psi\]

Figure \[fig:gensmin1xnorm1psi\] and Table \[tab:gensmin1xnorm1psi\] contain the results obtained for $s=-1$.
Figure \[fig:gensmin1xnorm1psi\] again shows the entire range for $t$, whereas Table \[tab:gensmin1xnorm1psi\] focuses on several particular values of $t$. Similarly to the $s=1$ case, both Figure \[fig:gensmin1xnorm1psi\] and Table \[tab:gensmin1xnorm1psi\] show fairly strong agreement among all presented results.

  $ t$         $\frac{d\psi}{dt}$; (\[eq:genanal11\])   $\frac{d\psi}{dt}$; (\[eq:conalt2\])   $\psi$; (\[eq:genanal11\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:conalt2\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:genanal8\])
  ------------ ---------------------------------------- -------------------------------------- ------------------------------------------------ ---------------------------------------------- ---------------------------
  $ 0.1000 $   $ -0.1422 $   $ -0.1471 $   ${\textcolor{blue}{\mathbf{ -0.0204 }}}$   ${\textcolor{blue}{\mathbf{ -0.0206 }}}$   $\mathbf{ -0.0172 }$
  $ 0.2000 $   $ -0.1726 $   $ -0.1735 $   ${\textcolor{blue}{\mathbf{ -0.0368 }}}$   ${\textcolor{blue}{\mathbf{ -0.0370 }}}$   $\mathbf{ -0.0349 }$
  $ 0.3000 $   $ -0.2026 $   $ -0.2010 $   ${\textcolor{blue}{\mathbf{ -0.0561 }}}$   ${\textcolor{blue}{\mathbf{ -0.0561 }}}$   $\mathbf{ -0.0518 }$
  $ 0.4000 $   $ -0.2291 $   $ -0.2296 $   ${\textcolor{blue}{\mathbf{ -0.0780 }}}$   ${\textcolor{blue}{\mathbf{ -0.0780 }}}$   $\mathbf{ -0.0769 }$
  $ 0.5000 $   $ -0.2555 $   $ -0.2594 $   ${\textcolor{blue}{\mathbf{ -0.1026 }}}$   ${\textcolor{blue}{\mathbf{ -0.1026 }}}$   $\mathbf{ -0.1000 }$
  $ 0.6000 $   $ -0.2889 $   $ -0.2923 $   ${\textcolor{blue}{\mathbf{ -0.1303 }}}$   ${\textcolor{blue}{\mathbf{ -0.1304 }}}$   $\mathbf{ -0.1254 }$
  $ 0.7000 $   $ -0.3269 $   $ -0.3331 $   ${\textcolor{blue}{\mathbf{ -0.1620 }}}$   ${\textcolor{blue}{\mathbf{ -0.1619 }}}$   $\mathbf{ -0.1546 }$
  $ 0.8000 $   $ -0.3769 $   $ -0.3818 $   ${\textcolor{blue}{\mathbf{ -0.1977 }}}$   ${\textcolor{blue}{\mathbf{ -0.1979 }}}$   $\mathbf{ -0.1981 }$
  $ 0.9000 $   $ -0.4533 $   $ -0.4503 $   ${\textcolor{blue}{\mathbf{ -0.2398 }}}$   ${\textcolor{blue}{\mathbf{ -0.2400 }}}$   $\mathbf{ -0.2324 }$

  : Simulated results — $m=5$, $n=5$, $l=10$, $\calX=\calX^{+}$, $\calY=\calY^{+}$, $\beta=3$, $s=-1$ \[tab:gensmin1xnorm1psi\]

$\beta\rightarrow \infty$ {#sec:betainf}
-------------------------

In [@Stojnicgscomp16], we showed that the comparison concepts introduced there simplify in the $\beta\rightarrow\infty$ regime to well-known forms of Slepian's max and Gordon's minmax principles. Below we show that the comparison principles introduced above behave similarly and also contain as special cases (obtained again in the $\beta\rightarrow\infty$ regime) both Slepian's max and Gordon's minmax principles (this time, though, the resulting forms are more general). Now, we easily have for the limiting behavior of $\xi(\calX,\calY,\beta,s)$ $$\begin{aligned} \label{eq:betainf1} \lim_{\beta\rightarrow\infty} \xi(\calX,\calY,\beta,s) & = & \lim_{\beta\rightarrow\infty} \mE_{G,u^{(4)}} \frac{1}{|s|\beta\sqrt{n}} \log\lp\sum_{i_1=1}^{l}\lp \sum_{i_2=1}^{l}e^{\beta\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp} \rp^s\rp \nonumber \\ & = & \lim_{\beta\rightarrow\infty} \mE_{G,u^{(4)}} \frac{1}{|s|\beta\sqrt{n}} \log\lp\sum_{i_1=1}^{l}\lp e^{\beta \max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp} \rp^s\rp \nonumber \\ & = & \lim_{\beta\rightarrow\infty} \mE_{G,u^{(4)}} \frac{1}{|s|\beta\sqrt{n}} \log\lp e^{\max_{\x^{(i_1)}\in \calX}s\beta \max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp}\rp \nonumber \\ & = & \mE_{G,u^{(4)}} \frac{\max_{\x^{(i_1)}\in \calX} \lp \mbox{sign}(s) \max_{\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp\rp}{\sqrt{n}}.\end{aligned}$$

### $s>0$ – reestablishing a Slepian's max comparison {#sec:betainfsplus1}

If $s>0$ then (\[eq:betainf1\]) gives $$\begin{aligned} \label{eq:betainfsplus1} \lim_{\beta\rightarrow\infty} \xi(\calX,\calY,\beta,s)=\mE_{G,u^{(4)}}
\frac{\max_{\x^{(i_1)}\in \calX,\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp}{\sqrt{n}}.\end{aligned}$$ We now recall that $\xi(\calX,\calY,\beta,s)=\psi(\calX,\calY,\beta,s,1)$ and utilize the above machinery to find $$\begin{gathered} \label{eq:betainfsplus2} \mE_{G,u^{(4)}} \frac{\max_{\x^{(i_1)}\in \calX,\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp}{\sqrt{n}} = \lim_{\beta\rightarrow\infty} \xi(\calX,\calY,\beta,s)= \lim_{\beta\rightarrow\infty} \psi(\calX,\calY,\beta,s,1) \\ \leq \lim_{\beta\rightarrow\infty} \psi(\calX,\calY,\beta,s,0)= \mE_{\u^{(2)},\h} \frac{\max_{\x^{(i_1)}\in \calX,\y^{(i_2)}\in \calY} \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T\u^{(2)} +\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}{\sqrt{n}}.\end{gathered}$$ Connecting beginning and end in (\[eq:betainfsplus2\]) we obtain a well-known form of the Slepian comparison principle (see, e.g. [@Gordon85; @Stojnicgscomp16; @Slep62]). As stated above, this form is a stronger counterpart of the corresponding result in [@Stojnicgscomp16], and of course only a special case of the much stronger general concept introduced in Theorem \[thm:thm1\]. As in [@Stojnicgscomp16], below we provide a set of numerical results designed to shed a bit more light on the $\beta\rightarrow\infty$ regime. The obtained simulation results are shown in Figure \[fig:genbetainfsplus1xnorm1psi\] and Table \[tab:genbetainfsplus1xnorm1psi\]. We kept all parameters the same as above ($s=1$ is chosen for concreteness; such a choice is also in alignment with the choice made in the simulations shown earlier), with only one change. Now, instead of having $\beta=3$ we have $\beta=10$, which in a way emulates $\beta\rightarrow\infty$. Both Figure \[fig:genbetainfsplus1xnorm1psi\] and Table \[tab:genbetainfsplus1xnorm1psi\] show excellent agreement among all presented results.
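The two sides of the above Slepian-type comparison can also be estimated directly by Monte Carlo, which makes for a handy sanity check. Below is a small illustrative numpy sketch (our own construction, not taken from the cited references), again assuming unit-norm columns in both matrices:

```python
import numpy as np

def slepian_sides(X, Y, trials=20000, rng=None):
    """Monte Carlo estimates of the two sides of the bilinear Slepian-type
    comparison: E max (y^T G x + u4) versus E max (y^T u2 + h^T x),
    both normalized by sqrt(n); unit-norm columns are assumed."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    m, _ = Y.shape
    lhs = rhs = 0.0
    for _ in range(trials):
        G = rng.standard_normal((m, n))
        u4 = rng.standard_normal()
        u2 = rng.standard_normal(m)
        h = rng.standard_normal(n)
        # max over all pairs (x^(i1), y^(i2))
        lhs += (Y.T @ G @ X + u4).max()
        rhs += ((Y.T @ u2)[:, None] + (h @ X)[None, :]).max()
    return lhs / trials / np.sqrt(n), rhs / trials / np.sqrt(n)
```

For sets of the kind used here the first estimate comes out below the second, as (\[eq:betainfsplus2\]) predicts.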
We also note that a fairly small value of $\beta$, namely $\beta=10$, already seems to be a pretty solid approximation of $\beta\rightarrow\infty$. This is especially clear from the right part of Figure \[fig:genbetainfsplus1xnorm1psi\], where one can observe that for $\beta=10$ the resulting curves are much closer to the purple circles (which effectively represent the $\beta\rightarrow\infty$ regime).

  $ t$         $\frac{d\psi}{dt}$; (\[eq:genanal11\])   $\frac{d\psi}{dt}$; (\[eq:conalt2\])   $\psi$; (\[eq:genanal11\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:conalt2\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:genanal8\])   $\lim_{\beta\rightarrow\infty}\psi$; (\[eq:genanal8\])
  ------------ ---------------------------------------- -------------------------------------- ------------------------------------------------ ---------------------------------------------- --------------------------- --------------------------------------------------------
  $ 0.1000 $   $ -0.1002 $   $ -0.0943 $   ${\textcolor{blue}{\mathbf{ 1.2950 }}}$   ${\textcolor{blue}{\mathbf{ 1.2946 }}}$   $\mathbf{ 1.2955 }$   ${\textcolor{mypurple}{\mathbf{ 1.2803 }}}$
  $ 0.2000 $   $ -0.1287 $   $ -0.1264 $   ${\textcolor{blue}{\mathbf{ 1.2836 }}}$   ${\textcolor{blue}{\mathbf{ 1.2832 }}}$   $\mathbf{ 1.2859 }$   ${\textcolor{mypurple}{\mathbf{ 1.2715 }}}$
  $ 0.3000 $   $ -0.1593 $   $ -0.1580 $   ${\textcolor{blue}{\mathbf{ 1.2690 }}}$   ${\textcolor{blue}{\mathbf{ 1.2687 }}}$   $\mathbf{ 1.2659 }$   ${\textcolor{mypurple}{\mathbf{ 1.2519 }}}$
  $ 0.4000 $   $ -0.1847 $   $ -0.1902 $   ${\textcolor{blue}{\mathbf{ 1.2517 }}}$   ${\textcolor{blue}{\mathbf{ 1.2512 }}}$   $\mathbf{ 1.2503 }$   ${\textcolor{mypurple}{\mathbf{ 1.2367 }}}$
  $ 0.5000 $   $ -0.2181 $   $ -0.2182 $   ${\textcolor{blue}{\mathbf{ 1.2310 }}}$   ${\textcolor{blue}{\mathbf{ 1.2306 }}}$   $\mathbf{ 1.2276 }$   ${\textcolor{mypurple}{\mathbf{ 1.2143 }}}$
  $ 0.6000 $   $ -0.2590 $   $ -0.2511 $   ${\textcolor{blue}{\mathbf{ 1.2069 }}}$   ${\textcolor{blue}{\mathbf{ 1.2067 }}}$   $\mathbf{ 1.2054 }$   ${\textcolor{mypurple}{\mathbf{ 1.1924 }}}$
  $ 0.7000 $   $ -0.3061 $   $ -0.3036 $   ${\textcolor{blue}{\mathbf{ 1.1785 }}}$   ${\textcolor{blue}{\mathbf{ 1.1783 }}}$   $\mathbf{ 1.1719 }$   ${\textcolor{mypurple}{\mathbf{ 1.1589 }}}$
  $ 0.8000 $   $ -0.3628 $   $ -0.3778 $   ${\textcolor{blue}{\mathbf{ 1.1444 }}}$   ${\textcolor{blue}{\mathbf{ 1.1440 }}}$   $\mathbf{ 1.1469 }$   ${\textcolor{mypurple}{\mathbf{ 1.1338 }}}$
  $ 0.9000 $   $ -0.4719 $   $ -0.4776 $   ${\textcolor{blue}{\mathbf{ 1.1016 }}}$   ${\textcolor{blue}{\mathbf{ 1.1011 }}}$   $\mathbf{ 1.0997 }$   ${\textcolor{mypurple}{\mathbf{ 1.0865 }}}$

  : Simulated results — $m=5$, $n=5$, $l=10$, $\calX=\calX^{+}$, $\calY=\calY^{+}$, $\beta=10$, $s=1$ \[tab:genbetainfsplus1xnorm1psi\]

### $s<0$ – reestablishing a Gordon's minmax comparison {#sec:betainfsminus1}

For $s<0$, (\[eq:betainf1\]) gives $$\begin{aligned} \label{eq:betainfsminus1} \lim_{\beta\rightarrow\infty} \xi(\calX,\calY,\beta,s) & = & \mE_{G,u^{(4)}} \frac{\max_{\x^{(i_1)}\in \calX}\lp-\max_{\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp\rp}{\sqrt{n}}\nonumber \\ & = & - \mE_{G,u^{(4)}} \frac{\min_{\x^{(i_1)}\in \calX}\max_{\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp}{\sqrt{n}}.\end{aligned}$$ We can now again rely on $\xi(\calX,\calY,\beta,s)=\psi(\calX,\calY,\beta,s,1)$ and the above machinery to obtain $$\begin{gathered} \label{eq:betainfsminus2} - \mE_{G,u^{(4)}} \frac{\min_{\x^{(i_1)}\in \calX}\max_{\y^{(i_2)}\in \calY}\lp(\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp}{\sqrt{n}} = \lim_{\beta\rightarrow\infty} \xi(\calX,\calY,\beta,s)\\= \lim_{\beta\rightarrow\infty} \psi(\calX,\calY,\beta,s,1) \leq \lim_{\beta\rightarrow\infty} \psi(\calX,\calY,\beta,s,0)\\= \mE_{\u^{(2)},\h} \frac{\max_{\x^{(i_1)}\in \calX} \lp -\max_{\y^{(i_2)}\in \calY}\lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T\u^{(2)} +\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp\rp}{\sqrt{n}} \\ = - \mE_{\u^{(2)},\h} \frac{\min_{\x^{(i_1)}\in \calX} \lp \max_{\y^{(i_2)}\in
\calY}\lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T\u^{(2)} +\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp\rp}{\sqrt{n}}.\end{gathered}$$ Connecting beginning and end in (\[eq:betainfsminus2\]) one obtains a form of the well-known Gordon comparison principle [@Gordon85], which is an upgrade of the above-mentioned Slepian comparison principle. As was the case above when we discussed the specialization to Slepian's max principle, (\[eq:betainfsminus2\]) is a stronger counterpart of the corresponding result in [@Stojnicgscomp16], and only a special case of the much stronger concept presented in Theorem \[thm:thm1\]. Figure \[fig:genbetainfsmin1xnorm1psi\] and Table \[tab:genbetainfsmin1xnorm1psi\] show the results obtained through simulations. All parameters are again the same as earlier (this time, though, for concreteness we set $s=-1$). From both Figure \[fig:genbetainfsmin1xnorm1psi\] and Table \[tab:genbetainfsmin1xnorm1psi\] one can again observe solid agreement among all the presented results, with $\beta=10$ being a fairly good approximation of $\beta\rightarrow\infty$.
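As with the max comparison, both sides of the Gordon minmax comparison are easy to estimate directly. A minimal illustrative numpy sketch follows (our own; unit-norm columns assumed):

```python
import numpy as np

def gordon_sides(X, Y, trials=20000, rng=None):
    """Monte Carlo estimates of E min_x max_y (y^T G x + u4) and of
    E min_x max_y (y^T u2 + h^T x), both normalized by sqrt(n);
    unit-norm columns are assumed."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    m, _ = Y.shape
    coupled = decoupled = 0.0
    for _ in range(trials):
        G = rng.standard_normal((m, n))
        u4 = rng.standard_normal()
        u2 = rng.standard_normal(m)
        h = rng.standard_normal(n)
        # axis 0 indexes y^(i2) (inner max); the remaining axis indexes x^(i1) (outer min)
        coupled += (Y.T @ G @ X + u4).max(axis=0).min()
        decoupled += ((Y.T @ u2)[:, None] + (h @ X)[None, :]).max(axis=0).min()
    return coupled / trials / np.sqrt(n), decoupled / trials / np.sqrt(n)
```

Multiplying both estimates by $-1$ reproduces the two outer terms in (\[eq:betainfsminus2\]); the coupled ($G$) side comes out above the decoupled one, consistent with that inequality.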
  $ t$         $\frac{d\psi}{dt}$; (\[eq:genanal11\])   $\frac{d\psi}{dt}$; (\[eq:conalt2\])   $\psi$; (\[eq:genanal11\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:conalt2\]) and (\[eq:co1eq1\])   $\psi$; (\[eq:genanal8\])   $\lim_{\beta\rightarrow\infty}\psi$; (\[eq:genanal8\])
  ------------ ---------------------------------------- -------------------------------------- ------------------------------------------------ ---------------------------------------------- --------------------------- --------------------------------------------------------
  $ 0.1000 $   $ -0.1196 $   $ -0.1069 $   ${\textcolor{blue}{\mathbf{ -0.0135 }}}$   ${\textcolor{blue}{\mathbf{ -0.0138 }}}$   $\mathbf{ -0.0141 }$   ${\textcolor{mypurple}{\mathbf{ -0.0128 }}}$
  $ 0.2000 $   $ -0.1437 $   $ -0.1478 $   ${\textcolor{blue}{\mathbf{ -0.0270 }}}$   ${\textcolor{blue}{\mathbf{ -0.0269 }}}$   $\mathbf{ -0.0261 }$   ${\textcolor{mypurple}{\mathbf{ -0.0241 }}}$
  $ 0.3000 $   $ -0.1839 $   $ -0.1840 $   ${\textcolor{blue}{\mathbf{ -0.0436 }}}$   ${\textcolor{blue}{\mathbf{ -0.0438 }}}$   $\mathbf{ -0.0438 }$   ${\textcolor{mypurple}{\mathbf{ -0.0411 }}}$
  $ 0.4000 $   $ -0.2203 $   $ -0.2194 $   ${\textcolor{blue}{\mathbf{ -0.0643 }}}$   ${\textcolor{blue}{\mathbf{ -0.0644 }}}$   $\mathbf{ -0.0646 }$   ${\textcolor{mypurple}{\mathbf{ -0.0615 }}}$
  $ 0.5000 $   $ -0.2642 $   $ -0.2587 $   ${\textcolor{blue}{\mathbf{ -0.0888 }}}$   ${\textcolor{blue}{\mathbf{ -0.0887 }}}$   $\mathbf{ -0.0861 }$   ${\textcolor{mypurple}{\mathbf{ -0.0826 }}}$
  $ 0.6000 $   $ -0.3036 $   $ -0.3070 $   ${\textcolor{blue}{\mathbf{ -0.1176 }}}$   ${\textcolor{blue}{\mathbf{ -0.1175 }}}$   $\mathbf{ -0.1160 }$   ${\textcolor{mypurple}{\mathbf{ -0.1120 }}}$
  $ 0.7000 $   $ -0.3563 $   $ -0.3662 $   ${\textcolor{blue}{\mathbf{ -0.1514 }}}$   ${\textcolor{blue}{\mathbf{ -0.1514 }}}$   $\mathbf{ -0.1511 }$   ${\textcolor{mypurple}{\mathbf{ -0.1471 }}}$
  $ 0.8000 $   $ -0.4352 $   $ -0.4442 $   ${\textcolor{blue}{\mathbf{ -0.1918 }}}$   ${\textcolor{blue}{\mathbf{ -0.1920 }}}$   $\mathbf{ -0.1905 }$   ${\textcolor{mypurple}{\mathbf{ -0.1867 }}}$
  $ 0.9000 $   $ -0.5560 $   $ -0.5548 $   ${\textcolor{blue}{\mathbf{ -0.2420 }}}$   ${\textcolor{blue}{\mathbf{ -0.2424 }}}$   $\mathbf{ -0.2378 }$   ${\textcolor{mypurple}{\mathbf{ -0.2343 }}}$

  : Simulated results — $m=5$, $n=5$, $l=10$, $\calX=\calX^{+}$, $\calY=\calY^{+}$, $\beta=10$, $s=-1$ \[tab:genbetainfsmin1xnorm1psi\]

A lifting procedure {#sec:lifting}
===================

We start again with sets $\calX$ and $\calY$ and consider the following function $$\begin{aligned} \label{eq:liftgenanal1} f_*(G,u^{(4)},\calX,\calY,\beta,s)= \lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp^{s}\rp^{c_3},\end{aligned}$$ where all quantities are as in Section \[sec:gencon\] and $c_3>0$ is a real parameter. In analogy with Section \[sec:gencon\], we then introduce $$\begin{aligned} \label{eq:liftgenanal2} \xi_*(\calX,\calY,\beta,s) \triangleq \mE_{G,u^{(4)}} f_*(G,u^{(4)},\calX,\calY,\beta,s),\end{aligned}$$ and consider the following interpolating function $\psi_*(\cdot)$ as an object convenient for studying properties of $\xi_*(\calX,\calY,\beta,s)$ $$\begin{gathered} \label{eq:liftgenanal3} \psi_*(\calX,\calY,\beta,s,t) = \mE_{G,u^{(4)},\u^{(2)},\h} \\ \times \lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp \sqrt{t}(\y^{(i_2)})^T G\x^{(i_1)}+\sqrt{1-t}\|\x^{(i_1)}\|_2 (\y^{(i_2)})^T\u^{(2)}+\sqrt{t}\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)} +\sqrt{1-t}\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp} \rp^{s}\rp^{c_3},\end{gathered}$$ where again, all quantities are exactly the same as earlier with the above mentioned addition of $c_3$. In analogy with (\[eq:genanal8\]) (and clearly relying on (\[eq:genanal7\])) we write $$\begin{aligned} \label{eq:liftgenanal8} \psi_*(\calX,\calY,\beta,s,t) & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} Z^{c_3}.\end{aligned}$$ Following further the strategy of Section \[sec:gencon\], below we study the monotonicity of $\psi_*(\calX,\calY,\beta,s,t)$ when viewed as a function of $t$.
As will soon be clear, many of the results obtained in Section \[sec:gencon\] will, with fairly straightforward modifications, be applicable here as well. As usual, we will try to skip all the details that remain the same and instead emphasize those that differ. We start with the following derivative (basically an analogous version of (\[eq:genanal9\])) $$\begin{aligned} \label{eq:liftgenanal9} \frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt} & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{dZ^{c_3}}{dt}\nonumber \\ & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{sc_3}{Z^{1-c_3}} \sum_{i_1=1}^{l} (C^{(i_1)})^{s-1} \nonumber \\ & & \times \sum_{i_2=1}^{l}\beta_{i_1}A^{(i_1,i_2)}\lp \frac{dB^{(i_1,i_2)}}{dt}+\frac{\|\y^{(i_2)}\|_2 u^{(4)}}{2\sqrt{t}}-\frac{\|\y^{(i_2)}\|_2 \u^{(i_1,3)}}{2\sqrt{1-t}}\rp.\end{aligned}$$ Relying on (\[eq:genanal10\]) we further find $$\begin{aligned} \label{eq:liftgenanal11} \frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt} & = & \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{sc_3}{Z^{1-c_3}} \sum_{i_1=1}^{l} (C^{(i_1)})^{s-1} \nonumber \\ & & \times \sum_{i_2=1}^{l}\beta_{i_1}A^{(i_1,i_2)}\lp \sum_{j=1}^{m}\lp \frac{\y_j^{(i_2)}\u_j^{(i_1,1)}}{2\sqrt{t}}-\frac{\y_j^{(i_2)}\u_j^{(2)}}{2\sqrt{1-t}}\rp+\frac{\|\y^{(i_2)}\|_2 u^{(4)}}{2\sqrt{t}}-\frac{\|\y^{(i_2)}\|_2 \u^{(i_1,3)}}{2\sqrt{1-t}}\rp.\nonumber \\\end{aligned}$$ As in Section \[sec:gencon\], each of the terms in the above sum can be handled separately. However, this time the calculations will be done in a much faster fashion, as one can utilize quite a few of the results already obtained earlier.

Computing $\frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt}$ {#sec::liftder}
-----------------------------------------------------

As mentioned above, we will split the computation into several parts.
The key observation that we will employ here (and quite a few more times below) is that all the main calculations from Section \[sec:gencon\] can be repeated, not only conceptually but pretty much literally, with very small modifications. These modifications will be in the powers of $Z$ and the constants that multiply them. Namely, where we used to have $Z$ in Section \[sec:hand1\] we will now have $Z^{1-c_3}$, and where we used to have $-Z^{-2}$ we will now have $(c_3-1)Z^{c_3-2}$. All other adjustments are trivial and one finds $$\begin{gathered} \label{eq:liftgenanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(i_1,1)}\y_j^{(i_2)}}{Z^{1-c_3}} \\ = \mE \lp \frac{\y_j^{(i_2)}}{Z^{1-c_3}}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{t}\rp \rp \\ -(1-c_3) \mE \lp\sum_{p_1=1}^{l} \frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{t}\rp.\end{gathered}$$ Repeating all the calculations from Section \[sec:hand2\] with the above mentioned modifications we also find $$\begin{gathered} \label{eq:liftgenAanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u_j^{(2)}\y_j^{(i_2)}}{Z^{1-c_3}} \\ = \mE \lp\frac{\y_j^{(i_2)}}{Z^{1-c_3}}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\y_j^{(i_2)}\sqrt{1-t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}\rp \rp \\ -(1-c_3) \mE \lp\sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\sqrt{1-t}\rp.\end{gathered}$$ Similarly to what we did above, one can also repeat all the calculations from Section \[sec:hand3\] while accounting for the above mentioned change
of powers and multiplying constants, arriving at the following analogue of (\[eq:genBanal19\]) $$\begin{gathered} \label{eq:liftgenBanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\u^{(i_1,3)}}{Z^{1-c_3}} \\ = \mE \lp\frac{1}{Z^{1-c_3}}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{1-t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}\rp \rp \\ -(1-c_3) \mE \lp\sum_{p_1=1}^{l}\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{1-t}\rp.\end{gathered}$$ Finally, after repeating all the calculations from Section \[sec:hand4\] we have the following analogue of (\[eq:genCanal19\]) $$\begin{gathered} \label{eq:liftgenCanal19} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}u^{(4)}}{Z^{1-c_3}} \\ = \mE \lp\frac{1}{Z^{1-c_3}}\lp(C^{(i_1)})^{s-1}\beta_{i_1}A^{(i_1,i_2)}\|\y^{(i_2)}\|_2\sqrt{t}+(s-1)(C^{(i_1)})^{s-2}\beta_{i_1}\sum_{p_2=1}^{l}A^{(i_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}\rp \rp \\ -(1-c_3) \mE \lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^{2-c_3}} s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\sqrt{t}\rp.\end{gathered}$$ Combining (\[eq:liftgenanal11\]), (\[eq:liftgenanal19\]), (\[eq:liftgenAanal19\]), (\[eq:liftgenBanal19\]), and (\[eq:liftgenCanal19\]) we can also establish the following set of results (basically fairly similar to the corresponding set obtained in Section \[sec:conalt\]) $$\begin{aligned} \label{eq:liftconalt1} \frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt} = \frac{sc_3(1-c_3)}{2} \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} (-S_{1,*}+S_{2,*}+S_{3,*}-S_{4,*})\end{aligned}$$ where $$\begin{aligned} \label{eq:liftconalt1a} S_{1,*} & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1}\sum_{j=1}^{m} \lp\sum_{p_1=1}^{l}
\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\rp\nonumber \\ S_{2,*} & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1}\sum_{j=1}^{m} \lp\sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}\y_j^{(i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\y_j^{(p_2)}\rp\nonumber \\ S_{3,*} & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1} \|\y^{(i_2)}\|_2 \lp\sum_{p_1=1}^{l}\frac{(\x^{(i_1)})^T\x^{(p_1)}}{\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2} \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^{2-c_3}} s (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\rp\nonumber \\ S_{4,*} & = & \sum_{i_1=1}^{l} \sum_{i_2=1}^{l}\beta_{i_1} \|\y^{(i_2)}\|_2 \lp \frac{(C^{(i_1)})^{s-1} A^{(i_1,i_2)}}{Z^{2-c_3}} s \sum_{p_1=1}^{l} (C^{(p_1)})^{s-1}\sum_{p_2=1}^{l}\beta_{p_1}A^{(p_1,p_2)}\|\y^{(p_2)}\|_2\rp.\end{aligned}$$ Repeating (\[eq:conalt1b\]) and (\[eq:conalt1c\]) and combining these steps with (\[eq:liftconalt1a\]) we finally obtain $$\begin{aligned} \label{eq:liftconalt2} \frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt} & = & -\frac{s^2\beta^2c_3(1-c_3)}{2} \nonumber \\ & & \times \mE_{\u^{(i_1,1)},\u^{(2)},\u^{(i_1,3)},u^{(4)}} \sum_{i_1=1}^{l} \sum_{p_1=1}^{l} \frac{(C^{(i_1)})^{s}(C^{(p_1)})^{s}(\|\x^{(i_1)}\|_2\|\x^{(p_1)}\|_2-(\x^{(i_1)})^T\x^{(p_1)})}{Z^{2-c_3}} \nonumber \\ & & \times \lp\sum_{i_2=1}^{l}\sum_{p_2=1}^{l} \frac{A^{(i_1,i_2)}A^{(p_1,p_2)}}{C^{(i_1)}C^{(p_1)}} (\|\y^{(i_2)}\|_2\|\y^{(p_2)}\|_2-(\y^{(i_2)})^T\y^{(p_2)})\rp.\end{aligned}$$ Depending on the value of $c_3$ one can now discuss the sign of $\frac{d\psi_*(\calX,\calY,\beta,s,t)}{dt}$ and whether function $\psi_*(\calX,\calY,\beta,s,t)$ is non-increasing (decreasing) or non-decreasing (increasing) in $t$. The obtained results are summarized in the following theorem and its corollary.
\[thm:liftthm2\] Assume the setup of Theorem \[thm:thm1\]. We then have $$\begin{aligned} \label{eq:liftco1eq1} \psi_*(\calX,\calY,\beta,s,c_3,t)= \psi_*(\calX,\calY,\beta,s,c_3,0)+\int_{0}^{t}\frac{d\psi_*(\calX,\calY,\beta,s,c_3,t)}{dt}dt,\end{aligned}$$ where $\frac{d\psi_*(\calX,\calY,\beta,s,c_3,t)}{dt}$ is given by (\[eq:liftconalt2\]). Follows automatically through the above discussion. \[cor:liftcor1\] Assume the setup of Theorem \[thm:liftthm2\]. 1\) If $0< c_3< 1$ then $\frac{d\psi_*(\calX,\calY,\beta,s,c_3,t)}{dt}<0$ and $\psi_*(\calX,\calY,\beta,s,c_3,t)$ is decreasing in $t$ and one finds the following comparison principle $$\begin{aligned} \label{eq:liftco2aeq1} \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,c_3,0) \geq \lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,s,c_3,t)\geq \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,c_3,1).\end{aligned}$$ 2\) If $c_3> 1$ or $c_3< 0$ then $\frac{d\psi_*(\calX,\calY,\beta,s,c_3,t)}{dt}>0$ and $\psi_*(\calX,\calY,\beta,s,c_3,t)$ is increasing in $t$ and one finds the following comparison principle $$\begin{aligned} \label{eq:liftco2aeq2} \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,c_3,0) \leq \lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,s,c_3,t)\leq \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,c_3,1).\end{aligned}$$ Follows again automatically by the arguments presented above.

$\beta\rightarrow \infty$ {#sec:liftbetainf}
-------------------------

Following in the footsteps of [@Stojnicgscomp16], in this section we discuss in a bit more detail one of the key consequences of the lifting procedure introduced above (as will soon be clear, it connects to some of the comparison principles that we utilized in e.g. [@StojnicLiftStrSec13; @StojnicMoreSophHopBnds10; @StojnicRicBnds13]).
As in [@Stojnicgscomp16], we will assume that $\beta$ is large, say $\beta\rightarrow\infty$, and that the scaling $c_3\leftarrow \frac{c^{(s)}_3}{\beta}$, where $c^{(s)}_3$ is a finite positive real number, is in place as well. Clearly, one then has $c_3(1-c_3)\geq 0$, which implies $\frac{d\psi_*(\calX,\calY,\beta,s,c_3,t)}{dt}\leq 0$. That, on the other hand, also means that function $\psi_*(\calX,\calY,\beta,s,c_3,t)$ is decreasing in $t$. We summarize this adaptation into the following corollary of Theorem \[thm:liftthm2\]. \[cor:liftcor2\] Assume the setup of Theorem \[thm:liftthm2\]. Let $c_3\leftarrow \frac{c^{(s)}_3}{\beta}$, where $c^{(s)}_3$ is a finite positive real number. Then $\psi_*(\calX,\calY,\beta,s,c_3,t)$ is decreasing in $t$ and we have the following comparison principle $$\begin{aligned} \label{eq:liftliftco2eq2} \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,\frac{c^{(s)}_3}{\beta},0) \geq \lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,s,\frac{c^{(s)}_3}{\beta},t)\geq \lim_{\beta\rightarrow\infty}\psi_*(\calX,\calY,\beta,s,\frac{c^{(s)}_3}{\beta},1).\end{aligned}$$ Follows automatically by the above arguments. Paralleling [@Stojnicgscomp16] further, below we also study the following limiting behavior of $\xi_*(\calX,\calY,\beta,s,\frac{c^{(s)}_3}{\beta})$, i.e.
$$\begin{aligned} \label{eq:liftliftbetainf1} \log\lim_{\beta\rightarrow\infty} \xi_*(\calX,\calY,\beta,s,\frac{c^{(s)}_3}{\beta}) & = & \log\lim_{\beta\rightarrow\infty} \mE_{G,u^{(4)}}\lp \sum_{i_1=1}^{l}\lp\sum_{i_2=1}^{l}e^{\beta \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp^{s}\rp^{\frac{c_3^{(s)}}{\beta}}\nonumber \\ & = & \log \mE_{G,u^{(4)}}\lp e^{c^{(s)}_3\max_{\x^{(i_1)}\in\calX} s\max_{\y^{(i_2)}\in\calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp.\nonumber \\\end{aligned}$$ ### $s=1$ – a lifted Slepian’s (fully bilinear) max comparison {#sec:liftliftbetainfsplus1} Choosing $s=1$ in (\[eq:liftliftbetainf1\]) gives $$\begin{aligned} \label{eq:liftliftbetainfsplus1} \log\lim_{\beta\rightarrow\infty} \xi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta}) & = & \log \mE_{G,u^{(4)}}\lp e^{c^{(s)}_3\max_{\x^{(i_1)}\in\calX} \max_{\y^{(i_2)}\in\calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp.\nonumber \\\end{aligned}$$ Now, we recall that $\xi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta})=\psi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta},1)$ and based on the above we also find $$\begin{gathered} \label{eq:liftliftbetainfsplus2} \log \mE_{G,u^{(4)}}\lp e^{c^{(s)}_3\max_{\x^{(i_1)}\in\calX} \max_{\y^{(i_2)}\in\calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp = \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta},1) \\ \leq \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta},t) \leq \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,1,\frac{c^{(s)}_3}{\beta},0) \\ = \log \mE_{\u^{(2)},\h}\lp e^{c^{(s)}_3\lp \max_{\x^{(i_1)}\in\calX}\max_{\y^{(i_2)}\in\calY}\lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T\u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp\rp} \rp.\end{gathered}$$ Taking beginning and end in (\[eq:liftliftbetainfsplus2\]) establishes basically the same 
comparison that we utilized in [@StojnicMoreSophHopBnds10], which is the following Gordon-type upgrade of Slepian’s (so to say, fully bilinear) max principle $$\begin{gathered} \label{eq:liftliftbetainfsplus3} \log \mE_{G,u^{(4)}} e^{c^{(s)}_3\max_{\x^{(i_1)}\in \calX,\y^{(i_2)}\in \calY} \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp} \\ \leq \log \mE_{\u^{(2)},\h} e^{c^{(s)}_3\max_{\x^{(i_1)}\in \calX,\y^{(i_2)}\in \calY} \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+ \|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}.\end{gathered}$$ Similarly to what we observed in [@Stojnicgscomp16], (\[eq:liftliftbetainfsplus1\]) and (\[eq:liftliftbetainfsplus2\]) can be viewed as a lifted Slepian (fully bilinear) max comparison principle. As discussed in [@Stojnicgscomp16] (see also, e.g. [@StojnicMoreSophHopBnds10]) the above lifting procedure is often the only known tool that can significantly improve on the original Slepian’s principle (needless to say, Theorem \[thm:liftthm2\] is a much stronger concept of which the above form is only a special case). We also conducted a set of numerical experiments to complement the theoretical results that we presented above. The numerical results that we obtained through these experiments are shown in Figure \[fig:liftedbetainfsplus1xnorm1psi\] and Table \[tab:liftedbetainfsplus1xnorm1psi\]. We selected all parameters as in Section \[sec:gencon\] with $\beta=10$ as a way to emulate $\beta\rightarrow\infty$ and $c_3=0.1$ (to obtain more reliable results, we this time averaged all random quantities over $8\times 10^4$ experiments). The right part of the figure also shows how the obtained results compare to the same scenario without lifting. To have that comparison make sense, as in [@Stojnicgscomp16], we worked with the adjusted $\psi_*(\cdot)$.
Basically, in Table \[tab:liftedbetainfsplus1xnorm1psi\], the values for $\psi_*(\calX,\calY,\beta,s,c_3,t)$ are given in two forms: 1) the value itself, and 2) the adjusted value $\lp\frac{1}{\beta |s| c_3}\log\lp \psi_*(\calX,\calY,\beta,s,c_3,t)\rp-\frac{\beta|s| c_3}{2}\rp/\sqrt{n}$ (as in [@Stojnicgscomp16], the adjusted value acts as a bridge between $\psi_*(\calX,\calY,\beta,s,c_3,t)$ and $\psi(\calX,\calY,\beta,s,t)$). As can be seen from both Figure \[fig:liftedbetainfsplus1xnorm1psi\] and Table \[tab:liftedbetainfsplus1xnorm1psi\], there is solid agreement among all the presented results. Moreover, $\beta=10$ seems to be a solid approximation of $\beta\rightarrow\infty$ (the values for $\lim_{\beta\rightarrow\infty}\psi_*$ were obtained with $c_3^{(s)}=c_3\beta$ so as to ensure a fair comparison). The so-called flattening effect, discussed in [@Stojnicgscomp16], appears as a consequence of the lifting procedure and tightens the corresponding comparisons from Section \[sec:gencon\].
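The endpoint comparison (\[eq:liftliftbetainfsplus3\]) can also be checked directly, independently of the setup of Section \[sec:gencon\]. The following minimal Monte Carlo sketch in Python (our own illustration; unit-norm vectors so that $\|\x^{(i_1)}\|_2=\|\y^{(i_2)}\|_2=1$, with arbitrarily chosen $c_3^{(s)}=0.3$) estimates the logarithms of both sides:

```python
import math, random

# Monte Carlo sanity check of the lifted Slepian-type max comparison
# (eq:liftliftbetainfsplus3).  Sets X, Y are small arbitrary collections
# of unit-norm vectors; all parameter values are illustrative assumptions.
random.seed(0)
n, l, c = 4, 3, 0.3   # dimension, |X| = |Y|, and c_3^{(s)}
N = 20000

def unit_vec():
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(t * t for t in v))
    return [t / r for t in v]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

X = [unit_vec() for _ in range(l)]
Y = [unit_vec() for _ in range(l)]

lhs_acc = rhs_acc = 0.0
for _ in range(N):
    G = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    u4 = random.gauss(0.0, 1.0)
    u2 = [random.gauss(0.0, 1.0) for _ in range(n)]
    h = [random.gauss(0.0, 1.0) for _ in range(n)]
    # max over all pairs (x, y) of the two Gaussian processes
    m1 = max(dot(y, [dot(row, x) for row in G]) + u4 for x in X for y in Y)
    m2 = max(dot(y, u2) + dot(h, x) for x in X for y in Y)
    lhs_acc += math.exp(c * m1)
    rhs_acc += math.exp(c * m2)

lhs = math.log(lhs_acc / N)   # estimate of the left-hand side
rhs = math.log(rhs_acc / N)   # estimate of the right-hand side; lhs <= rhs expected
```

Since the two processes have matching variances and the covariance surplus $(1-\x^{(i_1)T}\x^{(j_1)})(1-\y^{(i_2)T}\y^{(j_2)})\geq 0$ for unit-norm vectors, Slepian-type domination makes the observed gap strictly positive for generic sets.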
\[tab:liftedbetainfsplus1xnorm1psi\] ### $s=-1$ – a lifted Gordon’s fully bilinear minmax comparison {#sec:liftbetainfsminus1} Choosing $s=-1$ in (\[eq:liftliftbetainf1\]) gives $$\begin{aligned} \label{eq:liftliftbetainfsmin1} \log\lim_{\beta\rightarrow\infty} \xi_*(\calX,\calY,\beta,-1,\frac{c^{(s)}_3}{\beta}) & = & \log \mE_{G,u^{(4)}}\lp e^{c^{(s)}_3\max_{\x^{(i_1)}\in\calX} -\min_{\y^{(i_2)}\in\calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp.\nonumber \\\end{aligned}$$ Analogously to (\[eq:liftliftbetainfsplus2\]) we now have $$\begin{gathered} \label{eq:liftliftbetainfsmin1a} \log \mE_{G,u^{(4)}}\lp e^{c^{(s)}_3\max_{\x^{(i_1)}\in\calX}- \max_{\y^{(i_2)}\in\calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2 u^{(4)}\rp} \rp = \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,-1,\frac{c^{(s)}_3}{\beta},1) \\ \leq \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,-1,\frac{c^{(s)}_3}{\beta},t) \leq \log\lim_{\beta\rightarrow\infty} \psi_*(\calX,\calY,\beta,-1,\frac{c^{(s)}_3}{\beta},0) \\ = \log \mE_{\u^{(2)},\h}\lp e^{c^{(s)}_3\lp \max_{\x^{(i_1)}\in\calX}-\max_{\y^{(i_2)}\in\calY}\lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T\u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp\rp} \rp.\end{gathered}$$ Taking the beginning and end of (\[eq:liftliftbetainfsmin1a\]) establishes again exactly the same inequality as in the comparison principle we utilized in [@StojnicMoreSophHopBnds10] (as well as in e.g. [@StojnicLiftStrSec13; @StojnicRicBnds13]).
Namely, a Gordon’s minmax principle was the key mechanism that we relied on in [@StojnicMoreSophHopBnds10] to obtain $$\begin{gathered} \label{eq:liftliftbetainfsmin2} \log \mE_{G,u^{(4)}} e^{c^{(s)}_3\max_{\x^{(i_1)}\in \calX}\min_{\y^{(i_2)}\in \calY} \lp (\y^{(i_2)})^T G\x^{(i_1)}+\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}\rp} \\ \leq \log \mE_{\u^{(2)},\h} e^{c^{(s)}_3\max_{\x^{(i_1)}\in \calX}\min_{\y^{(i_2)}\in \calY} \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+ \|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}.\end{gathered}$$ Following the reasoning discussed above, one can think of (\[eq:liftliftbetainfsmin1\]) and (\[eq:liftliftbetainfsmin2\]) as being a lifted Gordon’s minmax comparison principle (more on how useful this lifting strategy turns out to be can be found in, e.g. [@StojnicMoreSophHopBnds10; @StojnicLiftStrSec13; @StojnicRicBnds13]). As earlier, we emphasize that this form is only a special case of a much stronger concept introduced in Theorem \[thm:liftthm2\]. Following what we observed in [@Stojnicgscomp16], when $\|\x^{(i_1)}\|_2=1,1\leq i_1\leq l$, and $\|\y^{(i_2)}\|_2=1,1\leq i_2\leq l$, we have the following rather elegant consequence of the above (basically for any $\beta$ and $s=1$) $$\begin{gathered} \label{eq:liftliftbetainfsmin3} \frac{(c_3^{(s)})^2}{2}+c_3^{(s)}\mE_{G} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp = \frac{(c_3^{(s)})^2}{2}+ \mE_{G} \log e^{c_3^{(s)} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp} \\ \leq \log \mE_{G,u^{(4)}} e^{c_3^{(s)} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}+u^{(4)}\rp} \leq \log \mE_{G,u^{(4)}}\lp\sum_{i_1=1}^{l} \lp\sum_{i_2=1}^{l} e^{\beta \lp (\y^{(i_2)})^T G\x^{(i_1)}+u^{(4)}\rp}\rp^s \rp^{c_3} \\ = \log \mE_{G,u^{(4)}} \psi_*(\calX,\calY,\beta,s,c_3,1) \leq \log \mE_{G,u^{(4)},\u^{(2)},\h} \psi_*(\calX,\calY,\beta,s,c_3,t) \\ \leq \log \mE_{G,u^{(4)},\u^{(2)},\h} 
\psi_*(\calX,\calY,\beta,s,c_3,0) = \log \mE_{G,u^{(4)},\u^{(2)},\h} \lp\sum_{i_1=1}^{l} \lp\sum_{i_2=1}^{l} e^{\beta \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}\rp^s \rp^{c_3}.\\\end{gathered}$$ From (\[eq:liftliftbetainfsmin3\]) we find $$\begin{aligned} \label{eq:liftliftbetainfsmin3a} \mE_{G} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp & \leq & \frac{1}{c_3^{(s)}} \log \mE_{G,u^{(4)},\u^{(2)},\h} \psi_*(\calX,\calY,\beta,s,c_3,t)-\frac{c_3^{(s)}}{2} \nonumber \\ & = & \frac{1}{\beta c_3} \log \mE_{G,u^{(4)},\u^{(2)},\h} \psi_*(\calX,\calY,\beta,s,c_3,t)-\frac{\beta c_3}{2}.\end{aligned}$$ For $t=0$, (\[eq:liftliftbetainfsmin3\]) and (\[eq:liftliftbetainfsmin3a\]) give $$\begin{gathered} \label{eq:liftliftbetainfsmin4} \mE_{G} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp \leq \frac{1}{c_3^{(s)}} \log \mE_{G,u^{(4)},\u^{(2)},\h} \psi_*(\calX,\calY,\beta,s,c_3,0)-\frac{c_3^{(s)}}{2} \\ = \frac{1}{c_3^{(s)}} \log \mE_{G,u^{(4)},\u^{(2)},\h} \lp\sum_{i_1=1}^{l} \lp\sum_{i_2=1}^{l} e^{\beta \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}\rp^s \rp^{c_3}-\frac{c_3^{(s)}}{2} \\ = \frac{1}{\beta c_3} \log \mE_{G,u^{(4)},\u^{(2)},\h} \lp\sum_{i_1=1}^{l} \lp\sum_{i_2=1}^{l} e^{\beta \lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}\rp^s \rp^{c_3}-\frac{\beta c_3}{2}.\end{gathered}$$ Finally, for $\beta\rightarrow\infty$ (and $c_3=\frac{c_3^{(s)}}{\beta}$) we have $$\begin{gathered} \label{eq:liftliftbetainfsmin5} \mE_{G} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp \\ \leq \frac{1}{c_3^{(s)}} \log \mE_{\u^{(2)},\h} \lp e^{c_3^{(s)}\max_{\x^{(i_1)}\in\calX} s \max_{\y^{(i_2)}\in\calY}\lp \|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)}+\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}\rp}\rp-\frac{c_3^{(s)}}{2}.\end{gathered}$$ Connecting the first inequality in
(\[eq:liftliftbetainfsmin3\]) and (\[eq:liftliftbetainfsmin2\]) ensures that (\[eq:liftliftbetainfsmin5\]) in particular remains true even when $s=-1$. Of course, (\[eq:liftliftbetainfsmin5\]) is basically one of the key features of the mechanisms we introduced and utilized in e.g. [@StojnicMoreSophHopBnds10; @StojnicLiftStrSec13; @StojnicRicBnds13]. One can also take the $\beta\rightarrow\infty$ limit for any $t$ to obtain, for either sign of $s$, the following somewhat stronger (though probably often less useful) bound $$\begin{gathered} \label{eq:liftliftbetainfsmin6} \mE_{G} \max_{\x^{(i_1)}\in \calX} s\max_{\y^{(i_2)}\in \calY}\lp (\y^{(i_2)})^T G\x^{(i_1)}\rp \\ \leq \frac{1}{c_3^{(s)}} \log \mE_{G,u^{(4)},\u^{(2)},\h} \lp e^{c_3^{(s)}\max_{\x^{(i_1)}\in\calX} s \max_{\y^{(i_2)}\in\calY}f_{1,h}(\x^{(i_1)},\y^{(i_2)},G,u^{(4)},\u^{(2)},\h,t)}\rp-\frac{c_3^{(s)}}{2},\end{gathered}$$ where $$\begin{gathered} \label{eq:liftliftbetainfsmin7} f_{1,h}(\x^{(i_1)},\y^{(i_2)},G,u^{(4)},\u^{(2)},\h,t) = \sqrt{t}(\y^{(i_2)})^TG\x^{(i_1)}+ \sqrt{1-t}\|\x^{(i_1)}\|_2(\y^{(i_2)})^T \u^{(2)} \\+\sqrt{t}\|\x^{(i_1)}\|_2\|\y^{(i_2)}\|_2u^{(4)}+\sqrt{1-t}\|\y^{(i_2)}\|_2\h^T\x^{(i_1)}.\end{gathered}$$ Figure \[fig:liftedbetainfsmin1xnorm1psi\] and Table \[tab:liftedbetainfsmin1xnorm1psi\] contain the results that we obtained through the numerical simulations. All parameters are again the same as earlier, including $\beta=10$, which again in a way emulates $\beta\rightarrow\infty$, and $c_3=0.1$ (everything is averaged over $5\times 10^4$ experiments). We again observe from both Figure \[fig:liftedbetainfsmin1xnorm1psi\] and Table \[tab:liftedbetainfsmin1xnorm1psi\] that there is solid agreement among all the presented results (with $\beta=10$ again being a good approximation of $\beta\rightarrow\infty$). As earlier, the right part of the figure again shows the appearance of the flattening effect, which is one of the key consequences of the lifting procedure.
Clearly, this then tightens the corresponding comparisons from Section \[sec:gencon\]. We should also add that $c_3=0.1$ is not necessarily the value at which the flattening effect is at its full power (both here and in the earlier discussion of the lifting of Slepian’s max principle). However, we selected a value that is reasonably close to the one that would tighten the corresponding comparisons from Section \[sec:gencon\] the most. \[tab:liftedbetainfsmin1xnorm1psi\] Conclusion {#sec:liftconc} ========== A collection of very powerful statistical comparison results is presented. We first introduced a general comparison concept that we call fully bilinear. Then we showed how such a concept can be upgraded through a lifting procedure. We then complemented all our theoretical findings with an extensive set of numerical results. These were obtained through simulations and are observed to be in excellent agreement with the theoretical predictions. Moreover, we showed that both the general and the lifted strategies contain as special cases the well-known Slepian max and Gordon minmax comparison principles. Since many of the results that we created in various fields of mathematics in recent years utilize these well-known principles as starting points, the results presented here make all of them substantially more general and fully self-contained. The mechanisms presented here appear to be a very powerful, self-sustained tool that can be used for various extensions. Typically these extensions require a few rather routine modifications of the main concepts presented here and in a couple of our earlier works. For the extensions that we find to be of particular interest we will present the needed modifications, as well as the final results that one can obtain through them, in a few separate papers. [^1]: e-mail: [flatoyer@gmail.com]{}
--- author: - Lorenz Bartosch and Peter Kopietz date: 'March 15, 2000' title: Exactly solvable toy model for the pseudogap state --- Introduction {#sec:intro} ============ The physical origin of the pseudogap behavior observed in the normal state of the high-temperature cuprates is still controversial. Several mechanisms have been proposed. According to Schmalian et al. [@Schmalian98], the normal state of the underdoped cuprates can be modeled by a nearly antiferromagnetic Fermi liquid, and the experimentally observed pseudogap behavior is closely related to strong antiferromagnetic spin fluctuations. An alternative explanation, which has been advanced by Emery and Kivelson [@Emery95], relates the pseudogap behavior to precursor superconducting fluctuations. In this scenario thermal fluctuations of the phase of the superconducting order parameter are responsible for the destruction of superconductivity above the transition temperature $T_c$. However, in a wide range of temperatures $T > T_c$ the local amplitude of the superconducting gap is finite. In this paper we shall propose a simple exactly solvable phenomenological model which describes the destruction of phase coherence due to phase and amplitude fluctuations of the superconducting order parameter in the pseudogap state.
To study superconducting fluctuations in a normal metal one can start with the Gorkov equation for the $2 \times 2$ matrix Green’s function for electrons with energy dispersion $\epsilon ( {\bf{k}} )$ that are coupled to a space-dependent complex pairing field $\Delta ( {\bf{r}} )$ [@Abrikosov63], $$[ \omega - \hat{H}_{\bf{r}} ] \, {\cal{G}}^{(d=3)} ( {\bf{r}}, {\bf{r}}^{\prime}, \omega ) = \delta ( {\bf{r}} - {\bf{r}}^{\prime} ) \sigma_0 \label{eq:Gorkov} \; ,$$ $$\hat{H}_{\bf{r}} = \left( \begin{array}{cc} \epsilon ( - i \nabla_{\bf{r}} ) - \mu & \Delta ( {\bf{r}} ) \\ \Delta^{\ast} ( {\bf{r}} ) & \epsilon ( i \nabla_{\bf{r}} ) - \mu \end{array} \right) \label{eq:HamiltonianGorkov} \; .$$ Here, $\sigma_0$ is the $2 \times 2$ unit matrix and $\mu $ is the chemical potential. In the absence of true superconducting long-range order the pairing field $\Delta ({\bf{r}})$ can be considered as a random variable with zero average and correlations that fall off exponentially with distance, $$\langle \Delta ( {\bf{r}} ) \rangle = 0 \; , \label{eq:deltaavG}$$ $$\begin{aligned} \langle \Delta ( {\bf{r}} ) \Delta^{\ast} ( {\bf{r}}^{\prime} ) \rangle & \equiv & \frac{ \int {\cal{D}} \{ \Delta \} e^{- S \{ \Delta \} } \Delta ( {\bf{r}} ) \Delta^{\ast} ( {\bf{r}}^{\prime} ) }{ \int {\cal{D}} \{\Delta \} e^{- S \{ \Delta \} } } \nonumber \\ & = & \Delta_s^2 e^{ - | {\bf{r}} - {\bf{r}}^{\prime} | / \xi } \; . \label{eq:deltacovG} \end{aligned}$$ Here, $S \{ \Delta \}$ is the Ginzburg-Landau functional of the order parameter field, $\xi$ is the correlation length, and the energy scale $\Delta_s$ characterizes the strength of the correlations. To simplify the algebra and to make contact with other theoretical work on pseudogap physics, we shall focus in this work on the semiclassical limit of the Gorkov equation, which is related to the so-called Andreev equation [@Andreev64].
In the weak coupling limit, where $| \Delta ({\bf{r}})|$ is small compared with the chemical potential, we may linearize the energy dispersion in Eq. (\[eq:Gorkov\]) for wave-vectors ${\bf{k}}$ close to the Fermi surface, provided we are only interested in long-wavelength, low-energy properties of the system. In the semiclassical limit it is useful to decompose the position vector as ${\bf{r}} = x {\bf{n}} + {\bf{r}}_{\bot}$ where ${\bf{n}}$ is a unit vector in the direction of the momentum of the electron, and ${\bf{r}}_{\bot}$ is orthogonal to ${\bf{n}}$. Writing $\partial_x = {\bf{n}} \cdot \nabla_{\bf{r}}$, Eqs. (\[eq:Gorkov\]) and (\[eq:HamiltonianGorkov\]) can be replaced by an effective one-dimensional problem [@Andreev64] $$[ \omega - \hat{H}_x ] \, {\cal{G}} ( x, x^{\prime}, \omega ) = \delta ( x -x^{\prime} ) \sigma_0 \label{eq:Andreev} \; ,$$ $$\hat{H}_x = \left( \begin{array}{cc} - i v_F \partial_x & \Delta ( x ) \\ \Delta^{\ast} ( x ) & i v_F \partial_x \end{array} \right) \label{eq:Hamiltonian} \; .$$ We shall refer to Eq. (\[eq:Hamiltonian\]) as the Hamiltonian of the fluctuating gap model (FGM). All quantities depend now parametrically on ${\bf{r}}_{\bot}$ and ${\bf{n}}$. Physical observables should be averaged over all directions of ${\bf{n}}$. In this paper we shall only consider the effective one-dimensional problem defined by Eqs. (\[eq:Andreev\]) and (\[eq:Hamiltonian\]). We require that the first and the second moments of the fluctuating gap $\Delta (x )$ are given by $$\langle \Delta ( x ) \rangle = 0 \; , \label{eq:deltaav}$$ $$\langle \Delta ( x ) \Delta^{\ast} ( x' ) \rangle = \Delta_s^2 e^{ - | x - x^{\prime} | / \xi } \; . \label{eq:deltacov}$$ In the following, we shall construct a special non-Gaussian probability distribution of $\Delta ( x )$ satisfying Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]) for which Eq. (\[eq:Andreev\]) can be solved exactly. Moreover, as will be briefly discussed in Sec. 
\[sec:conclusions\], it is straightforward to generalize our model to dimensions $d >1$ and to arbitrary energy dispersions $\epsilon ( {\bf{k}})$, although the calculation of physical quantities becomes more tedious. Apart from its relevance in the semiclassical theory of superconductivity, the problem defined by Eqs. (\[eq:Andreev\]) to (\[eq:deltacov\]) describes also the low-energy physics in quasi-one-dimensional Peierls and spin-Peierls systems [@Lee73; @Bunder99]. Lee, Rice and Anderson [@Lee73] used this model to study fluctuation effects close to the Peierls transition. In this case $\Delta ( x )$ can be identified with the fluctuating Peierls order parameter, and the two diagonal elements in our Hamiltonian (\[eq:Hamiltonian\]) represent the kinetic energy of the electrons in the vicinity of the two Fermi points $\pm k_F$. Physical quantities should again be averaged over the probability distribution of $\Delta (x )$, which can be obtained from the Ginzburg-Landau expansion [@Lee73]. Within the Gaussian approximation, the truncated Ginzburg-Landau functional in the disordered phase is of the form $$S \{ \Delta \} = \int \frac{d q}{ 2 \pi} \, \frac{ 1 + q^2 \xi^2 }{ 2 \Delta_s^2 \xi} \, \Delta_q^{\ast} \Delta_q \; , \label{eq:SGL}$$ where $$\Delta_q = \int dx \, e^{- i q x } \Delta (x ) \; . \label{eq:deltaqdef}$$ One easily verifies that Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]) are indeed satisfied. Note that for commensurate Peierls chains the order parameter field can be chosen real, while it is complex for incommensurate chains. In this work we shall focus on the incommensurate case, where zero-energy states and the associated Dyson singularities are absent [@Bartosch99a; @Bartosch99b]. Lee, Rice and Anderson treated the effect of the order parameter fluctuations on the average electronic density of states (DOS) $\langle \rho ( \omega ) \rangle$ within the Born approximation. 
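Since the Gaussian weight (\[eq:SGL\]) makes $\Delta(x)$ a stationary Gaussian process with exponential covariance, i.e. a complex Ornstein-Uhlenbeck process, realizations with precisely the moments (\[eq:deltaav\]) and (\[eq:deltacov\]) can be generated by a simple exact recursion on a grid. A minimal Python sketch (our own illustration; the grid spacing and parameter values are arbitrary):

```python
import math, random

# Sample the Gaussian field Delta(x) implied by the quadratic action
# (eq:SGL) as a stationary complex Ornstein-Uhlenbeck process:
#   Delta_{k+1} = a Delta_k + noise,   a = exp(-dx / xi),
# with the innovation variance chosen so that <|Delta|^2> = Delta_s^2.
random.seed(2)
xi, delta_s = 1.0, 1.0      # correlation length and gap scale (arbitrary)
dx, N = 0.1, 200000         # grid spacing and number of grid points

a = math.exp(-dx / xi)
s0 = delta_s / math.sqrt(2.0)                   # stationary per-component std
si = delta_s * math.sqrt((1.0 - a * a) / 2.0)   # innovation per-component std

d = complex(random.gauss(0.0, s0), random.gauss(0.0, s0))
field = [d]
for _ in range(N - 1):
    d = a * d + complex(random.gauss(0.0, si), random.gauss(0.0, si))
    field.append(d)

lag = 10                    # separation |x - x'| = lag * dx = 1.0
cov = sum(field[k] * field[k + lag].conjugate()
          for k in range(N - lag)) / (N - lag)
mean = sum(field, 0j) / N
# cov.real should be close to Delta_s^2 exp(-1.0) and mean close to 0.
```

The exact one-step update preserves stationarity, so the empirical covariance at separation $|x-x'|=1$ reproduces $\Delta_s^2 e^{-1/\xi}$ up to Monte Carlo noise.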
Within this approximation one finds that, in the regime where the dimensionless parameter $$\bar{\gamma} \equiv \frac{v_F}{2 \Delta_s \xi} \label{eq:gammabardef}$$ is small compared with unity, the DOS develops a pseudogap for $ | \omega | { \raisebox{-0.5ex}{$\; \stackrel{<}{\sim} \;$}} \Delta_s$, with a minimum given by [@Chandra89] $$\frac{\langle \rho ( 0 ) \rangle^{\rm pert}}{ \rho_0} = \frac{ \bar{\gamma}}{ \sqrt{1 + \bar{\gamma}^2}} \; . \label{eq:rhoBorn}$$ Here, $$\rho_0 = \frac{1}{ \pi v_F }$$ is the DOS for $\Delta (x) = 0$, which is a constant due to the linearization of the energy dispersion. Note that Eq. (\[eq:rhoBorn\]) predicts for $\bar{\gamma} \ll 1$ to leading order $$\frac{\langle \rho ( 0 ) \rangle^{\rm pert}}{ \rho_0} \sim \bar{\gamma} \propto \xi^{-1} \; , \label{eq:rhoBornsmall}$$ which disagrees with a non-perturbative result by Sadovskii [@Sadovskii79], who found for the model defined by Eqs. (\[eq:Andreev\]) to (\[eq:deltacov\]) for a Gaussian distribution of $\Delta (x )$ $$\frac{ \langle \rho ( 0 ) \rangle^{\rm Sadovskii} }{ \rho_0} \approx 0.541 \times [ {2 \bar{\gamma}} ]^{1/2} \propto \xi^{-1/2} \; . \label{eq:rhoSadovskii}$$ However, the algorithm constructed by Sadovskii [@Sadovskii79] is not exact [@Tchernyshyov99; @Bartosch99a], so that it is not clear whether Eq. (\[eq:rhoSadovskii\]) is correct or not. To clarify this point, we have recently developed an exact numerical algorithm for calculating the DOS of the FGM [@Bartosch99b]. For a Gaussian distribution of $\Delta (x)$ with zero average and covariance given by Eq. 
(\[eq:deltacov\]) the result is $$\frac{ \langle \rho ( 0 ) \rangle^{\rm Gauss} }{ \rho_0} \approx a [ 2 \bar{\gamma} ]^{b} \propto \xi^{-b} \; , \label{eq:rhoGauss}$$ where $$a = 0.6397 \pm 0.0066 \; \; , \; \; b = 0.6397 \pm 0.0024 \label{eq:bdef} \; .$$ Hence, for Gaussian disorder with a finite correlation length, neither perturbation theory nor Sadovskii’s algorithm gives the correct $\xi$-dependence of the average DOS at the Fermi energy. Another attempt to investigate the discrepancy between Eqs. (\[eq:rhoBorn\]) and (\[eq:rhoSadovskii\]) numerically was recently made by Millis and Monien [@Millis99]. They found for the exponent $b$ in Eq. (\[eq:rhoGauss\]) a value between $2/3$ and $1$, which is outside our error bars in Eq. (\[eq:bdef\]). Note, however, that Millis and Monien studied a lattice regularization of the continuum model (\[eq:Hamiltonian\]), and no attempt was made to carefully relate the bare parameters that appear in the lattice and the continuum models. In this work we shall show that the exponent characterizing the dependence of the DOS at the Fermi energy on $\xi$ is non-universal in the sense that it depends on the precise form of the probability distribution of the fluctuating gap. In particular, the non-Gaussian terms in the Ginzburg-Landau functional can change the numerical value of this exponent, so that the behavior given in Eqs. (\[eq:rhoGauss\]) and (\[eq:bdef\]) can only be expected to be correct for Gaussian disorder. Finally, it should be mentioned that a generalization of the model defined in Eqs. (\[eq:Andreev\]) to (\[eq:deltacov\]) has been used in Ref. [@Schmalian98] to explain the pseudogap behavior in the cuprates within antiferromagnetic Fermi liquid theory. Then the scalar field $\Delta (x )$ should be replaced by a matrix field $\sum_i {{S}}_i ( x ) {\sigma}_i $, where ${\sigma}_i $ are the Pauli matrices, and the fields ${{S}}_i ( x)$ represent the components of the antiferromagnetic spin density field.
In fact, the recent interest in the non-perturbative approach invented many years ago by Sadovskii [@Sadovskii79] is motivated by its possible relevance to the cuprate superconductors. Exact Green’s function of the fluctuating gap model for $\Delta (x ) = A e^{ i Q x}$ ==================================================================================== In this section we shall solve Eq. (\[eq:Andreev\]) exactly for a special form of the probability distribution of $\Delta ( x )$ which is constructed such that its covariance is given by Eq. (\[eq:deltacov\]). To begin with, let us perform the following gauge transformation [@Brazovskii76], $${\cal{G}} ( x , x^{\prime} , \omega ) = e^{\frac{i}{2} \alpha ( x ) \sigma_3 } \tilde{\cal{G}} ( x , x^{\prime} , \omega ) e^{- \frac{i}{2} \alpha ( x^{\prime} ) \sigma_3 } \; , \label{eq:gaugetrafo}$$ where the gauge function $\alpha ( x )$ will be specified shortly. From Eq. (\[eq:Andreev\]) we find that the transformed Green’s function $ \tilde{\cal{G}} ( x , x^{\prime} , \omega )$ satisfies $$\begin{aligned} \Big[ \omega - \frac{v_F}{2} \frac{ d \alpha ( x )}{dx} + i v_F \partial_x \sigma_3 - \Delta (x ) e^{- i \alpha ( x ) } \sigma_{+} \Big. % \right. \nonumber \\ & & \hspace{-67mm} \Big. % \left. {} - \Delta^{\ast} ( x ) e^{ i \alpha ( x ) } \sigma_{-} \Big] \tilde{{\cal{G}}} ( x , x^{\prime} , \omega ) = \delta ( x - x^{\prime} ) \sigma_0 \; . \label{eq:G1def} \end{aligned}$$ Suppose now that $\Delta ( x )$ is of the form $$\Delta ( x ) = A e^{ i Q x } \; , \label{eq:deltasimple}$$ where $A$ and $Q$ are both random but independent of $x$. Then the $x$-dependence of $\Delta (x )$ in Eq. (\[eq:G1def\]) can be removed by choosing $\alpha ( x ) = Q x$. Moreover, with this choice the second term on the left-hand side of Eq. 
(\[eq:G1def\]) reduces to a constant $$\frac{v_F}{2} \frac{ d \alpha (x)}{dx} = \frac{v_F Q}{2} \equiv \eta \label{eq:etadef} \; ,$$ so that $$\begin{aligned} \left[ \omega - \eta + i v_F \partial_x \sigma_3 - A \sigma_{+} - A^{\ast} \sigma_{-} \right] \tilde{\cal{G}} ( x , x^{\prime} , \omega ) & & \nonumber \\ & & \hspace{-20mm} = \delta ( x - x^{\prime} ) \sigma_0 \; . \label{eq:G1def2} \end{aligned}$$ Thus, a phase of the order parameter varying linearly in space can be absorbed by a finite shift of the energy. Eq. (\[eq:G1def2\]) is translationally invariant and is easily solved by a Fourier transformation, $$\tilde{{\cal{G}}} ( x , x^{\prime} , \omega ) = \int \frac{ d q}{2 \pi} e^{i q ( x - x^{\prime} ) } \tilde{\cal{G}} ( q , \omega ) \; , \label{eq:G1FT}$$ $$\begin{aligned} \tilde{\cal{G}} ( q , \omega ) & = & \frac{1}{ ( \omega - \eta )^2 - (v_F q )^2 - | A |^2 } \nonumber \\ & & \times \left( \begin{array}{cc} \omega - \eta + v_F q & A \\ A^{\ast} & \omega - \eta - v_F q \end{array} \right) \; . \label{eq:G1res} \end{aligned}$$ Combining Eqs.
(\[eq:gaugetrafo\]), (\[eq:G1FT\]) and (\[eq:G1res\]) and defining $${\cal{G}} ( q , q^{\prime} , \omega ) = \int dx \int d x^{\prime} e^{- i ( q x - q^{\prime} x^{\prime} ) } {\cal{G}} ( x , x^{\prime} , \omega ) \; , \label{eq:Gft}$$ we finally obtain $$\begin{aligned} {\cal{G}} ( q , q^{\prime} , \omega ) & = & \left( \begin{array}{cc} {\displaystyle \frac{ 2 \pi \delta ( q - q^{\prime} ) [ \omega - 2 \eta + v_F q ]}{ [ \omega -2 \eta + v_F q ][ \omega - v_F q ] - | A |^2 } } & \hspace{2mm} {\displaystyle \frac{ 2 \pi \delta ( q - q^{\prime} - Q ) A}{ [ \omega - 2 \eta + v_F q ][ \omega - v_F q ] - | A |^2} } \\ {\displaystyle \rule [0mm]{0mm}{8mm} % lift,width,hight \frac{ 2 \pi \delta ( q - q^{\prime} + Q ) A^{\ast}}{ [ \omega - 2 \eta - v_F q ][ \omega + v_F q ] - | A |^2} } & \hspace{2mm} {\displaystyle \frac{ 2 \pi \delta ( q - q^{\prime} ) [ \omega -2 \eta - v_F q ] }{ [ \omega -2 \eta - v_F q ][ \omega + v_F q ] - | A |^2} } \end{array} \right) \; . \nonumber \\ & & \label{eq:Gqqres} \end{aligned}$$ The crucial observation is now that, in spite of the simple form (\[eq:deltasimple\]) of $\Delta ( x )$, it is still possible to satisfy Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]) if $A$ and $Q$ are interpreted as random variables. To obtain the exponential decay of the covariance we require that the probability distribution of the random momentum $Q$ is a Lorentzian, $${\cal{P}}_{Q} = \frac{\xi }{\pi } \frac{ 1}{ ( Q \xi )^{2} + 1 } \label{eq:PQdef} \; ,$$ or equivalently for the random energy shift $\eta$ defined in Eq. (\[eq:etadef\]), $${\cal{P}}_{\eta} = \frac{\gamma}{ \pi } \frac{ 1 }{ \eta^2 + {\gamma}^2 } \; , \label{eq:peps}$$ with $$\gamma = \frac{v_F}{2 \xi} \; . 
\label{eq:gammadef}$$ The random variable $A$ should be distributed such that $$\begin{aligned} \langle A \rangle_A & = & 0 \label{eq:Afirstmom} \; , \\ \langle | A |^2 \rangle_A & = & \Delta_s^2 \label{eq:Asecondmom} \; , \end{aligned}$$ where $\langle \ldots \rangle_A$ denotes averaging over the probability distribution of $A$. From Eqs. (\[eq:PQdef\]) to (\[eq:Asecondmom\]) it is then easy to show that the first two moments of the distribution of $\Delta ( x )$ are indeed given by Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]). Note that Eqs. (\[eq:Afirstmom\]) and (\[eq:Asecondmom\]) include the cases of pure phase and pure amplitude fluctuations. To describe pure phase fluctuations we choose $A = \Delta_s e^{i \varphi}$, where the phase $\varphi$ is uniformly distributed in the interval $[0, 2 \pi )$. Then $$\langle \ldots \rangle_{A}^{\rm ph} = \int_{0}^{2 \pi} \frac{ d \varphi}{2 \pi } \ldots \; . \label{eq:phasemeasure}$$ Since physical quantities should be independent of the constant phase $\varphi$ and therefore should only depend on $|A|$, the process of averaging amounts to replacing $|A|$ by $\Delta_s$. To take into account amplitude fluctuations we follow Sadovskii [@Sadovskii74; @Sadovskii79] and choose a Gaussian distribution for the real and imaginary parts of $A$, $$\langle \ldots \rangle_A^{\rm am} = \int_{-\infty}^{\infty} \frac{ d {\rm Re}A \; d {\rm Im} A}{ \pi \Delta_s^2} e^{ - | A |^2 / \Delta_s^2 } \ldots \; . \label{eq:ampmeasure}$$ The disorder averaging of any functional ${\cal{F}} \{ \Delta ( x ) \}$ is defined by $$\langle {\cal{F}} \{ \Delta (x ) \} \rangle \equiv \left\langle \int_{- \infty}^{\infty} d Q \, {\cal{P}}_Q \, {\cal{F}} \{ A e^{i Q x } \} \right\rangle_{A} \; . \label{eq:avdef2}$$ What is the physical meaning of an order parameter of the form (\[eq:deltasimple\])? In a superconductor such an order parameter describes a state with a uniform superflow [@deGennes66]. 
The gauge transformation (\[eq:gaugetrafo\]) corresponds to choosing a coordinate system where the superflow vanishes; $\eta$ is the associated energy shift. A more detailed physical justification for such a spatially constant random energy shift $\eta$ in the normal state of the cuprate superconductors has been given by Franz and Millis [@Franz98]: they pointed out that within a semi-classical approximation the effect of the quasi-static fluctuations of the phase of the order parameter field $\Delta (x )$ can be described by such an energy shift $\eta$. Franz and Millis [@Franz98] also presented a perturbative calculation of the probability distribution ${\cal{P}}_{\eta}$ of $\eta$, using earlier results by Emery and Kivelson [@Emery95]. Because in Ref. [@Franz98] a cumulant expansion of ${\cal{P}}_{\eta}$ was truncated at the second order, the form of ${\cal{P}}_{\eta}$ was found to be Gaussian by construction. However, there are certainly non-Gaussian corrections to the form of ${\cal{P}}_{\eta}$ given in Ref. [@Franz98]. Our assumption that the distribution of $\eta$ is a Lorentzian of width $\gamma$ is therefore not in contradiction to the work of Ref. [@Franz98]. Obviously, our parameter $\gamma$ in Eq. (\[eq:gammadef\]) is the analog of the parameter $W$ introduced in Eq. (9) of Ref. [@Franz98]. Note, however, that Franz and Millis [@Franz98] did not consider amplitude fluctuations of the order parameter, which are described by our second random variable $A$. As noted above, Gaussian amplitude fluctuations with a probability distribution given by Eq. (\[eq:ampmeasure\]) have been studied many years ago by Sadovskii [@Sadovskii74]. Thus, in the present work we combine the models introduced by Sadovskii [@Sadovskii74] and by Franz and Millis [@Franz98] such that we take both amplitude and phase fluctuations into account and still obtain an exactly solvable model. 
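A quick numerical check confirms that the combined ensemble indeed has the required moments: for $\Delta(x)=Ae^{iQx}$ with $Q$ drawn from the Lorentzian (\[eq:PQdef\]) and $A$ complex Gaussian as in (\[eq:ampmeasure\]), one has $\int dQ\,{\cal{P}}_Q\, e^{iQ(x-x^{\prime})}=e^{-|x-x^{\prime}|/\xi}$, so Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]) follow. A minimal Monte Carlo sketch in Python (our own illustration; parameter values arbitrary):

```python
import cmath, math, random

# Monte Carlo check that Delta(x) = A exp(i Q x), with Q Lorentzian
# (eq:PQdef) and A complex Gaussian (eq:ampmeasure), reproduces the
# moments (eq:deltaav) and (eq:deltacov).  Parameters are arbitrary.
random.seed(3)
xi, delta_s, N = 1.0, 1.0, 200000
seps = (0.5, 1.0, 2.0)       # separations |x - x'| to probe

mean = 0j
cov = {dist: 0j for dist in seps}
for _ in range(N):
    # Lorentzian (Cauchy) momentum with scale 1/xi via inverse-CDF sampling
    Q = math.tan(math.pi * (random.random() - 0.5)) / xi
    # complex Gaussian amplitude with <|A|^2> = delta_s^2
    A = complex(random.gauss(0.0, delta_s / math.sqrt(2.0)),
                random.gauss(0.0, delta_s / math.sqrt(2.0)))
    mean += A                                  # Delta(0) = A
    for dist in seps:
        # Delta(x) Delta^*(x') = |A|^2 exp(i Q (x - x')) with x - x' = dist
        cov[dist] += abs(A) ** 2 * cmath.exp(1j * Q * dist)

mean /= N
for dist in seps:
    cov[dist] /= N
# cov[dist].real should approach delta_s^2 * exp(-dist / xi); mean -> 0.
```

Note that the phase of $A$ drops out of $\Delta(x)\Delta^{\ast}(x^{\prime})$, in line with the remark above that only $|A|$ affects physical quantities, while it is essential for $\langle \Delta(x)\rangle=0$.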
In the following section we shall calculate a number of physical quantities for this model exactly and confirm the intuitive picture [@Emery95; @Franz98] that phase fluctuations fill in the gap at the Fermi energy and render the system metallic. Calculation of physical quantities ================================== Single-particle Green’s function and spectral function ------------------------------------------------------ Because $\langle A \rangle = 0$, it follows from Eq. (\[eq:Gqqres\]) that the off-diagonal elements of the disorder averaged Green’s function vanish, and that the diagonal elements are $$\langle {\cal{G}}_{\alpha \alpha} ( q , q^{\prime} , \omega ) \rangle = 2 \pi \delta ( q - q^{\prime}) G_{\alpha} ( q , \omega ) \; , \label{eq:Galphadef}$$ where $$G_{\alpha} ( q , \omega ) = \left\langle \frac{ \omega - 2 \eta + \alpha v_F q }{ [ \omega - 2 \eta + \alpha v_F q ][ \omega - \alpha v_F q ] - | A |^2 } \right\rangle \; . \label{eq:Galphaav}$$ Here, $\alpha = +$ refers to ${\cal{G}}_{11}$, and $\alpha =-$ refers to ${\cal{G}}_{22}$. The averaging over the Lorentzian distribution (\[eq:peps\]) of the random energy shift $\eta $ can be performed analytically, $$G_{\alpha} ( q , \omega + i 0^{+}) = \Biggl\langle \frac{1}{ \omega - \alpha v_F q - { \displaystyle \frac{ | A |^2}{ \omega + \alpha v_F q + i \frac{v_F}{\xi} }} } \Biggr\rangle_{A} \; , \label{eq:GAA}$$ where $\langle \ldots \rangle_A$ denotes averaging over the probability distribution of $A$. In the case of pure phase fluctuations, as described by Eq. (\[eq:phasemeasure\]), this averaging is trivial, so that $$G_{\alpha}^{\rm ph} ( q , \omega + i 0^{+}) = \frac{1}{ \omega - \alpha v_F q - \Sigma_{\alpha}^{\rm ph} ( q , \omega + i 0^{+}) } \label{eq:Gsigmares} \; ,$$ with the self-energy given by $$\Sigma_{\alpha}^{\rm ph} ( q , \omega + i 0^{+}) = \frac{ \Delta_s^2 }{ \omega + \alpha v_F q + i \frac{v_F}{\xi} } \; . \label{eq:sigmaphase}$$ Eq. 
(\[eq:sigmaphase\]) agrees precisely with the lowest order Born approximation, which was used in the seminal work by Lee, Rice, and Anderson [@Lee73]. We have thus found a special probability distribution of $\Delta ( x)$ where the lowest order Born approximation for the average single-particle Green’s function is exact: the order parameter is in this case of the form $\Delta ( x) = \Delta_s e^{ i Q x + i \varphi}$, where $Q$ has a Lorentzian distribution of width $1/\xi$, and the random phase $\varphi$ merely assures $\langle \Delta(x) \rangle =0$, but due to gauge invariance does not affect any physical quantities. On the other hand, if amplitude fluctuations are important in addition to phase fluctuations, there are corrections to the Born approximation. For Gaussian amplitude fluctuations given by Eq. (\[eq:ampmeasure\]) we find after substituting $t = | A |^2 / \Delta_s^2$ $$\begin{aligned} G_{\alpha}^{\rm ph+am} ( q , \omega + i 0^{+}) & = & \nonumber \\ & & \hspace{-35mm} \int_0^{\infty} dt \frac{ e^{-t}}{ \omega - \alpha v_F q - {\displaystyle \frac{ t \Delta_s^2}{ \omega + \alpha v_F q + i \frac{v_F}{ \xi} } } } \; . \label{eq:Gaaint} \end{aligned}$$ Recently Kuchinskii and Sadovskii [@Kuchinskii99] arrived precisely at Eq. (\[eq:Gaaint\]) within a diagrammatic attempt to estimate the accuracy of the method developed in Ref. [@Sadovskii79] for Gaussian disorder. For a better comparison with Sadovskii’s Green’s function calculated in Ref. [@Sadovskii79], let us represent Eq. (\[eq:Gaaint\]) as a continued fraction. Expressing the integral on the right-hand side of Eq.
(\[eq:Gaaint\]) in terms of the incomplete $\Gamma$-function and using the known continued fraction expansion of this function [@Gradshteyn80], we obtain for the self-energy [ $$\begin{aligned} \Sigma_{\alpha}^{\rm ph+am} ( q , \omega + i 0^{+}) & = & \nonumber \\ & & \nonumber \\ & & \hspace{-22mm} \frac{\Delta_s^2}{ \displaystyle \omega + \alpha v_F q + i \frac{v_F}{\xi} - \frac{\Delta_s^2}{ \displaystyle \omega - \alpha v_F q - \frac{2 \Delta_s^2}{ \displaystyle \omega + \alpha v_F q + i \frac{v_F}{\xi} - \frac{ 2 \Delta_s^2}{\displaystyle \omega - \alpha v_F q - \frac{3 \Delta_s^2}{\displaystyle \omega + \alpha v_F q + i \frac{v_F}{\xi} - \ldots }}}}} %\frac{3 \Delta_s^2}{\displaystyle % \omega - \alpha v_F q - \ldots}}}}}} \; . \nonumber \\ & & \label{eq:sigmares} \end{aligned}$$ ]{} For the same model with Gaussian disorder the algorithm due to Sadovskii [@Sadovskii79] produces the continued fraction expansion [ $$\begin{aligned} \Sigma_{\alpha}^{ \rm Sadovskii} ( q , \omega + i 0^{+}) & = & \nonumber \\ & & \nonumber \\ & & \hspace{-32mm} \frac{\Delta_s^2}{ \displaystyle \omega + \alpha v_F q + i \frac{v_F}{\xi} - \frac{\Delta_s^2}{ \displaystyle \omega - \alpha v_F q + 2 i \frac{v_F}{\xi} - \frac{2 \Delta_s^2}{ \displaystyle \omega + \alpha v_F q + 3 i \frac{v_F}{\xi} - \frac{2 \Delta_s^2}{ \displaystyle \omega - \alpha v_F q + 4 i \frac{v_F}{\xi} - \frac{3 \Delta_s^2}{\displaystyle \omega + \alpha v_F q + 5 i \frac{v_F}{\xi} - \ldots}}}}} % - \frac{3 \Delta_s^2}{\displaystyle % \omega - \alpha v_F q + 6 i \frac{v_F}{\xi} - \ldots}}}}}} \nonumber \; . \\ & & \label{eq:sigmasadres} \end{aligned}$$ ]{} Note that only the first two lines in Eqs. (\[eq:sigmares\]) and (\[eq:sigmasadres\]) agree. Kuchinskii and Sadovskii argue in Ref. [@Kuchinskii99] that the true behavior of the Green’s function for Gaussian disorder lies somewhat in between Eqs. (\[eq:sigmares\]) and (\[eq:sigmasadres\]). 
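As a consistency check, the $t$-integral in Eq. (\[eq:Gaaint\]) can also be written in closed form, since it is precisely of incomplete-$\Gamma$ type: $\int_0^{\infty} dt \, e^{-t}/(a - b t) = - b^{-1} e^{-w} E_1 ( -w )$ with $w = a/b$, where $E_1 (z) = \Gamma ( 0 , z)$ is the exponential integral. The following short sketch (in Python; the numerical parameter values are our illustrative choices, in units where $\Delta_s = 1$) verifies this against direct quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # exponential integral E1(z) = Gamma(0, z), complex-capable

# Illustrative parameters in units where Delta_s = 1 (values are our choice, not from the text)
Ds = 1.0
vF_over_xi = 0.5                 # v_F / xi
omega, vFq, alpha = 0.7, 0.3, +1

a = omega - alpha * vFq                               # energy measured from the bare dispersion
b = Ds**2 / (omega + alpha * vFq + 1j * vF_over_xi)   # coefficient of t in the denominator

# Direct quadrature of Eq. (eq:Gaaint): G = int_0^inf dt e^{-t} / (a - b t)
f = lambda t: np.exp(-t) / (a - b * t)
G_quad = (quad(lambda t: f(t).real, 0, np.inf)[0]
          + 1j * quad(lambda t: f(t).imag, 0, np.inf)[0])

# Closed form via the exponential integral, with w = a / b
w = a / b
G_closed = -np.exp(-w) * exp1(-w) / b

assert abs(G_quad - G_closed) < 1e-6
```

Because $E_1 ( z ) \sim - \ln z$ for $z \rightarrow 0$, i.e. for $\omega \rightarrow \alpha v_F q$, this representation also makes the logarithmic structure of the averaged Green’s function at the bare electron energy explicit.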
In our model, the coexistence of amplitude fluctuations with phase fluctuations (which are related to our random energy shift $\eta$) generates a completely new feature in the average spectral function. The latter is related to the average Green’s function via $$2 \pi \delta ( q - q^{\prime} ) \langle \rho ( \alpha k_F + q , \omega ) \rangle = - \frac{1}{\pi} \, {\rm Im} \, \langle {\cal{G}}_{\alpha \alpha} ( q , q^{\prime} , \omega + i 0^{+} ) \rangle \; . \label{eq:Adef}$$ Using Eq. (\[eq:Gaaint\]) we find $$\begin{aligned} \langle \rho ( \alpha k_F + q , \omega ) \rangle^{\rm ph+am} & = & \nonumber \\ & & \hspace{-40mm} \frac{2 \bar{\gamma}}{\pi \Delta_s} \int_0^{\infty} d t \frac{ t e^{-t}}{ ( t - \bar{\omega}^2 + \bar{q}^2 )^2 + 4 \bar{\gamma}^2 ( \bar{\omega} - \alpha \bar{q})^2 } \; , \label{eq:avspecres} \end{aligned}$$ where $\bar{q} = v_F q / \Delta_s$, $\bar{\omega} = \omega / \Delta_s$, and $\bar{\gamma} = v_F / (2 \Delta_s \xi)$. Representative results for different values of $\bar{\gamma}$ are shown in Figs. \[fig:specq0\] and \[fig:specq02\]. The dashed line is the spectral function for $\bar{\gamma}=0$ (i.e. without phase fluctuations), which is easily calculated analytically, $$\begin{aligned} \langle \rho ( \alpha k_F + q , \omega ) \rangle^{\rm am} & = & \nonumber \\ & & \hspace{-20mm} \Delta_s^{-1} \Theta ( {\bar{\omega}}^2 - {\bar{q}}^2 ) | \bar{\omega} + \alpha \bar{q} | e^{ - ( \bar{\omega}^2 - \bar{q}^2 )} \; . \label{eq:specinfty} \end{aligned}$$ The important point is now that for any finite $\bar{\gamma}$ the spectral function exhibits a logarithmic singularity at $\omega = \alpha v_F q$. In the vicinity of this singularity the leading behavior of the spectral function can be calculated analytically.
In the regime $$| \omega - \alpha v_F q | \ll {\rm min} \left\{ \frac{ \Delta_s^2 \xi}{v_F} , \frac{ \Delta_s^2 }{|\omega + \alpha v_F q | } \right\} \label{eq:regime}$$ the integral in Eq. (\[eq:avspecres\]) can be approximated by $$\begin{aligned} \langle \rho ( \alpha k_F + q , \omega ) \rangle^{\rm ph+am} & \sim & \frac{ 2 \bar{\gamma}}{\pi \Delta_s} \ln \left[ \frac{ 1}{ 2 \bar{\gamma} | \bar{\omega} - \alpha \bar{ q} |} \right] \nonumber \\ & & \hspace{-20mm} = \frac{ v_F }{\pi \Delta_s^2 \xi } \ln \left[ \frac{ \Delta_s^2 \xi }{ v_F | \omega - \alpha v_F q |} \right] \; . \label{eq:speclogdimless} \end{aligned}$$ Thus, the interplay between phase fluctuations (described by our random phase factor $e^{i Q x}$) and amplitude fluctuations (described by random fluctuations of $ | A | $) gives rise to a logarithmic singularity at the bare energy of the electron. Note that such a singularity is weaker than the algebraic singularities that are typically found in the spectral function of a Luttinger liquid. Of course, such a weak singularity cannot be called a quasi-particle peak. It is important to point out that in the presence of amplitude fluctuations alone or phase fluctuations alone such a logarithmic singularity does not exist. Recall that for pure phase fluctuations our model has the same spectral function as predicted by the Born approximation for the self-energy [@Lee73], while for pure amplitude fluctuations our model reduces to the model discussed by Sadovskii in Ref. [@Sadovskii74]. Note also that the approximate spectral function produced by Sadovskii’s algorithm [@Sadovskii91; @McKenzie96] for Gaussian disorder with a finite correlation length does not exhibit any logarithmic singularities. Whether an exact calculation of the spectral function for more realistic probability distributions could confirm this result or not remains an open question. From Fig. 
\[fig:specq02\] it is clear that the line-shape of the spectral function in the vicinity of the singularity is rather broad and asymmetric. Such a behavior has recently been seen in the photoemission spectra of a one-dimensional band-insulator [@Vescoli00]. Average density of states ------------------------- The average DOS is defined by $$\langle \rho ( \omega ) \rangle = - \frac{1}{\pi} \, {\rm Im}\, {\rm Tr}\, \langle {\cal{G}} ( x , x , \omega + i 0^{+} ) \rangle \; . \label{eq:localdos}$$ Performing the $q$-integration in Eq. (\[eq:GAA\]) we find $${\rm Tr}\, \langle {\cal{G}} ( x , x , \omega + i 0^{+} ) \rangle = - \frac{1}{v_F} \left\langle \frac{ \omega + i \gamma}{ \sqrt{ | A |^2 - ( \omega + i \gamma )^2 } } \right\rangle_A \; , \label{eq:GAAxx}$$ where $ \gamma $ is given in Eq. (\[eq:gammadef\]) and $\sqrt{z}$ denotes the principal branch of the square root, with the cut at the negative real axis. Note that phase fluctuations simply generate an imaginary shift $i \gamma$ to the frequency in Eq. (\[eq:GAAxx\]). In the absence of amplitude fluctuations (see Eq. (\[eq:phasemeasure\])) we may replace $|A| \rightarrow \Delta_s$ in Eq. (\[eq:GAAxx\]), so that we obtain for the average DOS $$\frac{ \langle \rho ( \omega ) \rangle^{\rm ph}}{\rho_0} = {\rm Im }\, \frac{ z }{ \sqrt{ 1 - z^2}} \; , \label{eq:rhophaseres}$$ where we have defined $$z = \frac{ \omega + i \gamma}{\Delta_s} = \bar{\omega} + i \bar{\gamma} \label{eq:zdef} \; .$$ Eq. (\[eq:rhophaseres\]) agrees exactly with the perturbative result by Lee, Rice, and Anderson [@Lee73]. For $\omega = 0$ we recover Eq. (\[eq:rhoBorn\]). On the other hand, in the presence of additional Gaussian amplitude fluctuations, with probability distribution given by Eq. (\[eq:ampmeasure\]), we obtain $$\frac{\langle \rho ( \omega ) \rangle^{\rm ph+am}}{\rho_0} = {\rm Im } \int_0^{\infty} d t \frac{ e^{-t} z }{ \sqrt{ t - z^2}} \; . \label{eq:rhoampres}$$ A numerical evaluation of Eq. 
(\[eq:rhoampres\]) is shown in Fig. \[fig:rhoomega\]. For $\gamma = 0$ the integral in Eq. (\[eq:rhoampres\]) can be done analytically and reduces to the result obtained by Sadovskii [@Sadovskii74], which does not contain phase fluctuations. In this case the DOS vanishes quadratically for small frequencies, $$\frac{\langle \rho ( \omega ) \rangle^{\rm am}}{\rho_0} \sim 2 {\bar\omega }^2 \; \; , \; \; | \bar\omega | \ll 1 \; .$$ For any finite $\xi$ the DOS at the Fermi energy (i.e. at $\omega = 0$) is finite. From Eq. (\[eq:rhoampres\]) we find $$\frac{\langle \rho ( 0 ) \rangle^{\rm ph+am} }{\rho_0}= R ( \bar{\gamma} ) \; , \label{eq:rhozero}$$ with $$R ( \bar{\gamma} ) = \bar{\gamma} \int_0^{\infty} dt \frac{ e^{-t}}{ \sqrt{t + \bar{\gamma}^2}} \; . \label{eq:Rgdef}$$ A numerical evaluation of $R ( \bar\gamma )$ is shown in Fig. \[fig:rhozero\]. For small and large $\bar\gamma$ we obtain to leading order $$R ( \bar{\gamma} ) \sim \left\{ \begin{array}{ll} \sqrt{\pi} \bar{\gamma} & \; , \; \bar\gamma \ll 1 \\ 1 & \; , \; \bar{\gamma} \gg 1 \end{array} \right. \; . \label{eq:Rasym}$$ For large $\xi$ the DOS at the Fermi energy is $$\langle \rho ( 0 ) \rangle^{\rm ph+am} \sim \frac{\sqrt{\pi}}{2 \pi \Delta_s \xi} \;, \; \; v_F \xi \gg \Delta_s \; , \label{eq:rhozerosmall}$$ which should be compared with the result obtained within the Born approximation, see Eq. (\[eq:rhoBornsmall\]), $$\langle \rho ( 0 ) \rangle^{\rm pert} = \langle \rho ( 0 ) \rangle^{\rm ph} \sim \frac{1}{2 \pi \Delta_s \xi} \; . \label{eq:rhoBornsmall2}$$ Hence, Gaussian amplitude fluctuations increase the value of the DOS at the Fermi energy as compared with pure phase fluctuations. However, from Fig. \[fig:rhozero\] it is evident that the qualitative behavior of the DOS is correctly predicted by a model with pure phase fluctuations, which exactly reproduces the perturbative result [@Lee73]. 
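The crossover function $R ( \bar{\gamma} )$ and its limits in Eq. (\[eq:Rasym\]) are easy to confirm numerically; a minimal sketch (Python; the test values of $\bar{\gamma}$ are our illustrative choices) evaluates Eq. (\[eq:Rgdef\]) by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def R(gbar):
    """Dimensionless DOS at the Fermi energy, Eq. (eq:Rgdef)."""
    val, _ = quad(lambda t: np.exp(-t) / np.sqrt(t + gbar**2), 0, np.inf)
    return gbar * val

# Limiting forms of Eq. (eq:Rasym)
assert abs(R(1e-2) / 1e-2 - np.sqrt(np.pi)) < 5e-2   # R ~ sqrt(pi) * gbar for gbar << 1
assert abs(R(50.0) - 1.0) < 1e-3                     # R -> 1 for gbar >> 1
```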
Let us emphasize that this is not the case if $\Delta ( x)$ has a Gaussian distribution: the prediction of lowest order perturbation theory, $\langle \rho (0) \rangle \propto \xi^{-1}$, is in disagreement with the exact numerical result for Gaussian disorder, $\langle \rho (0) \rangle \propto \xi^{-0.64}$ (see Eq. (\[eq:rhoGauss\])). We thus conclude that the behavior of the average DOS at the Fermi energy of the FGM in one dimension is non-universal and sensitive to the detailed form of the probability distribution of $\Delta (x)$. Lyapunov exponent and localization length {#subsec:localization} ----------------------------------------- Since the energy dispersion of the FGM is linear, the Schrödinger equation $\hat{H}_x \psi_{\omega } ( x ) = \omega \psi_{\omega } ( x )$ is a system of linear first order differential equations. Fixing the two-component wave-function $\psi_{\omega} ( x )$ arbitrarily at one space point $x_0$ therefore determines the wave-function at all points $x$. In a disordered system, the Lyapunov exponent $\kappa ( \omega )$ characterizes the exponential growth of the magnitude of the wave-function at large distances $| x-x_0|$ [@Lifshits88], $$| \psi_{\omega } ( x ) | \sim | \psi_{\omega} ( x_0 ) | \exp [ \kappa ( \omega ) | x - x_0 | ] \; . \label{eq:lyapunovdef}$$ Strictly speaking, the Lyapunov exponent is defined by the limit $|x-x_0| \to \infty$ of this equation and assumes a certain value with probability one [@Lifshits88]. In one dimension the inverse of the Lyapunov exponent can be identified with the [*mean*]{} localization length. According to the Thouless formula the [*[mean]{}*]{} localization length $\ell ( \omega )$ can be obtained from the real part of the disorder-averaged single-particle Green’s function.
Originally the Thouless formula was derived for a one-band model with quadratic energy dispersion [@Thouless72], but it can be shown to hold also for the FGM, where it can be written as [@Hayn87; @Bartosch00] $$\frac{\partial}{\partial \omega} \frac{1}{\ell ( \omega )} = {\rm Re} {\rm Tr} \langle {\cal{G}} ( x , x , \omega + i 0^{+} ) \rangle \; . \label{eq:Thouless}$$ Integrating the Thouless formula for Eq. (\[eq:GAAxx\]), we obtain $$\frac{v_F}{\ell ( \omega )} = {\rm Re} \left\langle \sqrt{ | A |^2 - ( \omega + i \gamma )^2 } \right\rangle_A - \gamma \; , \label{eq:Lyapunovres}$$ where the constant of integration is uniquely determined by the requirement $\lim_{\omega \rightarrow \infty } \ell^{-1} ( \omega ) = 0$. For pure phase fluctuations Eq. (\[eq:Lyapunovres\]) reduces to $$\frac{v_F}{\Delta_s \ell ( \omega )^{\rm ph}} = {\rm Re}\, \sqrt{ 1 - (\bar \omega + i\bar \gamma)^2 } - \bar{\gamma} \; , \label{eq:Lyapunovresphase}$$ while with additional Gaussian amplitude fluctuations $$\frac{v_F}{\Delta_s \ell ( \omega )^{\rm ph+am}} = {\rm Re} \left[ \int_0^{\infty} dt e^{-t} \sqrt{ t - (\bar \omega + i\bar \gamma)^2 } \right]- \bar{\gamma} \; . \label{eq:Lyapunovresamp}$$ A plot of the inverse localization length $\ell^{-1}(\omega)^{\rm ph+am}$ is given in Fig. \[fig:invloc-omega\]. For $\gamma \rightarrow 0$ only amplitude fluctuations are left, and Eq. (\[eq:Lyapunovresamp\]) reduces to $$\frac{v_F}{\Delta_s \ell ( \omega )^{\rm am} } = \frac{ \sqrt{\pi}}{2} e^{- \bar{\omega}^2} \; \; , \; \; \bar{\gamma} \rightarrow 0 \; . 
\label{eq:Thres2}$$ In the presence of phase and amplitude fluctuations the general expression (\[eq:Lyapunovresamp\]) simplifies at the Fermi energy to $$\frac{v_F}{\Delta_s \ell ( 0 )^{\rm ph+am} } \equiv P ( \bar{\gamma} ) \;, \label{eq:loczerores}$$ where the dimensionless function $P ( \bar{\gamma})$ is given by $$P ( \bar{\gamma} ) = \int_{0}^{\infty} dt e^{-t} \left[ \sqrt{ t + \bar{\gamma}^2 } - \bar{\gamma} \right] \label{eq:Pinfzero} \; .$$ A comparison of Eq. (\[eq:Pinfzero\]) with the corresponding expression obtained from Eq.(\[eq:Lyapunovresphase\]) for phase fluctuations is shown in Fig. \[fig:P\]. For small and large $\bar{\gamma}$ the leading behavior is $$P ( \bar{\gamma} ) \sim \left\{ \begin{array}{ll} {\sqrt{\pi}}/{2} & \; , \; \bar{\gamma} \ll 1 \\ { 1}/({2 {\bar{\gamma}}}) & \; , \; \bar{\gamma} \gg 1 \end{array} \right. \label{eq:Pinftyasym} \; .$$ In the white noise limit $\xi \rightarrow 0$, $\Delta_s \rightarrow \infty$ with $\Delta_s^2 \xi = {\rm{const}}$ only the behavior of $P ( \bar{\gamma})$ for large $\bar{\gamma}$ matters, and in this limit both Eq. (\[eq:Lyapunovresphase\]) and Eq. (\[eq:loczerores\]) reduce to the known white-noise result $$\frac{v_F}{ \ell ( 0 ) } = \frac{\Delta_s}{2 \bar{\gamma}} = \frac{\Delta_s^2 \xi}{ v_F} \; , \; \xi \rightarrow 0 \; {\rm with} \; \Delta_s^2 \xi = {\rm const} \; . \label{eq:whitenoiseloc}$$ An extrapolation of this white-noise result towards finite correlation lengths is shown as the dotted line in Fig. \[fig:P\]. Evidently, for large $\gamma$ the behavior of the localization length becomes independent of the precise form of the probability distribution of the disorder. For $\bar{\gamma } { \raisebox{-0.5ex}{$\; \stackrel{<}{\sim} \;$}} 1$ the localization length begins to deviate significantly from the white-noise limit and approaches a finite value of the order of $ v_F / \Delta_s$ for $\bar{\gamma} \rightarrow 0$, the precise value of which depends on the type of the disorder. 
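The limiting behavior of $P ( \bar{\gamma} )$ in Eq. (\[eq:Pinftyasym\]) can be verified directly; the following sketch (Python; the test values of $\bar{\gamma}$ are our illustrative choices) evaluates Eq. (\[eq:Pinfzero\]) by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def P(gbar):
    """Dimensionless inverse localization length at omega = 0, Eq. (eq:Pinfzero)."""
    val, _ = quad(lambda t: np.exp(-t) * (np.sqrt(t + gbar**2) - gbar), 0, np.inf)
    return val

# Limiting forms of Eq. (eq:Pinftyasym)
assert abs(P(0.0) - np.sqrt(np.pi) / 2) < 1e-6     # gbar << 1: P -> sqrt(pi)/2
assert abs(P(50.0) - 1.0 / (2 * 50.0)) < 1e-4      # gbar >> 1: white-noise form 1/(2 gbar)
```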
We emphasize that for a real order parameter the low-frequency behavior of the localization length is dominated by the Dyson singularity, so that in this case $1/ \ell ( 0) = 0$ for any finite value of $\bar{\gamma}$, see Refs. [@Hayn87; @Bartosch00]. To compare the localization length of our exactly solvable toy model with phase and amplitude fluctuations with the case where the distribution of $\Delta (x)$ is a Gaussian, we have evaluated the Thouless formula (\[eq:Thouless\]) numerically for Gaussian colored noise with correlation length $\xi$, using an algorithm [@Bartosch00] similar to the one developed in Ref.[@Bartosch99b]. The numerical results for $v_F / ( \Delta_s \ell ( 0 ))$ are shown as the open circles in Fig. \[fig:P\]. In view of the simplicity of our model the agreement with Eq. (\[eq:loczerores\]) is quite spectacular. Hence, the localization length of our model with phase and amplitude fluctuations is a very accurate approximation to the localization length of the FGM with Gaussian disorder. The dashed line in Fig. \[fig:P\] describes the localization length for the case where we ignore amplitude fluctuations in our model, which is equivalent to the perturbative result by Lee, Rice, and Anderson [@Lee73]. The agreement with the case of Gaussian disorder is not so good, in particular in the pseudogap regime $\bar{\gamma } { \raisebox{-0.5ex}{$\; \stackrel{<}{\sim} \;$}} 1$. Average conductivity {#subsec:conductivity} -------------------- The DOS and the spectral function \[see Eqs. (\[eq:Adef\]) and (\[eq:localdos\])\] involve only the diagonal elements of the single-particle Green’s function. 
The simplest physical quantity which also involves the off-diagonal elements of ${\cal{G}}$ is the average polarization $ \langle \Pi ( q , i \omega_m ) \rangle$, which is given by $$\begin{aligned} 2 \pi \delta ( q - q^{\prime} ) \langle \Pi ( q , i \omega_m ) \rangle & = & - \frac{1 }{\beta} \sum_n \int \frac{ d p}{ 2 \pi} \int \frac{ d p^{\prime}}{ 2 \pi} \nonumber \\ & & \hspace{-35mm} \times {\rm Tr} \langle {\cal{G}} ( p + q , p^{\prime} + q^{\prime} , i \tilde{\omega}_{ n + m } ) {\cal{G}} ( p^{\prime} , p , i \tilde{\omega}_{ n } ) \rangle \; . \label{eq:poldef} \end{aligned}$$ Here, $\beta$ is the inverse temperature, $\omega_m = 2 \pi m / \beta$ are bosonic Matsubara frequencies and $\tilde{\omega}_n = 2 \pi ( n + \frac{1}{2} ) / \beta$ are fermionic ones. Given the average polarization, the average conductivity is easily obtained from $$\langle \sigma ( q , \omega ) \rangle = - e^2 \frac{ i \omega }{q^2} \langle \Pi ( q , \omega + i 0^{+} ) \rangle \; .$$ In this work we shall only consider the real part of the conductivity at $q=0$, $${\rm Re}\, \langle \sigma ( \omega ) \rangle = \lim_{ q \rightarrow 0} {\rm Re} \, \langle \sigma ( q , \omega ) \rangle = e^2 \omega \lim_{q \rightarrow 0} \frac{ \langle {\rm Im}\, \Pi ( q , \omega + i 0^{+} ) \rangle }{q^2} \; . \label{eq:sigmaprimedef}$$ Substituting Eq. (\[eq:Gqqres\]) into Eq. (\[eq:poldef\]) and performing the Matsubara sum, we obtain for the average polarization $$\begin{aligned} \langle \Pi ( q , i \omega_m ) \rangle & = & \nonumber \\ & & \hspace{-27mm} \left\langle - \int \frac{dp}{ 2 \pi} \frac{E_p E_{p+q} + \xi_p \xi_{p+q} + | A |^2}{2 E_p E_{p+q}} \nonumber \right. \\ & & \hspace{-20mm} \times \left[ \frac{ f ( E_p - \eta ) - f ( E_{p+q} - \eta ) }{ E_p - E_{p+q} - i \omega_m} \nonumber \right. \\ & & \left.
\hspace{-17mm} + \frac{ f ( E_p + \eta ) - f ( E_{p+q} + \eta ) }{ E_p - E_{p+q} + i \omega_m} \right] \nonumber \\ & & \hspace{-27mm} + \int \frac{dp}{ 2 \pi} \frac{E_p E_{p+q} - \xi_p \xi_{p+q} - | A |^2}{2 E_p E_{p+q}} \nonumber \\ & & \hspace{-20mm} \times \left[ \frac{ 1 - f ( E_p - \eta ) - f ( E_{p+q} + \eta ) }{ E_p + E_{p+q} - i \omega_m} \nonumber \right. \nonumber \\ & & \left. \left. \hspace{-17mm} + \frac{ 1 - f ( E_p + \eta ) - f ( E_{p+q} - \eta ) }{ E_p + E_{p+q} + i \omega_m} \right] \right\rangle \label{eq:Pimatsubara} \; , \end{aligned}$$ where we use the notation $E_p = ( \xi_p^2 + | A |^2)^{1/2}$, $\xi_p = v_F p$ and $ f ( E ) = 1 / [e^{\beta E} + 1 ] $ is the Fermi-Dirac function. Setting $\eta = 0$ in Eq. (\[eq:Pimatsubara\]) we recover Eq. (2.10) of Ref. [@Sadovskii74]. Expanding Eq. (\[eq:Pimatsubara\]) for small $q$ and performing the average over the Lorentzian distribution of $\eta$, we obtain in the limit of zero temperature $(\beta \rightarrow \infty)$, $$\begin{aligned} {\rm Re}\, \langle \sigma ( \omega ) \rangle & = & \frac{ n e^2 }{m} \frac{\pi }{\gamma} \left\langle \sqrt{ | A |^2 + \gamma^2 } - | A | \right\rangle_A \delta ( \omega ) \nonumber \\ & & \hspace{-15mm }+ \frac{ n e^2 }{m} \arctan \left( \frac{ | \omega |}{\gamma} \right) \left\langle \frac{ | A |^2 }{\omega^2} \frac{ \Theta ( \omega^2 - | A |^2 )}{ \sqrt{ \omega^2 - | A |^2 }} \right\rangle_{A} \; , \end{aligned}$$ where $ n/m \equiv v_F / \pi$ and ${\gamma}$ is defined in Eq. (\[eq:gammadef\]). For pure phase fluctuations the averaging over the distribution of $A$ is trivial and simply leads to the replacement $| A| \rightarrow \Delta_s$. Then the conductivity exhibits a Drude peak with weight given by $ {\gamma}^{-1} ( \sqrt{ \Delta_s^2 + {\gamma}^2} - \Delta_s )$, which is separated from a continuum at higher frequencies by a finite gap $\Delta_s$. Gaussian amplitude fluctuations wash out the gap but do not remove the Drude peak.
Averaging over the probability distribution of the amplitude $ A $ given in Eq. (\[eq:ampmeasure\]) we obtain $${\rm Re}\, \langle \sigma ( \omega ) \rangle = \frac{n e^2 }{m } \left[ \pi D ( \bar\gamma ) \delta ( \omega )+ \frac{1}{\Delta_s} C ( \bar\gamma , \bar{\omega} ) \right] \label{eq:ResigmaDC} \; ,$$ where we have used again the notation $\bar\gamma = \gamma / \Delta_s$, $\bar{\omega } = \omega / \Delta_s$, and the dimensionless functions $D ( \bar{\gamma} )$ and $C ( \bar{\gamma} , \bar{\omega} )$ are $$D ( \bar{\gamma} ) = \frac{1}{ \bar{\gamma} } \int_0^{\infty} d t\, e^{-t} [ \sqrt{ t + \bar{\gamma}^2} - \sqrt{t} ] \; , \label{eq:Ddef}$$ $$C ( \bar{\gamma} , \bar\omega ) = \arctan \left( \frac{ | \bar{\omega} | }{ \bar{\gamma} } \right) | \bar{\omega} | \int_0^{1} dt e^{- \bar{\omega}^2 t} \frac{t}{\sqrt{ 1 - t }} \; . \label{eq:Cdef}$$ A graph of $D (\bar{\gamma}) $ is shown in Fig. \[fig:DrudeP\]. Physically $D ( \bar{\gamma} )$ is the dimensionless renormalization factor for the weight of the Drude peak, with $D = 1$ corresponding to an unrenormalized Drude peak. The leading terms in the expansion of $D ( \bar{\gamma} )$ for small and large $\bar{\gamma}$ are $$D ( \bar{\gamma} ) \sim \left\{ \begin{array}{ll} \frac{\sqrt{\pi}}{2} \bar{\gamma} & \; , \; \bar{\gamma} \ll 1 \\ 1 & \; , \; \bar{\gamma} \gg 1 \end{array} \right. \; . \label{eq:Drudeasym}$$ At first sight the existence of a Drude peak in our model is rather surprising because in Sec. \[subsec:localization\] we have found that the localization length $\ell ( 0)$ at zero frequency is finite. In fact, we believe that for Gaussian disorder with moments given by Eqs. (\[eq:deltaav\]) and (\[eq:deltacov\]) the conductivity of the one-dimensional FGM does not exhibit a Drude peak, because the eigenstates at ${\omega} = 0$ should all be localized [*[for a given realization of the disorder]{}*]{} [@Lifshits88; @Sadovskii91].
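The limiting behavior of the Drude weight in Eq. (\[eq:Drudeasym\]) is straightforward to confirm numerically; the following sketch (Python; the test values of $\bar{\gamma}$ are our illustrative choices) evaluates Eq. (\[eq:Ddef\]) by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def D(gbar):
    """Dimensionless Drude weight, Eq. (eq:Ddef)."""
    val, _ = quad(lambda t: np.exp(-t) * (np.sqrt(t + gbar**2) - np.sqrt(t)), 0, np.inf)
    return val / gbar

# Leading behavior of Eq. (eq:Drudeasym)
assert abs(D(1e-2) - np.sqrt(np.pi) / 2 * 1e-2) < 2e-4   # D ~ (sqrt(pi)/2) gbar for gbar << 1
assert abs(D(100.0) - 1.0) < 2e-2                        # D -> 1 for gbar >> 1
```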
On the other hand, for our choice $\Delta ( x ) =A e^{i Q x}$ with spatially constant but random $A$ and $Q$, the Green’s function is not self-averaging, so that its spatial average is not identical with its disorder average. As a consequence, there is a finite probability of finding delocalized states at the Fermi energy: for $| \omega - \eta | > | A |$ the solutions of the Schrödinger equation are simply plane waves, whereas for $| \omega - \eta | < | A |$ there is a gap in the spectrum, and the Schrödinger equation does not have any normalizable solutions. Hence, depending on the realization of the disorder, the system is either a perfect conductor or an insulator. Because in Eq. (\[eq:Thouless\]) we have defined the inverse localization length in terms of the [*[disorder averaged]{}*]{} Green’s function, the value of $\ell^{-1} ( \omega )$ is determined by those realizations of the disorder where delocalized states at energy $\omega$ do not exist. However, the probability of finding delocalized states at the Fermi energy is finite, and can be expressed in terms of the function $P ( \bar{\gamma})$ defined in Eq. (\[eq:Pinfzero\]), $$\begin{aligned} W_{\rm deloc} ( 0 ) & = & \langle \Theta ( \eta^2 - | A |^2 ) \rangle \nonumber \\ & = & 1 - \frac{2}{\sqrt{\pi}} P ( \bar{\gamma} ) \nonumber \\ & \sim & \left\{ \begin{array}{ll} \frac{2}{\sqrt{\pi}} \bar{\gamma} & , \; \bar{\gamma} \ll 1 \\ 1 & , \; \bar{\gamma} \gg 1 \end{array} \right. \label{eq:Wres} \; . \end{aligned}$$ A graph of $W_{\rm deloc} ( 0 )$ is shown as the dashed-dotted line in Fig. \[fig:DrudeP\]. Note that the qualitative behavior of $W_{\rm deloc} ( 0 )$ is very similar to the weight $D$ of the Drude peak. The conductivity of quasi-one-dimensional Peierls systems [*[below]{}*]{} the Peierls transition (for which $\langle \Delta ( x ) \rangle \neq 0$) has been discussed in Refs. [@Froehlich54; @Lee74].
The authors pointed out that in this case a gapless collective mode associated with fluctuations of the phase of the order parameter generates a finite Drude peak. In our toy model, $\eta$ describes such a gapless mode. As discussed in Sec. \[sec:intro\], our model is also relevant to describe higher-dimensional systems such as superconductors within a quasiclassical approximation. In this case it is physically reasonable to expect that phase fluctuations of the superconducting order parameter generate delocalized states at the Fermi energy [@Emery95; @Franz98]. Then we indeed expect a finite Drude peak in the conductivity, which is broadened by disorder and becomes a sharp $\delta$-function in the superconducting state. Let us now focus on the incoherent part of the conductivity, which is described by the dimensionless function $C ( \bar{\gamma} , \bar{\omega } )$ in Eq. (\[eq:ResigmaDC\]). A graph of this function is shown in Fig. \[fig:C\]. For large correlation lengths, i.e. $\bar{\gamma} \ll 1$, there are three characteristic regimes where $C ( \bar{\gamma} , \bar{\omega } )$ can be approximated by $$C ( \bar{\gamma} , \bar{\omega} ) \sim \left\{ \begin{array}{ll} \frac{4}{3} \bar{\gamma}^{-1} \bar{\omega}^2 & \; , \; | \bar{\omega} | \ll \bar{\gamma} \\ \frac{2\pi }{3} | \bar{\omega} | & \; , \; \bar{\gamma} \ll | \bar{\omega} | \ll 1 \\ \frac{\pi }{2} | \bar{\omega} |^{-3} & \; , \; 1 \ll | \bar{\omega} | \end{array} \right. \; . \label{eq:Casym}$$ For $\bar{\gamma} \ll | \bar{\omega} |$ this agrees with the result of Ref. [@Sadovskii74]. Note that for a one-band model with Gaussian white noise disorder the real part of the conductivity is known to vanish for small frequencies as $\omega^2 \ln^2 (1/ \omega)$ [@Mott67]. Thus, apart from the logarithmic correction, the incoherent part of the conductivity of our simple model shows the generic behavior of one-dimensional disordered electrons.
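Both the asymptotic regimes of $C ( \bar{\gamma} , \bar{\omega} )$ in Eq. (\[eq:Casym\]) and the delocalization probability of Eq. (\[eq:Wres\]) can be checked numerically. The sketch below (Python; parameter values and sample sizes are our illustrative choices) evaluates Eq. (\[eq:Cdef\]) by quadrature and estimates $W_{\rm deloc} ( 0 )$ by direct Monte Carlo sampling of the disorder model, drawing $\eta$ from the Lorentzian of width $\gamma$ and $| A |^2$ from the exponential distribution implied by Eq. (\[eq:ampmeasure\]):

```python
import numpy as np
from scipy.integrate import quad

def C(gbar, wbar):
    """Incoherent part of the conductivity, Eq. (eq:Cdef)."""
    val, _ = quad(lambda t: np.exp(-wbar**2 * t) * t / np.sqrt(1.0 - t), 0, 1)
    return np.arctan(abs(wbar) / gbar) * abs(wbar) * val

gbar = 1e-3
# The three regimes of Eq. (eq:Casym)
assert abs(C(gbar, 1e-4) / ((4.0 / 3.0) * 1e-8 / gbar) - 1) < 0.05    # |w| << gbar
assert abs(C(gbar, 0.05) / ((2.0 * np.pi / 3.0) * 0.05) - 1) < 0.1    # gbar << |w| << 1
assert abs(C(gbar, 10.0) / ((np.pi / 2.0) / 10.0**3) - 1) < 0.05      # 1 << |w|

# Monte Carlo estimate of W_deloc(0), Eq. (eq:Wres): eta is Lorentzian of width
# gamma, and |A|^2 is exponentially distributed with mean Delta_s^2 = 1
rng = np.random.default_rng(1)
gb, N = 0.5, 1_000_000
eta = gb * np.tan(np.pi * (rng.random(N) - 0.5))   # Lorentzian (Cauchy) samples
A2 = rng.exponential(1.0, N)                       # |A|^2 from Eq. (eq:ampmeasure)
W_mc = np.mean(eta**2 > A2)

P_val, _ = quad(lambda t: np.exp(-t) * (np.sqrt(t + gb**2) - gb), 0, np.inf)
W_exact = 1.0 - 2.0 / np.sqrt(np.pi) * P_val
assert abs(W_mc - W_exact) < 5e-3
```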
Note also that for small $\bar{\gamma}$ the relative weight of the Drude peak is of the order of $\bar{\gamma}$, so that the incoherent contribution dominates. The white-noise limit is defined by letting $\Delta_s \xi \rightarrow 0$ while keeping $\Delta_s^2 \xi$ finite. In this case $D(\bar \gamma)$ approaches unity. In fact, in the white-noise limit the average conductivity is not modified by the disorder at all because the function $\Delta_s^{-1} C ( \bar{\gamma} , \bar{\omega} )$ vanishes if we let $\Delta_s \rightarrow \infty$. Conclusions {#sec:conclusions} =========== In this work we have introduced a simple exactly solvable toy model which describes the combined effects of phase and amplitude fluctuations of an off-diagonal order parameter on the physical properties of an electronic system. Although we have only discussed the one-dimensional version of this model with linearized energy dispersion, the exact solubility of our model does not depend on these features, so that our calculations can be generalized to more realistic models of electrons in dimensions $d > 1$ with non-linear energy dispersions. In this case the fluctuating gap should be chosen of the form $\Delta ( {\bf{r}}) = A e^{ i {\bf{Q}} \cdot {\bf{r}}}$. To satisfy $\langle \Delta ( {\bf{r}}) \rangle = 0$ and $\langle \Delta ( {\bf{r}}) \Delta^{\ast} ( {\bf{r}}^{\prime}) \rangle = \Delta_s^2 e^{ - | {\bf{r}} - {\bf{r}}^{\prime}| / \xi}$, the random variable $A$ should be distributed such that Eqs. (\[eq:Afirstmom\]) and (\[eq:Asecondmom\]) are satisfied, while the distribution ${\cal{P}}_{\bf{Q}}$ of the $d$-dimensional random-vector ${\bf{Q}}$ should be $${\cal{P}}_{\bf{Q}}= \frac{ 1}{( 2 \pi)^d} \int d {\bf{r}} e^{ - i {\bf{Q}} \cdot {\bf{r}}} e^{ - | {\bf{r}}| / \xi } \; . \label{eq:Probarb}$$ For $d=1$ this reduces to Eq. (\[eq:peps\]), but in $d>1$ Eq. (\[eq:Probarb\]) is not a Lorentzian. In one dimension our model describes the disordered phase of Peierls and spin-Peierls chains.
We have presented explicit results for the density of states, the localization length, the single-particle spectral function, and the real part of the conductivity. Let us emphasize three points: \(a) The mean localization length of our toy model, which we have defined via the Thouless formula (\[eq:Thouless\]), is an excellent approximation to the mean localization length of the FGM with Gaussian disorder. Although the respective densities of states agree quite well on a qualitative level, deviations become substantial for large correlation lengths, leading to a different scaling behavior as a function of $\xi$. \(b) The interplay between phase and amplitude fluctuations gives rise to a weak logarithmic singularity in the single-particle spectral function of our model. Whether this singularity is just an artifact of our toy model or not remains an open question. \(c) The conductivity of our model exhibits not only a pseudogap below the energy scale $\Delta_s$ but also a Drude peak at $\omega = 0$ with a weight that vanishes as $1/ \xi$ for $\xi \rightarrow \infty$. While the qualitative picture of the continuous part should be generic for more realistic one-dimensional disordered systems (up to logarithmic corrections for small frequencies [@Mott67]), the Drude peak in our model is due to the existence of delocalized states at the Fermi energy which are created by phase fluctuations. However, in a strictly one-dimensional disordered system, the disorder should lead to the localization of all eigenstates, resulting in a vanishing zero-temperature dc conductivity [@Mott67]. On the other hand, even very weak three-dimensional interactions can lead to a phase transition to long-range order and a finite Drude peak as found in our model. We expect that forward scattering by disorder (which we have ignored in our calculation) will broaden the Drude peak [@Kopietz99].
Experimentally, peak structures in the far infrared well below the pseudogap regime have been observed in the optical conductivity of several quasi one-dimensional Peierls systems above the Peierls transition [@Gorshunov94]. Our model also describes superconducting fluctuations in $d>1 $ within a semiclassical approximation. Recall that our Eq. (\[eq:Andreev\]) for the Green’s function in $d=1$ is formally equivalent to the Andreev equation for the semiclassical wave-function of a superconductor. The latter can be obtained from the more general Gorkov equation (\[eq:Gorkov\]) in the limit of a slowly varying order parameter. To calculate physical observables, the solutions of the Andreev equations should be averaged over the classical trajectories of the electrons [@Andreev64], which we have not done in this work. Therefore we cannot make any quantitative comparisons with experimental data for high-temperature superconductors. However, some qualitative features of our results seem to agree with experiments. In particular, in our model the pseudogap in the conductivity coexists with a small Drude peak. Such a behavior has been seen experimentally in the normal state of high-temperature superconductors [@Lupi00]. In our model the Drude peak is a direct consequence of the fluctuating phase of the superconducting order parameter. Without phase fluctuations all charge carriers at the Fermi energy are localized and there is no Drude peak. In this respect our model describes a bad metal in the sense defined by Emery and Kivelson [@Emery95]. This work was financially supported by the DFG (Grants No. Ko 1442/3-1 and Ko 1442/4-1). [99]{} J. Schmalian, D. Pines, and B. Stojkovič, Phys. Rev. Lett. [**[80]{}**]{}, 3839 (1998); Phys. Rev. B [**[60]{}**]{}, 667 (1999). V. J. Emery and S. A. Kivelson, Nature (London) [**[374]{}**]{}, 434 (1995); Phys. Rev. Lett. [**[74]{}**]{}, 3253 (1995). A. A. Abrikosov, L. P. Gorkov, and I. E. 
Dzyaloshinski, [*[Methods of Quantum Field Theory in Statistical Physics]{}*]{} (Dover, New York, 1963), Chap. 7. A. F. Andreev, Zh. Eksp. Teor. Fiz. [**[46]{}**]{}, 1823 (1964) \[Sov. Phys. JETP [**[19]{}**]{}, 1228 (1964)\]; for recent discussions of the quasiclassical Andreev approximation see I. Kosztin, S. Kos, M. Stone and A. J. Leggett, Phys. Rev. B [**[58]{}**]{}, 9365 (1998); S. Kos and M. Stone, Phys. Rev. B [**[59]{}**]{}, 9545 (1999); L. Bartosch and P. Kopietz, Phys. Rev. B [**[60]{}**]{}, 7452 (1999); I. Adagideli, P. M. Goldbart, A. Shnirman, and A. Yazdani, Phys. Rev. Lett. [**[83]{}**]{}, 5571 (1999). P. A. Lee, T. M. Rice, and P. W. Anderson, Phys. Rev. Lett. [**[31]{}**]{}, 462 (1973). J. E. Bunder and R. H. McKenzie, Phys. Rev. B [**[60]{}**]{}, 344 (1999); R. H. McKenzie, Phys. Rev. Lett. [**[77]{}**]{}, 4804 (1996); M. Fabrizio and R. Mélin, Phys. Rev. Lett. [**[78]{}**]{}, 3382 (1997); M. Steiner, M. Fabrizio, and A. O. Gogolin, Phys. Rev. B [**[57]{}**]{}, 8290 (1998). L. Bartosch and P. Kopietz, Phys. Rev. Lett. [**[82]{}**]{}, 988 (1999). L. Bartosch and P. Kopietz, Phys. Rev. B [**[60]{}**]{}, 15488 (1999). P. Chandra, J. Phys.: Condens. Matter [**[1]{}**]{}, 10067 (1989). M. V. Sadovskii, Zh. Eksp. Teor. Fiz. [**[77]{}**]{}, 2070 (1979) \[Sov. Phys. JETP [**[50]{}**]{}, 989 (1979)\]. Note that the parameter $\Gamma $ defined by Sadovskii can be identified with our $2 \bar{\gamma} = v_F / ( \Delta_s \xi )$. O. Tchernyshyov, Phys. Rev. B [**59**]{}, 1358 (1999). A. Millis and H. Monien, cond-mat/9907233. S. A. Brazovskii and I. E. Dzyaloshinskii, Zh. Eksp. Teor. Fiz. [**[71]{}**]{}, 2338 (1976) \[Sov. Phys. JETP [**[44]{}**]{}, 1233 (1976)\]. M. V. Sadovskii, Zh. Eksp. Teor. Fiz. [**[66]{}**]{}, 1720 (1974) \[Sov. Phys. JETP [**[39]{}**]{}, 845 (1974)\]. P. G. de Gennes, [*[Superconductivity of Metals and Alloys]{}*]{} (Benjamin, New York, 1966). M. Franz and A. J. Millis, Phys. Rev. B [**[58]{}**]{}, 14572 (1998). E. Z. Kuchinskii and M.
V. Sadovskii, Zh. Eksp. Teor. Fiz. [**[115]{}**]{}, 1765 (1999) \[Sov. Phys. JETP [**[88]{}**]{}, 968 (1999)\]. Note that Eq. (A9) of this work is equivalent to our Eq. (\[eq:Gaaint\]). I. S. Gradshteyn and I. M. Ryzhik, [*[Table of Integrals, Series, and Products]{}*]{} (Academic Press, San Diego, 1980). M. V. Sadovskii and A. A. Timofeev, J. Moscow Phys. Soc. [**[1]{}**]{}, 391 (1991). In this work an approximate calculation of ${\rm Re}\, \sigma ( \omega )$ for Gaussian disorder is presented. Surprisingly, it is found that even in this case the conductivity exhibits a broadened Drude peak for not too small values of $\bar{\gamma}$. R. H. McKenzie and D. Scarratt, Phys. Rev. B [**[54]{}**]{}, 12709 (1996). V. Vescoli, F. Zwick, J. Voit, H. Berger, M. Zacchigna, L. Degiorgi, M. Grioni, and G. Grüner, Phys. Rev. Lett. [**[84]{}**]{}, 1272 (2000). I. M. Lifshits, S. A. Gredeskul, and L. A. Pastur, [*[Introduction to the Theory of Disordered Systems]{}*]{}, (Wiley, New York, 1988). D. J. Thouless, J. Phys. C [**[5]{}**]{}, 77 (1972). R. Hayn and W. John, Z. Phys. B [**[67]{}**]{}, 169 (1987). L. Bartosch, [*[PhD-thesis]{}*]{}, (Universität Göttingen, 2000, unpublished). H. Fröhlich, Proc. Royal Soc. London, Ser. [**[A]{}**]{} 223, 296 (1954). P. A. Lee, T. M. Rice, and P. W. Anderson, Solid State Commun. [**[14]{}**]{}, 703 (1974). G. Grüner, [*[Density Waves in Solids]{}*]{}, (Addison-Wesley, Reading, 1994). N. F. Mott, Adv. Phys. [**[16]{}**]{}, 49 (1967); N. F. Mott and E. Davis, [*[Electronic Processes in Non-Crystalline Materials]{}*]{}, 2nd ed., (Clarendon Press, Oxford, 1979). Recently the conductivity of one-dimensional disordered electrons was re-examined by A. O. Gogolin, Phys. Rev. Lett. [**[84]{}**]{}, 1760 (2000), who found ${\rm Re} \sigma ( \omega ) \sim \omega^2 \ln^3 (1 / \omega )$. P. Kopietz and G. E. Castilla, Phys. Rev. B [**[59]{}**]{}, 9961 (1999). B. P. Gorshunov, A. A. Volkov, G. V. Kozlov, L. Degiorgi, A. Blank, T. Csiba, M. Dressel, Y.
Kim, A. Schwartz, and G. Grüner, Phys. Rev. Lett. [**[73]{}**]{}, 308 (1994); A. Schwartz, M. Dressel, B. Alavi, A. Blank, S. Dubois, G. Grüner, B. P. Gorshunov, A. A. Volkov, G. V. Kozlov, S. Thieme, L. Degiorgi, and F. Lévy, Phys. Rev. B [**[52]{}**]{}, 5643 (1995); M. Dressel, A. Schwartz, G. Grüner, and L. Degiorgi, Phys. Rev. Lett. [**[77]{}**]{}, 398 (1996); A. Schwartz, M. Dressel, G. Grüner, V. Vescoli, L. Degiorgi, and T. Giamarchi, Phys. Rev. B [**[58]{}**]{}, 1261 (1998). P. V. Puchkov, P. Fournier, D. N. Basov, T. Timusk, A. Kapitulnik, and N. N. Kolesnikov, Phys. Rev. Lett. [**[77]{}**]{}, 3212 (1996); S. Lupi, P. Calvani, M. Capizzi, and P. Roy, cond-mat/0001244.
--- abstract: | We find necessary and sufficient conditions for the validity of weighted Rellich inequalities in $L^p$, $1\le p \le \infty$, for functions in bounded domains vanishing at the boundary. General operators like $L=\Delta+c\frac{x}{|x|^2}\cdot\nabla-\frac{b}{|x|^2}$ are considered. Critical cases and remainder terms are also investigated. Mathematics subject classification (2010): 26D10, 35PXX, 47F05. Keywords: Rellich inequalities, Spectral theory. author: - '[G. Metafune [^1] L. Negro [^2] M. Sobajima [^3]C. Spina]{} [^4]' title: Rellich inequalities in bounded domains ---

Introduction
============

In this paper we consider the operator $$\label{L} Lu=\Delta u+c\frac{x}{|x|^2}\cdot\nabla u-\frac{b}{|x|^2} u,\quad c,\ b\in{\mathbb{R}}$$ acting in the space $L^p(\Omega)$, for $1\le p \le \infty$, endowed with Dirichlet boundary conditions, and we determine all values of $\alpha$ (depending on $N,p,c,b$) for which the following weighted Rellich inequalities hold $$\label{Intr 1} \||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p.$$ Note that, when $c=0$, $L$ becomes a Schrödinger operator with inverse square potential. When best constants can be computed, we prove that they are not attained, by adding remainder terms. Finally, when the Rellich inequalities above fail, we prove modified inequalities which include logarithmic terms.\ The first results in this direction have been obtained for the Laplacian in unweighted $L^p$-spaces and when $\Omega={\mathbb{R}}^N$. In 1956, Rellich proved the inequalities $$\left(\frac{N(N-4)}{4}\right)^2\int_{{\mathbb{R}}^N}|x|^{-4}|u|^2\, dx\leq \int_{{\mathbb{R}}^N}|\Delta u|^2\, dx$$ for $N\not =2$ and for every $u\in C_c^\infty ({\mathbb{R}}^N\setminus\{0\})$, see [@rellichF].
These inequalities have then been extended to $L^p$-norms: in 1996, Okazawa proved in [@okazawa] the validity of $$\left(\frac{N}{p}-2\right)^p\left(\frac{N}{p'}\right)^p\int_{{\mathbb{R}}^N}|x|^{-2p}|u|^p\, dx\leq \int_{{\mathbb{R}}^N}|\Delta u|^p\, dx$$ for $1<p<\frac{N}{2}$, showing also the optimality of the constants. Weighted Rellich inequalities have also been studied in [@davi-hinz] and later by Mitidieri, who proved for $N\geq 3$ and for $2-\frac{N}{p}<\alpha<\frac{N}{p'}$ $$\label{WRI} C^p(N,p,\alpha)\int_{{\mathbb{R}}^N}|x|^{(\alpha-2)p}|u|^p\, dx\leq \int_{{\mathbb{R}}^N}|x|^{\alpha p}|\Delta u|^p\, dx$$ with the optimal constants $C^p(N,p,\alpha)=\left(\frac{N}{p}-2+\alpha\right)^p\left(\frac{N}{p'}-\alpha\right)^p$, see [@mitidieri Theorem 3.1].\ In the recent paper [@caldiroli], Caldiroli and Musina improved weighted Rellich inequalities for $p=2$ by giving necessary and sufficient conditions on $\alpha$ for the validity of (\[WRI\]) and finding also the optimal constants $C^2(N,2,\alpha)$. In particular they proved that (\[WRI\]) is verified for $p=2$ if and only if $\alpha\neq N/2+n$, $\alpha \neq -N/2+2-n$ for every $n\in{\mathbb{N}}_0$. In [@rellich] the results in [@caldiroli] are extended to $1 \le p \le \infty$, computing also best constants in some cases. It is shown that (\[WRI\]) holds if and only if $\alpha \neq N/p'+n$, $\alpha \neq -N/p+2-n$ for every $n \in {\mathbb{N}}_0$. Moreover, Rellich inequalities are employed to find necessary and sufficient conditions for the validity of weighted Calderón-Zygmund estimates when $1<p<\infty$. These methods can be applied to general operators as in (\[L\]), thus providing a complete solution to problem (\[Intr 1\]) with $\Omega={\mathbb{R}}^N$. Let us now consider bounded open sets $\Omega$ containing the origin and spaces of functions vanishing at the boundary.
In contrast with the Hardy inequality, where many results in bounded domains improving those in the whole space are known, Rellich inequalities do not seem to have been studied intensively. We quote however [@musina] for $L=\Delta$, where the author discovers a range of parameters $\alpha$ where Rellich inequalities hold in the whole space but not in a bounded $\Omega$, due to the boundary conditions. In this paper we find all parameters $\alpha$ for which (\[Intr 1\]) holds for a general $L$ as in (\[L\]), assuming that $\Omega$ has a smooth boundary and the condition $D:=b+(N-2+c)^2/4\geq 0$ on the coefficients of $L$, which guarantees the solvability of related elliptic problems. When $\Omega$ is a ball, however, this restriction on $D$ is not necessary. Our method is based on the spectral analysis of the auxiliary operator $A=|x|^2 \Delta +cx\cdot \nabla$, as explained in Section 2. In particular, we show that, setting $\lambda_n=n(N-2+n)$, (\[Intr 1\]) holds if and only if $$\begin{aligned} \label{eq1} \nonumber \alpha&<N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ \sqrt{ D} \quad\text{and}\;\\[1ex] \alpha&\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}- \sqrt {D+\lambda_n}, \quad\forall\, n\in {\mathbb{N}}_{0}.\end{aligned}$$ When $\Omega$ is a ball centered at the origin, the above characterization holds also when $D<0$ (replacing the square roots by their real parts) and in the extreme cases $p=1, \infty$. However, when $\Omega={\mathbb{R}}^N$ the results in [@rellich] say that Rellich inequalities hold if and only if $$\label{eq3} \alpha\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm {\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n}, \quad\forall\, n\in {\mathbb{N}}_{0}.$$ The reason for the difference between (\[eq1\]) and (\[eq3\]) is explained in Section 2 in an elementary way in the case of the ball, by showing explicit counterexamples due to the boundary.
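For concreteness (our illustration, not from the paper), the characterization (\[eq1\]) is easy to evaluate numerically: $\alpha$ must lie strictly below the upper threshold and avoid the discrete exceptional values, which decrease in $n$, so only finitely many need to be checked. The helper name `rellich_holds` below is ours, and the sketch assumes $D\ge 0$ as in the text.

```python
import math

def rellich_holds(alpha, N, p, c=0.0, b=0.0, tol=1e-9):
    """Evaluate the characterization (eq1) for a bounded smooth domain
    containing the origin: alpha < N(1/2 - 1/p) + 1 + c/2 + sqrt(D) and
    alpha avoids the exceptional values built from sqrt(D + lambda_n).
    Assumes D >= 0."""
    D = b + ((N - 2 + c) / 2.0) ** 2
    if D < 0:
        raise ValueError("characterization assumes D >= 0")
    center = N * (0.5 - 1.0 / p) + 1.0 + c / 2.0
    if alpha >= center + math.sqrt(D) - tol:
        return False
    n = 0
    while True:
        lam = n * (N - 2 + n)                  # eigenvalues of -Delta_0
        forbidden = center - math.sqrt(D + lam)
        if forbidden < alpha - tol:            # values decrease in n: done
            return True
        if abs(alpha - forbidden) < tol:
            return False
        n += 1
```

For instance, for $L=\Delta$ with $N=5$, $p=2$, the exceptional values are $1-\sqrt{9/4+\lambda_n}$ and the upper threshold is $5/2$, so the classical case $\alpha=0$ is admissible.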
Rellich inequalities can be proved by using integration by parts and applying Hardy-type inequalities only when $$\label{eq2} N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D}<\alpha < N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+\sqrt {D}.$$ This proof also allows us to compute the best constant $C:=b+\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha+c\Bigr)$. For the other values of $\alpha$ appearing in (\[eq1\]), the best constant is unknown unless $p=2$, see [@caldiroli], [@rellich], or when $p$ is generic but special subspaces of $L^p$ are considered, see [@rellich]. In the range (\[eq2\]), Rellich inequalities have essentially a one-dimensional structure, since the (approximate) extremants are radial functions and best constants can therefore be computed. Outside of this range, however, the problem loses its rotational symmetry and the extremants, in special subspaces, involve spherical harmonics, see [@rellich], again. This also explains why symmetrization arguments based on spherical rearrangements do not work and a spectral analysis is needed. Similarly, best constants can be computed on subspaces of $L^p$ which allow a one-dimensional reduction and then on the whole $L^2$, by orthogonal expansions. Remainder terms are known for the Laplacian in the unweighted case. We quote [@tertikas-zogra] where the authors obtained in particular $$\begin{aligned} \int_{\Omega} |\Delta u|^2 \,dx & \geq \left(\frac{N(N-4)}{4}\right)^2\int_{\Omega} \frac{|u|^{2}}{|x|^{4}} \,dx\\[1ex]&+\left(1+\frac{N(N-4)}{8}\right)\sum_{i=1}^\infty\int_{\Omega} \frac{|u|^{2}}{|x|^{4}} X_1^2X_2^2\cdots X_i^2\,dx,\end{aligned}$$ for bounded domains $\Omega$ in ${\mathbb{R}}^N$, $N\geq 5$, $u\in C_c^\infty(\Omega\setminus\{0\})$, where $X_k=X_k\left(\frac{|x|}{R(\Omega)}\right)$, $R(\Omega)=\sup_{x\in \Omega}|x|$, are iterated radial logarithmic functions. The result has been extended to $L^p$ norms in [@barbatis-tertikas] under the restriction $p<\frac{N}{2}$, according to (\[eq2\]) when $\alpha=b=c=0$.
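The fact that the constant $C$ above is positive exactly on the range (\[eq2\]) follows from completing the square: $C = D - \bigl(\alpha - N(\tfrac12-\tfrac1p) - 1 - \tfrac{c}{2}\bigr)^2$. This algebraic identity can be checked numerically; the sketch below (our addition, with our function names) compares the two expressions over random parameter choices.

```python
import random

def best_constant(N, p, c, b, alpha):
    """C = b + (N/p - 2 + alpha)(N/p' - alpha + c), with 1/p' = 1 - 1/p."""
    return b + (N / p - 2 + alpha) * (N * (1 - 1 / p) - alpha + c)

def completed_square(N, p, c, b, alpha):
    """D - (alpha - center)^2, where center is the midpoint of the range (eq2)."""
    D = b + ((N - 2 + c) / 2) ** 2
    center = N * (0.5 - 1 / p) + 1 + c / 2
    return D - (alpha - center) ** 2

random.seed(0)
for _ in range(1000):
    N, p = random.randint(2, 10), random.uniform(1.01, 20)
    c, b, alpha = (random.uniform(-5, 5) for _ in range(3))
    assert abs(best_constant(N, p, c, b, alpha)
               - completed_square(N, p, c, b, alpha)) < 1e-9
```

In particular $C>0$ if and only if $|\alpha - N(\tfrac12-\tfrac1p)-1-\tfrac{c}{2}| < \sqrt{D}$, which is precisely (\[eq2\]).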
A different proof which uses symmetrization and covers also the case $p=\frac{N}{2}$ is given in [@ando]. Rellich inequalities with remainder terms in the whole space have been investigated in [@sano], where the remainder is given in terms of weighted $L^q$ norms of the Schwarz symmetrization of the functions. We prove a similar result for our operator $L$ in weighted $L^p$ norms, considering only one remainder term. When $\alpha$ satisfies (\[eq2\]) we obtain, with $C$ as above, $$\Big\||x|^\alpha Lu\Big\|_p^p -C^p \Big\||x|^{\alpha-2} u\Big\|_p^p \ge c \Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-\frac{2}{p}} u\Big\|_p^p$$ for $u\in C^2_c(B_{R/2}\setminus\{0\})$. Some explanation of the class of functions considered here is necessary. Since (\[eq2\]) is satisfied, Rellich inequalities hold both for $\Omega$ bounded and for $\Omega={\mathbb{R}}^N$, but we choose to formulate the above result with reference to the whole space, that is for functions having compact support. A similar formulation for functions only vanishing at $\partial \Omega$, when $\Omega$ is a ball, is also possible but we prefer to point out only the role of the singularity at $0$, since the weight $|x|^\alpha$ has no effect on the boundary. In the critical cases, when Rellich inequalities do not hold, we prove that modified inequalities with logarithmic correction terms are still valid. Again we focus on the singularity at $0$ and consider functions with compact support in ${\mathbb{R}}^N$. If $$\alpha= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm {\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n}$$ for some $n \in {\mathbb{N}}_0$, $1<p\le \infty$, then $$\||x|^\alpha Lu\|_p \geq C\Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-2} |u|\Big\|_p \quad {\rm when }\ D+\lambda_n \le 0$$ $$\||x|^\alpha Lu\|_p \geq C\Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-1} |u|\Big\|_p \quad {\rm when }\ D+\lambda_n >0$$ for $u\in C^2_c(B_{R/2}\setminus\{0\})$.
When $p=1$, the previous inequalities hold with $|\log |R^{-1}x||^{-2}$ and $|\log |R^{-1}x||^{-1}$ replaced by $|\log |R^{-1}x||^{-2-{\varepsilon}}$ and $|\log |R^{-1}x||^{-1-{\varepsilon}}$, respectively.\ In this way we extend the results already proved in [@adimurthi] for the Laplace operator under the more restrictive conditions $\alpha=0$, $p=\frac{N}{2}$, $N\geq 3$. We also refer to [@gazzola] where Rellich inequalities for the Laplacian have been proved with different remainder terms for $\alpha=0$, $p\leq \frac{N}{2}$. The treatment of the critical case does not rely on rearrangements, as already explained, but a reduction to the one-dimensional case is still possible via a spectral analysis. In fact we show that Rellich inequalities are true, even in the critical cases, if we consider subspaces of $L^p({\mathbb{R}}^N)$ spanned by functions like $f(r)P(\omega)$, where $P$ is a spherical harmonic of degree different from $n$, and the problem is then reduced to finding the right inequalities for (linear combinations of) functions $g(r)Q(\omega)$ where $Q$ is a spherical harmonic of degree $n$, hence to a finite number of one-dimensional problems. Let us explain why semigroups of linear operators appear often in the paper. When $p=2$, Rellich inequalities can be reduced to a countable set of one-dimensional inequalities, by an orthogonal expansion in spherical harmonics, see for example [@rellich]. Moreover, it turns out that it is more convenient to work with the operator $A=|x|^2L$ instead of $L$, so that the radial and the angular parts decouple. When $p \neq 2$ the one-dimensional analysis can still be performed but one needs a substitute for orthogonal expansions. This role is played by the semigroup $e^{tA}$ which allows us to compute the spectrum of $A$, by tensor product arguments, since the radial and the angular parts commute.
Rellich inequalities are equivalent to spectral inequalities for $A$ and, moreover, the description of the domain of $A$ allows us to identify precise classes where Rellich inequalities hold. Let us briefly describe the content of the sections. In Section 2 we present the basic ideas and some explicit counterexamples which serve as a guide for the rest of the paper. We reduce Rellich inequalities to a spectral problem for an operator with singular coefficients $A=|x|^2 \Delta +cx\cdot \nabla$, which is therefore analysed in detail in Section 3, the core of the paper. Rellich inequalities for the ball and for the whole space are easily deduced in Section 4 from the analysis of Section 3. The case of general domains, without any rotational symmetry, is studied in Section 5: here we need $1<p<\infty$ and $D \ge 0$, a condition which is known to be equivalent to the existence of positive solutions for elliptic and parabolic problems related to $L$. When $L=\Delta-b|x|^{-2}$, this condition reduces to the classical one $b+(N-2)^2/4 \ge 0$. The main tool to pass from the ball to a general $\Omega$ is a pointwise estimate of the Green function of $-L$ which follows from precise bounds of the heat kernel. Rellich inequalities in exterior domains not containing the origin are easily treated via the Kelvin transform. In Section 6 we show that, when Rellich inequalities fail, modified inequalities which include logarithmic terms are still valid. The situation is similar to that of the Hardy inequality, when the classical one fails. In Section 7, we analyse the remainder term in Rellich inequalities when (\[eq2\]) is satisfied. **Notation.** We denote by ${\mathbb{N}}_0={\mathbb{N}}\cup\{0\}$ the natural numbers including 0.
If $\Omega$ is an open subset of ${\mathbb{R}}^N$, $C_b(\Omega)$ is the Banach space of all continuous and bounded functions in $\Omega$, endowed with the sup-norm, $C_0(\overline{\Omega})$ its subspace consisting of functions vanishing at the boundary and $C_0^0(\overline{\Omega})$ its subspace consisting of functions vanishing at the origin and at the boundary, when $0 \in \Omega$. $C_c^\infty(\Omega)$ denotes the space of infinitely continuously differentiable functions with compact support in $\Omega$. The unit sphere $\{\|x\|=1\}$ in ${\mathbb{R}}^N$ is denoted by $S^{N-1}$; $\Delta_0$ is its Laplace-Beltrami operator. We adopt standard notation for $L^p$ and Sobolev spaces when $1 \le p<\infty$ but we use $L^\infty (\Omega)$ for $C_b(\Omega)$ to unify the notation. $B_r$ is the ball of center $0$ and radius $r$, $B_r^c={\mathbb{R}}^N\setminus B_r$. We write $B$ for $B_1$. For $V\subseteq{\mathbb{R}}^N$, we denote by $\overset{\mathrm{o}}{V}$ the interior part of $V$. When $L$ is a closed operator, $\sigma (L)$, $P\sigma (L)$, $A\sigma (L)$, $R\sigma (L)$ denote the spectrum, the point spectrum, the approximate point spectrum and the residual spectrum, respectively. Definitions and the relevant properties are listed in the Appendix.

Basic results and methods {#Preliminaries}
=========================

Let $L$ be as in (\[L\]) and let $\Omega$ be an open, bounded, connected subset of ${\mathbb{R}}^N$ containing the origin and with a smooth boundary, or $\Omega={\mathbb{R}}^N$. For $1\leq p\leq\infty$, $\alpha\in{\mathbb{R}}$ we define $$\begin{aligned} D_{p,\alpha}(\Omega):&=\left\{u:\ |x|^{\alpha-2}u,\ |x|^{\alpha}Lu\in L^p\left(\Omega\right),\ u=0 \text{ on } \partial \Omega\right\}\end{aligned}$$ where $Lu$ is understood as a distribution in $\Omega \setminus \{0\}$.
Since the coefficients of $L$ are $C^\infty$ away from the origin, by local elliptic regularity it follows that, if $u \in D_{p, \alpha}(\Omega)$, then $u \in W^{2,p}_{loc}({\mathbb{R}}^N \setminus \{0\})$ when $\Omega={\mathbb{R}}^N$ and $u \in W^{2,p}(\Omega \setminus B_{\varepsilon})$ for every ${\varepsilon}>0$, when $\Omega$ is bounded. This clearly holds for $1<p<\infty$; when $p=\infty$, the same is true with $W^{2,q}$ for any $q <\infty$. Note that, when $\Omega$ is bounded, also the class $$\begin{aligned} D_{p,\alpha,0}(\Omega):=\{u \in D_{p,\alpha}(\Omega) , u=0\ {\rm in\ a\ neighborhood\ of }\ \partial \Omega \} \end{aligned}$$ could be considered. However, since every function $u\in D_{p,\alpha,0}(\Omega)$, extended by $0$ to ${\mathbb{R}}^N$, belongs to $D_{p,\alpha}({\mathbb{R}}^N)$, the problem is then reduced to the case of the whole space. A scaling argument, moreover, shows that Rellich inequalities (\[Intr 1\]) hold in $D_{p,\alpha,0}(\Omega)$ if and only if they hold in $D_{p,\alpha}({\mathbb{R}}^N)$. Defining $$\begin{aligned} v(x)=|x|^{\alpha-2}u(x), \end{aligned}$$ it is straightforward to compute that $|x|^\alpha L u=Av-\mu v$, where $$\begin{aligned} \label{A} A=|x|^2\Delta +(c+4-2\alpha)x\cdot\nabla \ \ {\rm and } \ \ \mu=b-(2-\alpha)(N-\alpha+c).\end{aligned}$$ Then Rellich inequalities (\[Intr 1\]) are equivalent to the spectral estimates $$\label{disv} \|\mu v-Av\|_p \geq C\|v\|_p, \quad v\in D_{p,max}(\Omega)$$ where $$\begin{aligned} D_{p,max}(\Omega):&=\left\{u\in L^p(\Omega): Au\in L^p\left(\Omega\right),\ u=0 \text{ on } \partial \Omega\right\}\end{aligned}$$ and $Au$ is understood as a distribution as above. Moreover, the constants $C$ in (\[Intr 1\]) and (\[disv\]) are the same. Inequalities (\[disv\]) hold precisely when $\mu$ does not belong to the approximate point spectrum of $A$. This explains why a large part of this paper is devoted to the study of the operator $A$ and of the fine structure of its spectrum.
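The identity $|x|^\alpha Lu = Av - \mu v$ can be tested on radial monomials (the angular terms match trivially, since $\Delta_0$ commutes with multiplication by $|x|^{\alpha-2}$). For $u=r^k$ one has $Lu=[k(k-1)+(N-1+c)k-b]\,r^{k-2}$ and $v=r^{\alpha-2+k}$, so both sides are multiples of $r^{\alpha+k-2}$ and the identity reduces to equality of coefficients. The following sketch (our addition, not part of the proof) checks this numerically for random parameters.

```python
import random

def L_coeff(k, N, c, b):
    """L(r^k) = [k(k-1) + (N-1+c)k - b] r^{k-2}  (radial part of L)."""
    return k * (k - 1) + (N - 1 + c) * k - b

def A_coeff(m, N, c, alpha):
    """A(r^m) = [m(m-1) + (N-1 + c+4-2*alpha) m] r^m
    for A = |x|^2 Delta + (c+4-2*alpha) x . grad."""
    return m * (m - 1) + (N + 3 + c - 2 * alpha) * m

random.seed(1)
for _ in range(1000):
    N = random.randint(2, 10)
    c, b, alpha, k = (random.uniform(-5, 5) for _ in range(4))
    mu = b - (2 - alpha) * (N - alpha + c)
    m = alpha - 2 + k
    # |x|^alpha L(r^k) and (A - mu)(r^{alpha-2+k}) share the power r^{alpha+k-2}
    assert abs(L_coeff(k, N, c, b) - (A_coeff(m, N, c, alpha) - mu)) < 1e-9
```

Since both sides are quadratic polynomials in $k$, agreement for more than three values of $k$ already forces the identity for all radial monomials.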
In the next proposition we state the above reduction, for further reference, and prove a density result using the same method. We refer to Section \[spectral\] for basic definitions and results from spectral theory. \[reddensity\] Let $L$ be as in (\[L\]) and let $\Omega$ be an open, bounded, connected subset of ${\mathbb{R}}^N$ containing the origin and with a $C^{2, \beta}$ boundary, or $\Omega={\mathbb{R}}^N$. Then

- Rellich inequalities (\[Intr 1\]) hold if and only if $\mu=b-(2-\alpha)(N-\alpha+c) $ does not belong to the approximate point spectrum of $(A,D_{p,max}(\Omega))$.

- Rellich inequalities (\[Intr 1\]) hold for functions in $D_{p,\alpha}(\Omega)$ if and only if they hold for $C^2$-functions vanishing in a neighbourhood of the origin and on $\partial \Omega$, when $ \Omega$ is bounded, or also in a neighbourhood of infinity, when $\Omega={\mathbb{R}}^N$.

[ Proof. ]{} The discussion above shows that Rellich inequalities hold if and only if the spectral inequalities (\[disv\]) are valid in $D_{p,max}(\Omega)$, hence when $\mu$ does not belong to the approximate point spectrum of $A$, by Proposition \[Rellich-spectrum\]. This proves (i). To prove (ii) it is sufficient to note that the transformation $v(x)=|x|^{\alpha-2}u(x)$ preserves the class of functions defined in (ii) and that, by Lemma \[coreOmega\] and Proposition \[L1\], these functions constitute a core of $(A, D_{p,max}(\Omega))$. The interplay between the operators $A$ and $L$ allows us to give simple proofs of Rellich inequalities in special cases where best constants can be computed. \[easy\] Let $\Omega$ be an open, bounded, connected subset of ${\mathbb{R}}^N$ with a $C^{1}$ boundary, or $\Omega={\mathbb{R}}^N$.
Assume that $1 \le p \le \infty$, that $D:=b+\left(\frac{N-2+c}{2}\right)^2>0$ and that $$\label{range} N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D}<\alpha < N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+\sqrt {D}.$$ Then Rellich inequalities (\[Intr 1\]) hold in $D_{p, \alpha}(\Omega)$ with $C:=b+\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha+c\Bigr)$. The constant $C$ is optimal when $\Omega$ contains the origin. [ Proof. ]{} We have to show that (\[disv\]) holds, with the constant $C$ above, for $A$ and $\mu$ defined in (\[A\]). This is proved in Theorem \[dissipativity\], using only integration by parts and the Hardy inequality (replace $c$ by $c+4-2\alpha$ and $\lambda-\omega_p$ by $\mu$ therein). We note that $C>0$ is equivalent to (\[range\]). To prove the optimality of $C$, when $ 0 \in \Omega$, we observe that Rellich inequalities are invariant under dilations. If $C_\Omega$ is the best constant in $\Omega$, then $C_{r\Omega}=C_\Omega$ for any $r>0$. Letting $r \to \infty$ we see that $C_{{\mathbb{R}}^N} \le C_\Omega $. However, $C_{{\mathbb{R}}^N}=b+\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha+c\Bigr)$, by [@rellich Theorem 3.1]. Note that when $L=\Delta$, then $D=(N-2)^2/4$ and (\[range\]) reduces to $2-N/p <\alpha <N/p'$ and $C=\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha\Bigr)$. If $\Omega$ does not contain the origin, the constant $C$ above is not, in general, optimal; see again [@rellich Section 6] for the case of the half-space. Next, we show explicit counterexamples to Rellich inequalities, which already appeared in [@musina] when $L=\Delta$. We distinguish between free counterexamples, which depend on the singularity at zero and appear in any set $\Omega$ containing the origin, and counterexamples involving the boundary $\partial \Omega$, which appear, in addition to the preceding ones, only when $\Omega$ is bounded.
We confine ourselves here to the case of the unit ball $B$; the general case will be treated in Section \[Rellich Bounded domain\]. We employ spherical coordinates on ${\mathbb{R}}^N\setminus\{0\}$ and write $x=r\omega$, where $r:=|x|$, $\omega:=x/|x|\in {\mathbb{S}}^{N-1}$. Then $$\begin{aligned} L=D_{rr}+\frac{N-1+c}{r} D_r-\frac{b-\Delta_0}{r^2},\end{aligned}$$ where $D_{rr}$, $D_r$ denote radial derivatives and $\Delta_0$ is the Laplace-Beltrami operator on the unit sphere $\mathbb{S}^{N-1}$. Let $P$ be a spherical harmonic of order $n\in {\mathbb{N}}_0$, with $\Delta_0 P=-\lambda_n P$, $\lambda_n=n(N+n-2)$. If $u(r\omega)=v(r)P(\omega)$ then $$\begin{aligned} Lu=\left[v_{rr}+\frac{N-1+c}{r} v_r-\frac{b+\lambda_n}{r^2}v\right]P.\end{aligned}$$ The equation $Lu=0$ has solutions $|x|^{-s_1^n}P$, $|x|^{-s_2^n}P$ where the functions $r^{-s_1^n}$, $r^{-s_2^n}$ solve $$v_{rr}+\frac{N-1+c}{r} v_r-\frac{b+\lambda_n}{r^2}v=0.$$ $s_1^n,s_2^n$ are the roots of the indicial equation $f(s)=-s^2+(N-2+c)s+b+\lambda_n=0$ given by $$\label{defs} s_1^n:=\frac{N-2+c}{2}-\sqrt{D+\lambda_n}, \quad s_2^n:=\frac{N-2+c}{2}+\sqrt{D+\lambda_n}$$ where $$\label{defD} D:= b+\left(\frac{N-2+c}{2}\right)^2.$$ The following example shows that, due to the singularity of $L$ at $0$, Rellich inequalities always fail when $\alpha$ equals one of the values $$\begin{aligned} \alpha_n^{\pm}:= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm{\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n}, \quad \,n\in{\mathbb{N}}_0,\end{aligned}$$ \[esem1\] Let $1\leq p\leq\infty$ and let $\Omega\subseteq {\mathbb{R}}^N$ be an open subset of ${\mathbb{R}}^N$ such that $0\in\Omega$. If $\alpha=\alpha_n^{\pm}$, then Rellich inequalities do not hold in $D_{p,\alpha}(\Omega)$. Suppose, for example, that $\alpha=\alpha_n^-$. Let $s_1^n$ be defined in (\[defs\]) and $\gamma=-{\textrm{\emph{Re}\,}}s_1^n$. We fix $R>0$ such that $B_R\subseteq\Omega$ and take $P$ a spherical harmonic of order $n$.
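Before continuing the proof, note that the roots (\[defs\]) of the indicial equation can be checked directly: substituting $v=r^{-s}$ into the radial equation produces $-r^{-s-2}f(s)$. The sketch below (our addition, with our function names) evaluates $f$ at the closed-form roots for several degrees $n$, in a case with $D+\lambda_n>0$.

```python
import math

def indicial(s, N, c, b, lam):
    """Indicial polynomial f(s) = -s^2 + (N-2+c)s + b + lambda_n."""
    return -s**2 + (N - 2 + c) * s + b + lam

def roots(N, c, b, lam):
    """Roots s_1^n, s_2^n from (defs); assumes D + lambda_n >= 0."""
    D = b + ((N - 2 + c) / 2) ** 2
    rad = math.sqrt(D + lam)
    return (N - 2 + c) / 2 - rad, (N - 2 + c) / 2 + rad

N, c, b = 5, 0.3, -1.0          # sample coefficients with D > 0
for n in range(6):
    lam = n * (N - 2 + n)       # lambda_n = n(N-2+n)
    s1, s2 = roots(N, c, b, lam)
    assert abs(indicial(s1, N, c, b, lam)) < 1e-9
    assert abs(indicial(s2, N, c, b, lam)) < 1e-9
```

When $D+\lambda_n<0$ the roots are complex conjugate, which is why the real parts appear in the definition of $\alpha_n^{\pm}$.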
The function $$\begin{aligned} u(r\omega):=r^\gamma P(\omega), \quad x=r\omega\in B_R\end{aligned}$$ satisfies $Lu=0$ but $|x|^{\alpha_n^--2} u\notin L^p\left(B_R\right)$ since $$\begin{aligned} \label{Counterexample 0-1} \alpha_n^--2+\gamma=-\frac{N}{p},\quad 1\leq p\leq\infty.\end{aligned}$$ Let $\varphi\in C^\infty({\mathbb{R}})$ be such that $\mbox{supp}\,\varphi\subseteq [\frac 1 4,\frac 1 2]$ and $\varphi_\epsilon(r):=\varphi(r^\epsilon)$. By construction $u_\epsilon:=u\varphi_\epsilon$ has support in $ [\left(\frac 1 4\right)^{\frac 1 \epsilon},\left(\frac 1 2\right)^{\frac 1 \epsilon}]$, lies in $D_{p,\alpha}\left(\Omega\right)$ and satisfies $$\begin{aligned} Lu_\epsilon(r\omega)=P(\omega)\left[r^\gamma\varphi_\epsilon''+(2\gamma+N-1+c)r^{\gamma-1}\varphi_\epsilon'\right].\end{aligned}$$ If $1\leq p<\infty$ and $\bar r>0$ is such that $\mbox{supp}\,\varphi_\epsilon\subseteq B_{\bar r}$ we get $$\begin{aligned} \int_{\Omega}|x|^{(\alpha_n^--2)p} |u_\epsilon|^p\,dx&=\int_{B_{\bar r}}|x|^{(\alpha_n^--2+\gamma)p} |P(\omega)|^p|\varphi_\epsilon|^p\,dx=C\int_0^{\bar r}\frac{|\varphi(r^\epsilon)|^p}{r}\,dr=\frac{C}{\epsilon}\int_{\frac 1 4}^{\frac 1 2}\frac{|\varphi(s)|^p}{s}\,ds,\end{aligned}$$ where $C=\int_{\mathbb{S}^{N-1}} |P(\omega)|^p\,d\omega$.
On the other hand $$\begin{aligned} \int_{\Omega}|x|^{\alpha_n^-p} |Lu_\epsilon|^p\,dx=C\,\epsilon^{p-1}\int_{\frac 1 4}^{\frac 1 2}s^{p-1}\left|\epsilon s\varphi''(s)+(2\gamma+N-2+c+\epsilon)\varphi'(s)\right|^p\,ds.\end{aligned}$$ It follows, from the previous equalities, that $$\begin{aligned} \frac{\int_{\Omega}|x|^{\alpha_n^-p} |Lu_\epsilon|^p\,dx}{\int_{\Omega}|x|^{(\alpha_n^--2)p} |u_\epsilon|^p\,dx}=\epsilon^p\,\frac{\int_{\frac 1 4}^{\frac 1 2}s^{p-1}\left|\epsilon s\varphi''(s)+(2\gamma+N-2+c+\epsilon)\varphi'(s)\right|^p\,ds}{\int_{\frac 1 4}^{\frac 1 2}\frac{|\varphi(s)|^p}{s}\,ds}\end{aligned}$$ which tends to $0$ as $\epsilon \to 0$, hence Rellich inequalities do not hold in $D_{p,\alpha}(\Omega)$ for $1\leq p<\infty$.\ If $p=\infty$, then $\alpha_n^--2+\gamma=0$ and an analogous computation yields $$\begin{aligned} |x|^{\alpha_n^--2} u_\epsilon(x)&=P(\omega)\varphi(r^\epsilon),\\[1ex] |x|^{\alpha_n^-} Lu_\epsilon(x) &=P(\omega)\left[r^{2\epsilon}\epsilon^2\varphi''(r^\epsilon)+\epsilon(2\gamma+N-2+c+\epsilon)r^\epsilon\varphi'(r^\epsilon)\right] .\end{aligned}$$ This implies $$\begin{aligned} \frac{\||x|^{\alpha_n^-}Lu_\epsilon\|_\infty}{\||x|^{\alpha_n^--2} u_\epsilon\|_\infty}=\frac{\epsilon\sup_{s\in[\frac 1 4,\frac 1 2]}\left|\epsilon s^2\varphi''(s)+(2\gamma+N-2+c+\epsilon)s\varphi'(s)\right|}{\sup_{s\in[\frac 1 4,\frac 1 2]}|\varphi(s)|}\end{aligned}$$ which tends to $0$ as $\epsilon \to 0$. The proof for $\alpha=\alpha_n^+$ is similar, choosing $\gamma=-{\textrm{\emph{Re}\,}}s_2^n$.\ Next we consider the case where $\Omega=B$ and show that, due to the Dirichlet boundary condition at $\partial B$, new counterexamples appear, in addition to the previous ones. The same result is proved in Section \[Rellich Bounded domain\] for general bounded domains. \[Counterexample in B\] If $ 1 \le p \le \infty$ and $\alpha> N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ {\textrm{\emph{Re}\,}}\sqrt{D}$, then the Rellich inequalities cannot hold in $D_{p,\alpha}(B)$.
Let $\alpha> N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ {\textrm{\emph{Re}\,}}\sqrt{D}$ and let $s_{1,2}$ be defined in with $n=0$. The function $$\begin{aligned} u(x):=|x|^{-s_2}-|x|^{-s_1}\end{aligned}$$ satisfies $Lu=0$ and $|x|^{\alpha-2} u\in L^p (B)$, since $ \alpha-2+{\rm Re\ }s_{1,2}>-N/p$. Furthermore $u=0$ on $\partial B$, hence $u\in D_{p,\alpha}(B)$ and, since $Lu=0$, the Rellich inequalities fail.\

The operator $A=|x|^2\Delta+cx\cdot \nabla$ {#Section A}
===========================================

Let $c \in {\mathbb{R}}$ and $$\label{defA} A=|x|^2\Delta+cx\cdot \nabla.$$ This section is devoted to the analysis of $A$ acting on $L^p(\Omega)$ for $1\leq p\leq \infty$, where $\Omega={\mathbb{R}}^N$ or a bounded domain, endowed with Dirichlet boundary conditions in this last case. The operator is degenerate both at $0$ and at $\infty$. Employing spherical coordinates on ${\mathbb{R}}^N\setminus\{0\}$ we write $x=r\omega$, where $r:=|x|$, $\omega:=x/|x|\in {\mathbb{S}}^{N-1}$ and $$\Delta= D_{rr}+\frac{N-1}{r}D_r+\frac{1}{r^2}\Delta_0,$$ where $D_{rr}$, $D_r$ denote radial derivatives and $\Delta_0$ is the Laplace-Beltrami operator on the unit sphere $\mathbb{S}^{N-1}$. Thus we obtain $$\begin{aligned} A=r^2D_{rr}+(N-1+c)r D_r+\Delta_0.\end{aligned}$$ Defining $$\Gamma=r^2D_{rr}+(N-1+c)r D_r,$$ the operators $\Gamma$ and $\Delta_0$ act on independent variables and therefore, when $\Omega$ is spherically symmetric, generation and spectral properties of $A$ can be proved through tensor product methods. We start by analysing $\Gamma$ and $\Delta_0$ separately and then we deduce properties of $A$ on $L^p(\Omega)$ when $\Omega={\mathbb{R}}^N$ and $\Omega=B$. This method has the advantage of applying also to more general subspaces defined as tensor products of radial functions and spherical harmonics. Finally, we study $A$ in a general open set $\Omega$.
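The counterexample above rests on the radial action of $L$ computed in the preceding proofs, $u\mapsto u''+\frac{N-1+c}{r}u'$. As a quick symbolic sanity check (not part of the text), the following sketch verifies that $r^{-s}$ produces the indicial polynomial $s^2-(N-2+c)s$ for radial functions, and that, assuming $s_{1,2}$ with $n=0$ are its roots (so $s_1=0$ and $s_2=N-2+c$, an assumption consistent with the computations above but not stated here), the function $u=|x|^{-s_2}-|x|^{-s_1}$ is indeed annihilated by $L$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
N, c, s = sp.symbols('N c s')

def L_radial(u):
    # radial action of L = Delta + c x.grad/|x|^2 on u = u(|x|):
    # L u = u'' + (N-1+c) u'/r, as in the computations above
    return sp.diff(u, r, 2) + (N - 1 + c)/r*sp.diff(u, r)

# Indicial polynomial: L r^{-s} = r^{-s-2} (s^2 - (N-2+c) s)
indicial = sp.simplify(L_radial(r**(-s))/r**(-s - 2))
assert sp.simplify(indicial - (s**2 - (N - 2 + c)*s)) == 0

# Assumed roots for n = 0: s_1 = 0 and s_2 = N-2+c, so
# u = |x|^{-s_2} - |x|^{-s_1} solves Lu = 0
u = r**(-(N - 2 + c)) - 1
assert sp.simplify(L_radial(u)) == 0
```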
The Laplace-Beltrami operator $\Delta_0$ on $L^p_{J}(S^{N-1})$
--------------------------------------------------------------

We summarize in the next proposition some well known results about $\Delta_0$ referring, for example, to [@Grigoryan; @Mor; @SW] for further details. We recall that a spherical harmonic $P^n$ of order $n$ is the restriction to $\mathbb{S}^{N-1}$ of a homogeneous harmonic polynomial of degree $n$. We write $L^\infty (S^{N-1})$ for $C(S^{N-1})$. \[lemma Spherical Harmonics\] The Laplace-Beltrami operator $\Delta_0$ generates an analytic semigroup $(T_{S^{N-1}}(t))_{t \ge 0}$ in $L^p(S^{N-1})$ (with respect to the surface measure $d\sigma$) for every $1\leq p \le \infty$. If $1<p<\infty$, its domain $D_p(\Delta_0)$ coincides with $W^{2,p}(S^{N-1}, d\sigma)$. The spectrum of the operator $(\Delta_0,D_p(\Delta_0))$ is independent of $1 \le p \le \infty$ and consists of the eigenvalues $-\lambda_n:=-n(n+N-2)$, $n \in {\mathbb{N}}_0$. The eigenspace corresponding to $-\lambda_n$ consists of all spherical harmonics of degree $n$ and has dimension $a_n$ where $a_0=1$, $a_1=N$ and for $n \ge 2$ $$a_n= \binom{N+n-1}{n}-\binom{N+n-3}{n-2}.$$ The linear span of the spherical harmonics coincides with the set of restrictions to $\mathbb{S}^{N-1}$ of all polynomials and it is dense in $C(\mathbb{S}^{N-1})$, hence in $L^p(\mathbb{S}^{N-1})$ for every $1\leq p<\infty$. The generation and spectral properties of the Laplace-Beltrami operator $\Delta_0$ are classical results about heat semigroups on compact manifolds. If $1<p<\infty$, $D_p(\Delta_0)=W^{2,p}(S^{N-1}, d\sigma)$ by elliptic regularity. The analyticity of the semigroup as well as the invariance of the spectrum follow, for example, from the Gaussian estimates of the heat kernel of $\Delta_0$ (see e.g. [@davies Theorem 5.2.1, Theorem 5.5.1]) using [@ou Corollary 7.5, Theorem 7.10].
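The eigenvalues $-n(n+N-2)$ and the dimension formula for $a_n$ can be checked against the classical low-dimensional cases; a small numerical sketch (purely illustrative, not part of the text):

```python
from math import comb

def a(n, N):
    """Dimension a_n of the space of spherical harmonics of degree n on
    S^{N-1} (the multiplicity of the eigenvalue -n(n+N-2) of Delta_0)."""
    if n == 0:
        return 1
    if n == 1:
        return N
    return comb(N + n - 1, n) - comb(N + n - 3, n - 2)

def eigenvalue(n, N):
    # -lambda_n with lambda_n = n(n+N-2)
    return -n*(n + N - 2)

# On S^2 (N = 3): multiplicity 2n+1 and eigenvalue -n(n+1), as is classical
assert all(a(n, 3) == 2*n + 1 for n in range(12))
assert eigenvalue(5, 3) == -30
# On S^1 (N = 2): every mode n >= 1 has multiplicity 2 (cos and sin)
assert all(a(n, 2) == 2 for n in range(1, 12))
```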
The main properties of spherical harmonics can be found in [@Mor Chapter II] and [@SW Chapter IV.2].\ According to the latter proposition let $$\sigma(S^{N-1})=\{\lambda_n=n(n+N-2):\ n\in{\mathbb{N}}_0\}$$ be the spectrum of $(-\Delta_0,D_p(\Delta_0))$ and let us write $\{P_j\}_{j\in{\mathbb{N}}_0}$ and $\{\lambda(P_j)\}_{j\in{\mathbb{N}}_0}$ to denote the sequences of the ($L^2$-orthonormal) eigenfunctions and of their respective eigenvalues, repeated according to their multiplicity. With this notation $P_j$ is a spherical harmonic whose eigenvalue is $\lambda(P_j)=n(n+N-2)$ with $n=\mbox{deg}(P_j)$. We extend the analysis of $\Delta_0$ to more general subspaces defined by spherical harmonics. \[Fjp\] For a given $J \subseteq {\mathbb{N}}_0$ we define $$L^p_{J}(S^{N-1}) =\overline{span\{P_j: j \in J\}},$$ where the closure is taken in $L^p(S^{N-1})$, $1\leq p \le \infty$. It is clear that $L^p_{J}(S^{N-1})$ is $\Delta_0$-invariant and that the domain of ${\Delta_0}_{|L^p_{J}(S^{N-1})}$ is given by $D_{p}(\Delta_0)\cap L^p_{J}(S^{N-1})$. The following lemma is elementary and proved in [@rellich Lemma 5.8]. \[spectrum Delta0\] Let $1\leq p\leq \infty$ and $J \subseteq {\mathbb{N}}_0$. Then ${\Delta_0}_{|L^p_{J}(S^{N-1})}$ generates in $L^p_{J}(S^{N-1})$ the analytic semigroup $$\left(T_{S^{N-1}}(t)_{|L^p_{J}(S^{N-1})}\right)_{t \ge 0}.$$ Moreover ${span}\{P_j: j \in J\}$ is a core for $\Delta_0$ in $L^p_J(S^{N-1})$ and $$\sigma({-\Delta_0}_{|L^p_{J}(S^{N-1})})=\{\lambda(P_j):\ j\in J \}$$ where $\lambda (P_j)$ is the eigenvalue whose eigenfunction is $P_j$. Note that, since each eigenvalue can have more than one eigenfunction, different sets of indices lead to different spaces but not necessarily to different spectra. The asymptotic behaviour of $\left(T_{S^{N-1}}(t)_{|L^p_{J}(S^{N-1})}\right)_{t \ge 0}$ in $L^p_{J}(S^{N-1})$ is determined by the first eigenvalue. However we need a better estimate near $t=0$, which relies on a Poincaré-type inequality.
([@met-soba-spi3 Lemma 2.7])\[Poincare-Lpn\] Let $1 < p < \infty$ and $J \subseteq {\mathbb{N}}_0$ such that $n:=\mbox{min}J\geq 1$. Let $\widetilde{C}_{p,n}$ be the best constant for which $$\int_{S^{N-1}}|v|^p\,d\omega \leq \widetilde{C}_{p,n}\int_{S^{N-1}}|\nabla_{\!\tau} v|^2|v|^{p-2}\,d\omega, \quad v\in C^\infty(S^{N-1})\cap L^p_{J}(S^{N-1}).$$ Then the constants $\widetilde{C}_{p,n}$ are finite, decreasing in $n$ and satisfy $\widetilde{C}_{p,n}\to 0$ as $n\to \infty$. In the next Proposition we assume that the numbers $\lambda(P_j)$ are listed in increasing order. \[asymptotic\] Let $J \subseteq {\mathbb{N}}_0$ and let $n$ be the smallest integer in $J$. There exists $M$ (depending on $n$ but not on $p$) such that for every $1 \le p \le \infty$ $$\label{expdecay} \|T_{S^{N-1}}(t)_{|L^p_{J}(S^{N-1})}\|_p \le M^{\big|1-\frac{2}{p}\big|}e^{-\lambda(P_n)\, t}.$$ Furthermore $M=1$ when $n=0$. If $1<p<\infty$ then $$\label{expdecay 2} \|T_{S^{N-1}}(t)_{|L^p_{J}(S^{N-1})}\|_p \le e^{-\frac{p-1}{\tilde C_{p,n}}\, t},$$ where $\tilde C_{p,n}$ is the best constant of Lemma \[Poincare-Lpn\]. The first statement is proved in [@rellich Lemma 5.9]. To prove the second it is enough to show the dissipativity of $\Delta_0+\frac{p-1}{\tilde C_{p,n}}$ on $L^p_{J}(S^{N-1})$ or equivalently that, for every $u\in C^\infty(S^{N-1})\cap L^p_{J}(S^{N-1})$, $$-\int_{S^{N-1}}\Delta_0u |u|^{p-2}u d\sigma\geq \frac{p-1}{\tilde C_{p,n}} \int_{S^{N-1}}|u|^p\; d\sigma.$$ Consider first the case $2\leq p<\infty$. Setting $u^\star=u|u|^{p-2}$ we multiply $\Delta_0 u$ by $u^\star$ and integrate over $S^{N-1}$.
Integrating by parts and using Lemma \[Poincare-Lpn\] we get $$\begin{aligned} -\int_{S^{N-1}}\Delta_0 u\, u^\star\; d\sigma&=(p-1)\int_{S^{N-1}}|u|^{p-2}|\nabla_\tau u|^2\;d\sigma\geq \frac{p-1}{\tilde C_{p,n}}\int_{S^{N-1}}|u|^p\; d\sigma.\end{aligned}$$ For $1<p<2$ it is sufficient to replace $u^\star$ by $u(u^2+\delta)^{\frac{p}{2}-1}$, $\delta>0$, and then let $\delta$ tend to $0$ to obtain the same inequality.\

The operator $\Gamma$ on $L^p(I,r^{N-1}\,dr)$ {#section Gamma}
---------------------------------------------

In this section we summarize the main results about the generation and spectral properties of the operator $$\Gamma=r^2D_{rr}+(N-1+c)r D_r,$$ acting, for $1\leq p<\infty$, on $L^p(I,r^{N-1}dr)$, where $I=]0,\infty[$ or $I=]0,1[$. When $p=\infty$, $L^\infty(I,r^{N-1}\,dr)$ stands for the space $C_0^0(I)$ of all continuous functions on $I$ vanishing at both endpoints.\ For $1\leq p\leq \infty$ we define $\Gamma_p$ as the operator $\Gamma$ endowed with the domain $D(\Gamma_p)$ defined, when $I=]0,\infty[$, as $$\begin{aligned} \label{dp gamma 0,infty} D(\Gamma_p)=\{u\in L^p(]0,\infty[,r^{N-1}\,dr),\ r\frac{\partial u}{\partial r},\ r^2\frac{\partial^2 u}{\partial r^2}\in L^p(]0,\infty[,r^{N-1}\,dr)\}\end{aligned}$$ and for $I=]0,1[$ $$\begin{aligned} \label{dp gamma 0,1} D(\Gamma_p)=\{u\in L^p(]0,1[,r^{N-1}\,dr),\ r\frac{\partial u}{\partial r},\ r^2\frac{\partial^2 u}{\partial r^2}\in L^p(]0,1[,r^{N-1}\,dr),\ u(1)=0\}.\end{aligned}$$ In the next Theorem we show that $\Gamma_p$ always generates an analytic semigroup in $L^p(I,r^{N-1}\,dr)$; the spectral analysis is more subtle, since the spectrum and the approximate point spectrum of $\Gamma_p$ change drastically according to whether $I$ is bounded or not and to the sign of $N\left(1-\frac{2}{p}\right)-2+c$.
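The spectral analysis just announced rests on conjugating $\Gamma$, via the isometry $(Sv)(r)=r^{-N/p}v(\log r)$ used in the proof below, into the constant-coefficient operator $v''+\bigl(N(1-\frac2p)-2+c\bigr)v'-\omega_p v$. This conjugation identity can be checked symbolically; the sketch below (not part of the text) verifies it on the exponentials $v(t)=e^{at}$, i.e. $Sv=r^{a-N/p}$, which suffices since both sides are linear in $v,v',v''$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
p = sp.symbols('p', positive=True)
N, c, a = sp.symbols('N c a')

def Gamma(u):
    # Gamma = r^2 D_rr + (N-1+c) r D_r
    return r**2*sp.diff(u, r, 2) + (N - 1 + c)*r*sp.diff(u, r)

# zero-order and drift coefficients produced by the change of variables
omega_p = N/p**2*(p*(N - 2 + c) - N)
b = N*(1 - 2/p) - 2 + c

# Gamma(Sv) should equal S(v'' + b v' - omega_p v); on v(t) = exp(a t)
# both sides are r^{a - N/p} times (a^2 + b a - omega_p)
lhs = Gamma(r**(a - N/p))
rhs = r**(a - N/p)*(a**2 + b*a - omega_p)
assert sp.simplify(lhs - rhs) == 0
```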
Let us introduce some notation: for $1\leq p\leq\infty$ (limiting values are taken for $p=\infty$), let us set $$\label{spettrogamma} {\cal Q}_p:=\left\{\lambda\in {\mathbb{C}}\ \textrm{such that}\ {\rm Re}\lambda\leq -\frac{({\rm Im} \lambda)^2}{\left (N\left(1-\frac{2}{p}\right)-2+c\right)^2}-\omega_p\right\}$$ and $$\label{spettrogamma1} {\cal P}_p:=\left\{\lambda=-\xi^2+i\xi\left (N(1-\frac{2}{p})-2+c\right)-\omega_p,\, \xi\in{\mathbb{R}}\right\},$$ where $$\label{omegap} \omega_p:=\frac{N}{p^2}\left[p(N-2+c)-N\right].$$ ${\cal P}_p$ is a parabola having vertex $-\omega_p$, symmetric with respect to the $x$ axis whereas ${\cal Q}_p$ is the region enclosed inside ${\cal P}_p$. Obviously ${\cal P}_p$ coincides with the boundary of ${\cal Q}_p$ and, when $N\left(1-\frac{2}{p}\right)-2+c=0$, both reduce to the half line $(-\infty, -\omega_p]$. \[spec-rad\] Let $1\leq p\leq \infty$. Then the operator $\Gamma_p$ generates a strongly continuous analytic semigroup $(S(t))_{t \ge 0}$ in $L^p(I,r^{N-1}dr)$ which satisfies the estimate $$\|S(t)\|_p \le e^{-\omega_p t},\quad \text{ for }\ t \ge 0.$$ If $I=]0,\infty[$ we have $$\sigma(\Gamma_p)=A\sigma(\Gamma_p)={\cal P}_p .$$ If $I=]0,1[$, then $$\sigma(\Gamma_p)={\cal Q}_p.$$ Moreover - if $N\left(1-\frac{2}{p}\right)-2+c<0$, then $\sigma(\Gamma_p)=A\sigma(\Gamma_p)={\cal Q}_p$, $P\sigma(\Gamma_p) \supset\overset{\mathrm{o}}{\cal Q}_p$; - if $N\left(1-\frac{2}{p}\right)-2+c=0$, then $\sigma(\Gamma_p)=A\sigma(\Gamma_p)=(-\infty, -\omega_p]$; - if $N\left(1-\frac{2}{p}\right)-2+c>0$, then $ A\sigma(\Gamma_p)={\cal P}_p$, $\overset{\mathrm{o}}{\cal Q}_p=R\sigma(\Gamma_p)\setminus A\sigma(\Gamma_p)$. [Proof.]{} Assume first that $I=]0,1[$. 
Let $J=]-\infty,0[$ and consider the isometry $S$ defined, for $1\leq p<\infty$, by $$S:L^p(J, ds)\to L^p( ]0,1[,{r}^{N-1}\,d{r}), \quad (Su)({r})={r}^{-\frac{N}{p}}u(\log {r}),$$ and, for $p=\infty$, by $$S:C_0^0\left(J\right)\to C_0^0\left(]0,1[\right), \quad Su({r})=u(\log {r}).$$ It follows that $$S^{-1}\Gamma S u=u''+\left(N\left(1-\frac{2}{p}\right)-2+c\right)u'-\omega_p u.$$ By classical results, $S^{-1}\Gamma S$, endowed with the domain $D_p(S^{-1}\Gamma S)$ given by $$\begin{aligned} W^{2,p}(J)\cap W_0^{1,p}(J)\ ( p<\infty), \qquad \left\{u\in C_0^0(J)\cap C^2 \left(J\right):\ S^{-1}\Gamma Su\in C_0^0\left(J\right) \right\}\ ( p=\infty),\end{aligned}$$ generates a strongly continuous analytic semigroup in $L^p\left(J\right)$ whose norm is bounded by $e^{-\omega_p t}$. It is elementary to check that $$D(\Gamma_p)=\{Su:\ u\in D_p\left(S^{-1}\Gamma S\right)\}.$$ It follows that $\Gamma_p$ generates a strongly continuous and analytic semigroup $(S(t))_{t \ge 0}$ in the space $L^p(]0,1[, {r}^{N-1}d{r})$ which satisfies $\|S(t)\|_p \le e^{-\omega_p t}$. The case $I=]0,\infty[$ is similar and proved in [@rellich Proposition 5.1] by considering $S$ with $J={\mathbb{R}}$.\ Concerning the second part of the statement, we observe that the spectra of $\Gamma_p$ and $S^{-1}\Gamma_p S$ coincide. When $I=]0,\infty[$, the operator $S^{-1}\Gamma_p S$ is uniformly elliptic in $L^p({\mathbb{R}}, ds)$, hence its spectrum is independent of $p$ and coincides with the spectrum in $L^2({\mathbb{R}}, ds)$, which is ${\cal P}_p$, as one sees using the Fourier transform. Furthermore, since ${\cal P}_p$ coincides with its boundary, it follows, from Proposition \[boundary-spectrum\], that $\sigma(\Gamma_p)=A\sigma(\Gamma_p)={\cal P}_p$. When $I=]0,1[$ we use Lemma \[ODE2\] to see that the spectrum of $S^{-1}\Gamma_p S$, hence of $\Gamma_p$, coincides with the region ${\cal Q}_p$.
Moreover, for the same reason, the approximate point spectrum $A\sigma(\Gamma_p)$ coincides with ${\cal Q}_p$ if $N\left(1-\frac{2}{p}\right)-2+c<0$ (and in this case $P\sigma(\Gamma_p)\supset\overset{\mathrm{o}}{\cal Q}_p$), with the boundary ${\cal P}_p$ if $N\left(1-\frac{2}{p}\right)-2+c>0$ (and in this case $\overset{\mathrm{o}}{\cal Q}_p=R\sigma(\Gamma_p)\setminus A\sigma(\Gamma_p)$) and with the half line $(-\infty, -\omega_p]$ when $N\left(1-\frac{2}{p}\right)-2+c=0$.\ \[Remark Equality domain N=1,p=inf\] Since the domain $D_p(S^{-1}\Gamma S)$ coincides with its maximal one $$\{u \in L^p(J, ds): S^{-1}\Gamma Su \in L^p(J,ds)\},$$ as easily follows from the classical interpolation inequality $ \|u'\|_p\leq \epsilon \|u''\|_p+\frac{C}{\epsilon}\|u\|_p, $ we deduce that $$D(\Gamma_p)=\{ u \in L^p(I, r^{N-1}\, dr): \Gamma u \in L^p(I, r^{N-1}\, dr)\}.$$

The operator $A=|x|^2\Delta+cx\cdot \nabla$ on $L^p_J({\mathbb{R}}^N)$ and $L^p_J(B)$
-------------------------------------------------------------------------------------

In this section we use tensor arguments to combine the previous results on $\Gamma$ and $\Delta_0$ and deduce generation and spectral properties of $$A=|x|^2\Delta+cx\cdot \nabla$$ on $L^p(\Omega)$ when $\Omega={\mathbb{R}}^N$ and $\Omega=B$. We extend the analysis also to more general subspaces defined by tensor products of radial functions and spherical harmonics. If $X,Y$ are function spaces over $G_1, G_2$ we denote by $X\otimes Y$ the algebraic tensor product of $X,Y$, that is the set of all functions $u(x,y)=\sum_{i=1}^n f_i(x)g_i(y)$ where $f_i \in X, g_i \in Y$ and $x \in G_1, y\in G_2$.
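The tensor-product structure exploited in this section, with $A$ acting as $\Gamma$ on the radial factor and as $\Delta_0$ on the spherical one, has an elementary finite-dimensional analogue: the spectrum of a Kronecker sum $T\otimes I+I\otimes S$ is $\sigma(T)+\sigma(S)$, mirroring the identity $\sigma(A_{p,J})=\sigma(\Gamma_p)+\sigma({\Delta_0}_{|L^p_{J}(S^{N-1})})$ proved later. A small numerical illustration (matrix sizes and entries are arbitrary stand-ins, not discretizations of the actual operators):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))                   # stand-in for the radial factor
S = np.diag([-n*(n + 1.0) for n in range(3)])     # Laplace-Beltrami eigenvalues, N = 3

# Kronecker sum: A acts as T on one factor and S on the other
A = np.kron(T, np.eye(3)) + np.kron(np.eye(4), S)
eigA = np.linalg.eigvals(A)
expected = [t + s for t in np.linalg.eigvals(T) for s in np.diag(S)]

# every sum of eigenvalues occurs in the spectrum of the Kronecker sum
assert len(eigA) == len(expected)
assert all(np.min(np.abs(eigA - ev)) < 1e-8 for ev in expected)
```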
If $T,S$ are linear operators on $X,Y$ we denote by $T\otimes S$ the operator on $X\otimes Y$ defined by $$T\otimes S \left (\sum_{i=1}^n f_i(x)g_i(y)\right)=\sum_{i=1}^n T f_i(x)Sg_i(y).$$ Let us fix a complete orthonormal system of spherical harmonics $\{P_j\}_{j\in{\mathbb{N}}_0}$ of $L^2(S^{N-1})$ and let $\{\lambda(P_j)\}_{j\in{\mathbb{N}}_0}$ be the sequence of the corresponding eigenvalues repeated according to their multiplicity. With this notation $-\Delta_0 P_j=\lambda(P_j)P_j$ and $\lambda(P_j)=n(n+N-2)$, where $n=\mbox{deg}(P_j)$.\ Unless otherwise specified $\Omega$ denotes ${\mathbb{R}}^N$ or $B$, and $I$ stands for $]0,\infty[$, $]0,1[$, respectively. As usual we write $L^\infty (\Omega)$ for $C_0^0(\Omega)$. \[Fjp1\] Let $1\leq p\leq \infty$ and let $J \subseteq {\mathbb{N}}_0$. We define $$L^p_{J}(\Omega) =\overline{L^p\left(I, r^{N-1}dr\right)\otimes L^p_J(S^{N-1})}=\overline{L^p\left(I, r^{N-1}dr\right)\otimes \mbox{span}\{P_j: j \in J\}},$$ where the closure is taken in $L^p(\Omega)$. Fixing $n \in {\mathbb{N}}_0$ we write $L^p_{\ge n}(\Omega), L^p_n(\Omega), L^p_{<n}(\Omega)$ when $J$ identifies all spherical harmonics of order $\ge n$, $n$ and $< n$ respectively. The spaces $L^p_{>n}(\Omega), L^p_{\le n}(\Omega), L^p_{\neq n}(\Omega)$ are defined similarly. Note that $L^p_{J}(\Omega)=L^p(\Omega)$ if $J={\mathbb{N}}_0$. The next lemma clarifies the structure of the spaces $L^p_{J}(\Omega)$. \[projection\] Assume that the $L^2$ orthogonal projection $P: L^2(S^{N-1}) \to L^2_J(S^{N-1})$ extends to a bounded projection $P$ in $L^p(S^{N-1})$.
Then $$\label{complement} L^p(\Omega)= L^p_{J}(\Omega) \oplus L^p_{{\mathbb{N}}_0\setminus J}(\Omega)$$ and $$\label{caratterizzazione} L^p_{J}(\Omega)=\left \{u \in L^p(\Omega): \int_{S^{N-1}} u(r\, \omega)P_j(\omega) \, d\sigma (\omega)=0\ {\rm for} \ r\in I\ {\rm and}\ j \not \in J\right \}.$$ When $J$ is finite $$\label{Jfinito1} L^p_J(\Omega)=\Bigl \{ u=\sum_{j \in J}f_j(r)P_j(\omega): f_j \in L^p(I, r^{N-1}dr)\Bigr \}$$ and the projection $I\otimes P :L^p(\Omega) \to L^p_{J}(\Omega)$ is given by $$\label{Jfinito} (I\otimes P) u=\sum_{j \in J}T_j u (r)\, P_j(\omega),$$ where $$T_j u (r):=\int_{S^{N-1}} u(r\, \omega)P_j(\omega) \, d\sigma (\omega), \quad \forall\ u\in L^p(\Omega).$$ When $\Omega={\mathbb{R}}^N$ we refer to [@rellich Lemma 5.11]. The proof for $\Omega=B$ is identical.\ (i) The equality $$L^p_{J}(\Omega) = \left \{u \in L^p(\Omega): \int_{S^{N-1}} u(r\, \omega)P_j(\omega) \, d\sigma (\omega)=0\ {\rm for} \ r\in I\ {\rm and}\ j \not \in J\right \}$$ holds without assuming the boundedness of the projection $P$ (see [@rellich; @Disc Proposition 2.8]).\ (ii) $L^p_0(\Omega)$ consists of radial functions and $L^p(\Omega)=L^p_{\le n}(\Omega)\oplus L^p_{>n}(\Omega)$. The following result follows from well-known and elementary facts about tensor product semigroups, see [@nagel AI, Section 3.7]. A proof is provided in [@rellich Proposition 5.14] when $\Omega={\mathbb{R}}^N$; the case of the ball is similar. \[analyt\] For $1\le p \le \infty$, let $D(\Gamma_p)$ and $D({\Delta_0}_{|L^p_{J}(S^{N-1})})$ be the domains of $\Gamma_p$ and ${\Delta_0}_{|L^p_{J}(S^{N-1})}$ introduced in the previous subsection. Then the closure of the operator $$\left(A,\, D(\Gamma_p)\otimes D({\Delta_0}_{|L^p_{J}(S^{N-1})})\right)$$ generates a strongly continuous analytic semigroup $(T_{p,J}(t))_{t \ge 0}$ in $L^p_J(\Omega)$. Let $n$ be the smallest integer in $J$.
Then there exists $M$ (depending on $n$ but not on $p$) such that for every $1 \le p \le \infty$ $$\label{expdecay Omega} \|T_{p,J}(t)\|_p \le M^{\big|1-\frac{2}{p}\big|}e^{-(\omega_p+\lambda(P_n))\, t},$$ where $\omega_p$ is defined in (\[omegap\]) and $M$ is the constant in (\[expdecay\]), which satisfies $M=1$ when $n=0$. Moreover, if $1<p<\infty$, then $$\label{expdecay 2 Omega} \|T_{p,J}(t)\|_p \le e^{-\left(\omega_p+\frac{p-1}{\tilde C_{p,n}}\right)\, t},$$ where $\tilde C_{p,n}$ is the best constant of Lemma \[Poincare-Lpn\]. \[Def A\_pJ\] We denote by $A_{p,J}$ the closure of $(A,D(\Gamma_p) \otimes D({\Delta_0}_{|L^p_{J}(S^{N-1})}))$ in $L^p_{J}(B)$. When $J={\mathbb{N}}_0$ we write $A_{p}$ for $A_{p,J}$ and $T_{p}(t)$ for $T_{p,J}(t)$. The proof of the following corollary is immediate. \[restriction\] $T_{p,J} (t)$ is the restriction of $T_{p}(t)$ to $L^p_J(B)$ and its generator $A_{p,J}$ is the part of $A_{p}$ in $L^p_J(B)$. As in [@rellich Proposition 5.16], we prove that the smooth functions are a core for $A_{p,J}$. \[core Palla R\^N\] Let $1\leq p\leq\infty$. The set $$C_{c,0}^2\left(B\right):=\big\{u\in C^2_c\left(\bar B\right): u=0 \text{ on } \partial B\ \text{and on a neighborhood of } 0\big\}$$ is a core for $A_{p,J}$ when $\Omega=B$. When $\Omega={\mathbb{R}}^N$, $C^\infty _c\left({\mathbb{R}}^N\setminus\{0\}\right)$ is a core for $A_{p,J}$. [Proof.]{} Let us suppose that $\Omega=B$. Recalling the proof of Theorem \[spec-rad\], we observe that, since by Proposition \[Sobolev approximation 1,infty\] the set $$\big\{u\in C^2_c\left(]-\infty,0]\right):\ u(0)=0 \big\}$$ is dense in $D_p(S^{-1}\Gamma_p S)$, then $${\cal F}:=\big\{u\in C^2_c\left(]0,1]\right):\ u(1)=0\big\}$$ is dense in $D(\Gamma_p)$. Moreover $ \mbox{span}\{P_j: j \in J\}$ is dense in $D({\Delta_0}_{|L^p_{J}(S^{N-1})})$.
Since by construction $D(\Gamma_p)\otimes D({\Delta_0}_{|L^p_{J}(S^{N-1})})$ is a core for $A_{p,J}$, it follows that $$\mathcal{F}\otimes \mbox{span}\{P_j: j \in J\}$$ is dense in $D(A_{p,J})$. Observing that $$\mathcal{F}\otimes \mbox{span}\{P_j: j \in J\}\subseteq C_{c,0}^2\left(\Omega\right)$$ the claim follows. The proof for $\Omega={\mathbb{R}}^N$ is similar.\ In order to prove the main result of this section, namely $$\sigma(A_{p,J})=\sigma(\Gamma_p)+\sigma({\Delta_0}_{|L^p_{J}(S^{N-1})}),$$ we need two preliminary lemmas. The first provides some regularity properties of the projection defined in (\[Jfinito\]) and is proved in [@met-soba-spi3 Lemma 2.15] when $\Omega={\mathbb{R}}^N$. \[regularity of projections\] Let $J\subseteq {\mathbb{N}}_0$ and let $j_0\in J$. Let us consider the operator $T_{j_0} :L^p_J(\Omega) \to L^p(I,\ r^{N-1}dr)$ defined by $$T_{j_0} u (r):=\int_{S^{N-1}} u(r\, \omega)P_{j_0}(\omega) \, d\sigma (\omega), \quad \forall\ u\in L^p(\Omega)$$ and the projection $$I\otimes P_{j_0} :L^p_J(\Omega) \to L^p_{j_0}(\Omega)=L^p(I,\ r^{N-1}dr)\otimes P_{j_0}$$ given, for $u\in L^p_J(\Omega)$, $r\in I$, $\omega\in S^{N-1}$, by $$(I\otimes P_{j_0})\ u(r\omega)=\,T_{j_0} u (r) P_{j_0}(\omega).$$ Then $T_{j_0}$ and $I\otimes P_{j_0}$ are well-defined and bounded operators. Furthermore $T_{j_0}$ maps $D(A_{p,J})$ onto $D(\Gamma_p)$ and one has $$\begin{aligned} \label{A Tu} T_{j_0}Au=\Big(\Gamma-\lambda(P_{j_0})\Big)T_{j_0} u, \quad \forall u\in D(A_{p,J}).\end{aligned}$$ The next lemma relates the spectra of $\Gamma_p$ and $A_{p,J}$. \[heritability spectrum\] Let $1\leq p\leq \infty$, $J\subseteq{\mathbb{N}}_0$ and $j_0\in J$. Let $\Omega$ stand for ${\mathbb{R}}^N$ or $B$ and let $A_{p,J}$ be the operator defined in Definition \[Def A\_pJ\]. The following properties hold.
- If $\lambda\in P\sigma(\Gamma_p)$ then $\lambda-\lambda(P_{j_0})\in P\sigma(A_{p,J})$; - If $\lambda\in A\sigma(\Gamma_p)$ then $\lambda-\lambda(P_{j_0})\in A\sigma(A_{p,J})$; - If $\lambda\in R\sigma(\Gamma_p)$ then $\lambda-\lambda(P_{j_0})\in R\sigma(A_{p,J})$. Let $\lambda\in P\sigma(\Gamma_p)$ and let $0 \neq u\in D(\Gamma_p)$ be such that $\Gamma u=\lambda u$. Then it is immediate to see that the function $f=uP_{j_0}$ satisfies $f\in D(A_{p,J})$ and $Af=\left(\lambda-\lambda(P_{j_0})\right)f$. This proves (i).\ Assertion (ii) follows similarly by using Lemma \[Char Aspectrum\].\ Let us now consider (iii) and let $\lambda\in R\sigma(\Gamma_p)$. Recalling Definition \[defi R spectrum\] we have to show that $\mbox{rg}\left(\lambda-\lambda(P_{j_0})-A_{p,J}\right)$ is not dense in $L^p_J(\Omega)$. Since $\lambda\in R\sigma(\Gamma_p)$, $\mbox{rg}(\lambda-\Gamma_p)$ is not dense in $L^p(I,r^{N-1}dr)$ and therefore there exists a linear form $0\neq G$ in the dual space $\left(L^p(I,r^{N-1}dr)\right)'$ which vanishes over $\mbox{rg}(\lambda-\Gamma_p)$. Let us consider the operator $$\begin{aligned} T_{j_0}:L^p_J(\Omega)\to L^{p}(I,r^{N-1}dr),\quad u\mapsto T_{j_0}u(r)=\int_{S^{N-1}}u(r\omega)P_{j_0}(\omega)\,d\sigma(\omega).\end{aligned}$$ Using Lemma \[regularity of projections\] we see that $0\neq T=G\circ T_{j_0}$ belongs to the dual space $\left(L^p_J(\Omega)\right)'$ and satisfies, for $u\in D(A_{p,J})$, $$\begin{aligned} T\left(\lambda-\lambda(P_{j_0})-A\right)u=G\Big(T_{j_0}\left(\lambda-\lambda(P_{j_0})-A\right)u\Big)=G\Big(\left(\lambda-\Gamma_p \right)T_{j_0}u\Big)=0.\end{aligned}$$ This implies that $T$ vanishes over $\mbox{rg}(\lambda-\lambda(P_{j_0})-A_{p,J})$ and proves (iii).\ We can finally describe in detail the spectrum of $A_{p,J}$.
We are mainly interested in the computation of the complement of the approximate point spectrum, that is the set of all $\lambda$ such that the inequality $$\begin{aligned} \|u\|\leq C\|\lambda u-Au\|,\quad \forall u\in D(A_{p,J})\end{aligned}$$ holds, since it is equivalent to Rellich inequalities. Observe that the situation is more complicated in the case where $N\left(1-\frac{2}{p}\right)-2+c>0$, since residual spectra appear. We recall that ${\cal P}_p$ and ${\cal Q}_p$ are defined in (\[spettrogamma1\]) and (\[spettrogamma\]), respectively. \[Spectrum main\] Let $1\leq p\leq \infty$, $J\subseteq{\mathbb{N}}_0$ and $j_0:=\min\{j\in J\}$. The following properties hold - If $\Omega={\mathbb{R}}^N$, the spectrum of $A_{p,J}$ in $L^p_J({\mathbb{R}}^N)$ is given by $$\sigma(A_{p,J})=A\sigma(A_{p,J})=\bigcup\limits_{j\in J}({\cal P}_p-\lambda(P_j))$$ and reduces to $]-\infty,-\omega_p-\lambda(P_{j_0})]$ when $N\left(1-\frac{2}{p}\right)-2+c=0$. - If $\Omega=B$, the spectrum of $A_{p,J}$ in $L^p_J(B)$ is given by $$\sigma(A_{p,J})={\cal Q}_p-\lambda (P_{j_0})$$ and reduces to $]-\infty,-\omega_p-\lambda(P_{j_0})]$ when $N\left(1-\frac{2}{p}\right)-2+c=0$. In particular we have - If $N\left(1-\frac{2}{p}\right)-2+c<0$, then $$A\sigma(A_{p,J})={\cal Q}_p-\lambda (P_{j_0}),\quad P\sigma(A_{p,J}) \supset\overset{\mathrm{o}}{\cal Q}_p-\lambda(P_{j_0}).$$ - If $N\left(1-\frac{2}{p}\right)-2+c=0$, then $$A\sigma(A_{p,J})=(-\infty, -\omega_p-\lambda (P_{j_0})].$$ - If $N\left(1-\frac{2}{p}\right)-2+c>0$, then $$\begin{aligned} A\sigma(A_{p,J})&=\bigcup\limits_{j\in J}({\cal P}_p-\lambda (P_{j}));\\[1ex] R\sigma(A_{p,J})\setminus A\sigma(A_{p,J})&=\left (\overset{\mathrm{o}}{\cal Q}_p-\lambda(P_{j_0})\right )\setminus \bigcup\limits_{j\in J}({\cal P}_p-\lambda (P_{j})).\end{aligned}$$ [Proof.]{} We give a proof only when $\Omega=B$, since the case $\Omega={\mathbb{R}}^N$ is similar and proved in [@rellich Theorem 5.17].
Let us prove first the inclusion $$\begin{aligned} \sigma(A_{p,J})\subseteq \sigma(\Gamma_p)+\sigma({\Delta_0}_{|L^p_{J}(S^{N-1})}) ={\cal Q}_p-\lambda (P_{j_0}).\end{aligned}$$ Let $\lambda \not \in {\cal Q}_p-\lambda (P_{j_0})$ and fix $n \in {\mathbb{N}}_0$ such that $$\label{boundgamma} -\omega_p-\lambda(P_k) < {\rm Re}\, \lambda \quad {\rm for\ every\ } k > n.$$ According to Lemma \[projection\] we write $L^p_J(B)= L^p_{J_n}(B) \oplus L^p_{J\setminus J_n}(B)$, where $J_n=J\cap \{0,1,\dots ,n\}$ (note that if $J_n=\emptyset$ then $L^p_{J_n}(B)=\{0\}$ and $L^p_J(B)\subseteq L^p_{> n}(B)$). Since both $L^p_{J_n}(B)$ and $L^p_{J\setminus J_n}(B)$ are $A_{p,J}$ invariant, $\lambda \in \rho(A_{p,J})$ if and only if $\lambda \in \rho(A_{p,J_n})$ and $\lambda \in \rho(A_{p,J \setminus J_n})$. The second assertion follows immediately from (\[expdecay Omega\]) with $J\setminus J_n$ instead of $J$, since ${\rm Re}\, \lambda$ is greater than the growth bound of $(T_{p, J \setminus J_n}(t))_{t \ge 0}$, by (\[boundgamma\]). Concerning the first assertion, let us suppose that $J_n\neq\emptyset$ and, without loss of generality, let us assume $J_n=\{0,1,\dots,n\}$. We note that $$L^p_{J_n}(B) =\oplus_{i=0}^n L^p_{i}(B) =\oplus_{i=0}^n L^p\left((0,1),r^{N-1}dr\right)\otimes P_i$$ and that each $ L^p_{i}(B) $ is $A_{p,J}$ invariant. Moreover, $\lambda-A_{p,J}$ coincides with $\left (\lambda+\lambda (P_i)-\Gamma_p\right ) \otimes I$ on $ L^p_{i}(B) $, hence it is invertible on it, since $\lambda+\lambda(P_i) \not \in {\cal Q}_p=\sigma(\Gamma_p)$ by assumption. This shows that $\lambda \in \rho (A_{p,J})$, hence $$\begin{aligned} \label{Spectrum palla eq 1} \sigma(A_{p,J})\subseteq {\cal Q}_p-\lambda (P_{j_0}).\end{aligned}$$ Let us prove the opposite inclusion. Using the description of the spectrum of $\Gamma_p$ proved in Theorem \[spec-rad\] and Lemma \[heritability spectrum\], we get immediately the reverse inclusion and (i) and (ii).
In the case $N\left(1-\frac{2}{p}\right)-2+c>0$, Lemma \[heritability spectrum\] only shows that $$\begin{aligned} A\sigma(A_{p,J})&\supseteq\bigcup\limits_{j\in J}({\cal P}_p-\lambda (P_{j}));\\[1ex] R\sigma(A_{p,J})&\supseteq \left (\overset{\mathrm{o}}{\cal Q}_p-\lambda(P_{j_0})\right )\setminus \bigcup\limits_{j\in J}({\cal P}_p-\lambda (P_{j})).\end{aligned}$$ To end the proof we need to show that, if $\lambda\in \left (\overset{\mathrm{o}}{\cal Q}_p-\lambda(P_{j_0})\right )\setminus \bigcup\limits_{j\in J}({\cal P}_p-\lambda (P_{j}))$, then $\lambda\notin A\sigma(A_{p,J})$. Recalling Proposition \[Rellich-spectrum\] this is equivalent to the validity, for some $C>0$, of the inequality $$\begin{aligned} \label{Spectrum palla eq 2} \|\lambda v-Av\|_p \geq C \|v\|_p,\quad \forall v\in D(A_{p,J}). \end{aligned}$$ Let us fix $\lambda\in \left (\overset{\mathrm{o}}{\cal Q}_p-\lambda(P_{j_0})\right )\setminus \bigcup\limits_{j\in {\mathbb{N}}_0}({\cal P}_p-\lambda (P_{j}))$ and let $\overline{n}\in {\mathbb{N}}_0$ be sufficiently large such that $\lambda\notin {\cal Q}_p-\lambda(P_{\overline{n}})$. Then, by (\[Spectrum palla eq 1\]), $\lambda$ belongs to the resolvent set of the operator $A_{p,>\overline{n}}$ in $L^p_{>\overline{n}}(B)$. It follows that (\[Spectrum palla eq 2\]) is true in $L^p_{>\overline{n}}(B)$.\ Since, by (\[complement\]), $L^p(B)= L^p_{\leq \overline{n}}(B) \oplus L^p_{>\overline{n}}(B)$, it remains to prove (\[Spectrum palla eq 2\]) for any $v\in D(A_{p,J})\cap L^p_{\leq \overline{n}}(B)$. Recalling (\[Jfinito1\]) and Lemma \[regularity of projections\], one has $$\begin{aligned} v(\rho\omega)=\sum_{i=1}^{\overline{n}}c_i(\rho)P_i(\omega), \end{aligned}$$ for some $c_i\in D(\Gamma_p)$.
Then $$\begin{aligned} \|\lambda v-Av\|_p &=\|\sum_{i=1}^{\overline{n}}P_i\left(\lambda+\lambda(P_i)-\Gamma\right)c_i\|_p\geq C \sum_{i=1}^{\overline{n}}\|P_i\left(\lambda+\lambda(P_i)-\Gamma\right)c_i\|_p\\[1ex] &=C \sum_{i=1}^{\overline{n}}\|\left(\lambda+\lambda(P_i)-\Gamma\right)c_i\|_{L^p\left((0,1), r^{N-1}dr\right)},\end{aligned}$$ where in the last equality we have used spherical coordinates to evaluate the integrals.\ By the assumption on $\lambda$ and recalling (iii) in Theorem \[spec-rad\], one has $\lambda+\lambda(P_i)\notin {\cal P}_p=A\sigma(\Gamma_p)$, which implies, for a possibly different constant $C>0$, $$\begin{aligned} \nonumber \|\lambda v-Av\|_p &\geq C \sum_{i=1}^{\overline{n}}\|\left(\lambda+\lambda(P_i)-\Gamma\right)c_i\|_{L^p\left((0,1), r^{N-1}dr\right)}\\[1ex] &\geq C \sum_{i=1}^{\overline{n}}\|c_i\|_{L^p\left((0,1), r^{N-1}dr\right)}\geq C \|v\|_p.\end{aligned}$$ This proves (\[Spectrum palla eq 2\]) in the remaining case.\ [ The inclusion $$\begin{aligned} \sigma(A_{p,J})\subseteq \sigma(\Gamma_p)+\sigma({\Delta_0}_{|L^p_{J}(S^{N-1})})= {\cal Q}_p-\lambda (P_{j_0}).\end{aligned}$$ follows also from the more general result [@arendt1 Theorem 7.3], since the semigroups generated by $\Gamma$ and ${\Delta_0}_{|L^p_{J}(S^{N-1})}$ are analytic and commute. ]{} \[a-priori\] Let $\Omega$ be equal to ${\mathbb{R}}^N$ or $B$ and assume that $\lambda+\omega_p>0$. Then the best constant for which the inequality $$\label{estA} \|u\|_p\leq C\|\lambda u-A u\|_p,\quad \forall u\in D(A_p)$$ holds is given by $$\begin{aligned} C=\frac{1}{\lambda+\omega_p}.\end{aligned}$$ If $\lambda+\omega_p>0$, then $\lambda\in\rho(A_p)$, by the preceding theorem, and then the optimal constant in (\[estA\]) is $ \|R(\lambda,A_p)\|_p$.
Since the norm of the resolvent is bounded from below by the inverse of the distance of $\lambda$ from the spectrum, we have $$\begin{aligned} \|R(\lambda,A_p)\|_p\geq \frac{1}{\mbox{dist}(\lambda,\sigma(A_p))}=\frac{1}{\mbox{dist}(\lambda,{\cal{P}}_p)}=\frac{1}{\lambda+\omega_p}.\end{aligned}$$ Using the contractivity estimates and writing the resolvent as the Laplace transform of the semigroup we see that also the reverse inequality $$\begin{aligned} \|R(\lambda,A_p)\|_p\leq \frac{1}{\lambda+\omega_p}\end{aligned}$$ holds.

The operator $A=|x|^2\Delta+cx\cdot \nabla$ on $L^p(\Omega)$
------------------------------------------------------------

In this section we complete the study of the operator $A$ in ${\mathbb{R}}^N$ and $B$ by providing a complete description of the domain. Then we use the results in the whole space to extend them to bounded sets containing the origin. In particular we prove that the domain of the operator coincides with the maximal one, see Proposition \[L1\]. This allows us to state the precise class of functions where Rellich inequalities hold. Note that $A$ is singular both at $0$ and at $\infty$. Let $\beta\in(0,1]$. In what follows we assume $\Omega$ to be ${\mathbb{R}}^N$ or a bounded open connected subset of ${\mathbb{R}}^N$ whose boundary $\partial \Omega$ is $C^{2,\beta}$ and such that $0\notin\partial\Omega$. For any $p\in ]1,\infty[$ we define $A_p$ by $A_pu=Au$ on $$\begin{aligned} \label{dp} D_p(\Omega)&=\left\{u\in W^{2,p}(\Omega\setminus B_\epsilon)\cap L^p(\Omega)\ \forall \epsilon>0\,:\,u=0 \text{ on } \partial \Omega,\ |x|\nabla u,\ |x|^2D^2 u\in L^p(\Omega)\right\};\end{aligned}$$ for $p=1$ we define $\left(A_1,D_1(\Omega)\right)$ through $$\begin{aligned} \label{d1} D_1(\Omega)&=\left\{u\in L^1(\Omega):\,u=0 \text{ on } \partial \Omega,\ |x|\nabla u,\ |x|^2\Delta u\in L^1(\Omega)\right\}.\end{aligned}$$ When $\Omega={\mathbb{R}}^N$ and, correspondingly, $\partial\Omega=\emptyset$, the requirement “$u=0$ on $\partial \Omega$” must be disregarded.
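The sharp constant in Corollary \[a-priori\] reduces to the identity $\mbox{dist}(\lambda,{\cal P}_p)=\lambda+\omega_p$ for real $\lambda$ with $\lambda+\omega_p>0$: the nearest point of the parabola is its vertex $-\omega_p$. This can be confirmed by a brute-force numerical check (the sample values of $p,N,c$ below are arbitrary; this is only an illustration, not part of the text):

```python
import numpy as np

def omega(p, N, c):
    # omega_p = N/p^2 (p(N-2+c) - N), the vertex parameter of P_p
    return N/p**2*(p*(N - 2 + c) - N)

def dist_to_Pp(lam, p, N, c):
    # brute-force distance from lam to the parabola
    # P_p = { -xi^2 + i xi (N(1-2/p)-2+c) - omega_p : xi real }
    xi = np.linspace(-10.0, 10.0, 200001)
    b = N*(1 - 2.0/p) - 2 + c
    return np.min(np.abs(lam - (-xi**2 - omega(p, N, c) + 1j*xi*b)))

# For real lambda with lambda + omega_p > 0 the minimum is attained at xi = 0,
# so dist(lambda, P_p) = lambda + omega_p and C = 1/(lambda + omega_p) is sharp
p, N, c = 2, 3, 0.0          # here omega_2 = -3/4
for lam in (0.8, 1.5, 3.0):
    assert abs(dist_to_Pp(lam, p, N, c) - (lam + omega(p, N, c))) < 1e-9
```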
When $\Omega$ is bounded the Dirichlet boundary condition $u(x)=0$ for $x\in\partial\Omega$ makes sense in the sense of traces since $u$ has first derivatives in $L^p$ in a neighbourhood of the boundary $\partial \Omega$. The case $0\notin\Omega$ is classical, since the weight $|x|$ is bounded from above and away from zero, and, for $1<p<\infty$, $D_p(\Omega)$ becomes $W^{2,p}(\Omega)\cap W_0^{1,p}(\Omega)$.\ For $p=\infty$, we also consider the operator $A_\infty$ endowed with the domain $$\begin{aligned} \label{def A Omega infty} D_\infty(\Omega)&=\left\{u\in C_0^0(\overline{\Omega}):\ Au\in C_0^0(\overline{\Omega}),\ |x|\nabla u,\ |x|^2\Delta u\in C^0(\overline{\Omega})\right\},\end{aligned}$$ where $C^0(\overline{\Omega})$ denotes the space of bounded and continuous functions defined in $\overline{\Omega}$ and vanishing at the origin, if $0\in\Omega$; $C_0^0(\overline{\Omega})$ is its subspace consisting of functions vanishing also at $\infty$ when $\Omega={\mathbb{R}}^N$ and at the boundary $\partial\Omega$, otherwise. When $\Omega$ is bounded we use Proposition \[Partition unity\] to fix $\delta>0$ such that the subsets $$\begin{aligned} K_{\delta}:=\left\{x\in{\mathbb{R}}^N:\ \mbox{dist}(x,\partial\Omega)<\delta\right\},\quad \Omega_{\delta}:=K_{\delta}\cap\Omega\end{aligned}$$ have $C^{2,\beta}$ boundary.
Furthermore we can write $\overline\Omega=\overline\Omega_\delta\cup\Omega_0$ where $\Omega_0$ is an open subset $\Omega_0\subset\subset\Omega$ and we fix a partition of unity $\{\eta_\delta^2,\eta_0^2\}$ such that $$\begin{aligned} \nonumber (i)&\quad \eta_\delta\in C_c^\infty(K_{\delta}), \quad 0\leq\eta_\delta\leq 1,\quad \eta_\delta=1 \;\text{ in }\; \overline\Omega_{\frac \delta 2};\\[1ex]\label{Partiton unity eq} (ii)&\quad\eta_0\in C_c^\infty(\Omega_0), \quad 0\leq\eta_0\leq 1;\\[1ex]\nonumber (iii)&\quad \eta_\delta^2+\eta_0^2=1\hspace{5ex} \text{ in }\;\overline\Omega.\end{aligned}$$ In order to identify a core for $A_p$ we define $$\begin{aligned} C_{c,0}^2\left(\Omega\right):&=\big\{u\in C^2_c\left(\bar\Omega\setminus\{0\}\right):\ u=0 \text{ on } \partial \Omega\big\}\\ &=\big\{u\in C^2_c\left(\bar\Omega\right):\ u=0 \text{ on } \partial \Omega\ \text{and in a neighborhood of } 0\big\}.\end{aligned}$$ \[coreOmega\] The space $C_{c,0}^2\left(\Omega\right)$ is dense in $D_p(\Omega)$, endowed with the norm $$\begin{aligned} \|u\|_{D_p({\Omega})}&=\|u\|_p+\||x|\nabla u\|_p+\||x|^2D^2 u\|_p,\quad (1<p<\infty);\\[1.5ex] \|u\|_{D_p({\Omega})}&=\|u\|_p+\||x|\nabla u\|_p+\||x|^2\Delta u\|_p,\quad\hspace{2ex} (p=1,\infty).\end{aligned}$$ When $\Omega={\mathbb{R}}^N$, $C^\infty _c\left({\mathbb{R}}^N\setminus\{0\}\right)$ is dense in $D_p(\Omega)$. [Proof.]{} Let us consider, preliminarily, $\Omega={\mathbb{R}}^N$.\ Let $u\in D_p({\mathbb{R}}^N)$; we approximate $u$ with functions in $D_p({\mathbb{R}}^N)$ having compact support in ${\mathbb{R}}^N\setminus\{0\}$. Let $$\Omega_n=\left\{x\in{\mathbb{R}}^N:\ |x|\geq \frac{1}{n}\right\},\quad \xi_n=\chi_{\Omega_{\frac n 2}}\ast\phi_\frac{1}{n}$$ where $\phi$ is a classical mollifier supported in $B_1$, with $\int_{{\mathbb{R}}^N}\phi=1$ and $\phi_{\frac{1}{n}}(x)=n^N\phi\left(nx\right)$.
It is easy to check that $\xi_n(x)=1$ for $x\in\Omega_{\frac n 3}$, that $\xi_n$ is supported in $\Omega_n\subset{\mathbb{R}}^N\setminus\{0\}$ and that $0 \le \xi_n \le 1$, $|\nabla \xi_n|\leq Cn$, $|D^2\xi_n|\leq Cn^2$. Consider also a smooth function $\eta$ such that $\chi_{B_1}\leq\eta\leq \chi_{B_2}$ and, for every $n\in{\mathbb{N}}$, define $\eta_n(x)=\eta\left(\frac{x}{n}\right)$. Set $u_n=\xi_n\eta_n u$. It is immediate to check, using Lebesgue’s Theorem, that $u_n$ tends to $u$ in $L^p({\mathbb{R}}^N)$. Concerning the gradient term, we have $$\begin{aligned} \||x|(\nabla(\xi_n\eta_n u)-\nabla u)\|_p^p\leq&\, C\int_{{\mathbb{R}}^N}|x|^p|\xi_n\eta_n-1|^p|\nabla u|^p\,dx\\[1ex] &+ C\int_{{\mathbb{R}}^N}|x|^p|\nabla\xi_n|^p|\eta_n|^p|u|^p\,dx+ C\int_{{\mathbb{R}}^N}|x|^p|\xi_n|^p|\nabla\eta_n|^p |u|^p\,dx\\[1ex] \leq&\, C\int_{{\mathbb{R}}^N}|x|^p|\xi_n\eta_n-1|^p|\nabla u|^p\,dx\\[1ex] &+C n^p \int_{|x|\leq\frac 3 n}|x|^p |u|^p\,dx+ C n^{-p}\int_{\{n \le |x| \le 2n\}}|x|^p|u|^p\,dx.\end{aligned}$$ The last inequality implies $$\begin{aligned} \||x|(\nabla(\xi_n\eta_n u)-\nabla u)\|_p^p\leq&\, C\int_{{\mathbb{R}}^N}|x|^p|\xi_n\eta_n-1|^p|\nabla u|^p\,dx\\[1ex]& +C\int_{|x|\leq\frac 3 n}|u|^p\,dx+C\int_{\{n \le |x|\le 2n\}} |u|^p\,dx\end{aligned}$$ which tends to 0 by dominated convergence. Using a similar argument one shows that, if $1<p<\infty$, $|x|^2D^2u_n$ tends to $|x|^2D^2 u$ in $L^p({\mathbb{R}}^N)$ and that, if $p=1,\infty$, $|x|^2\Delta u_n$ tends to $|x|^2\Delta u$ in $L^p({\mathbb{R}}^N)$. This proves that $u_n$ tends to $u$ in $D_p({\mathbb{R}}^N)$; we also note that, by construction, $\mbox{supp\,} u_n\subseteq \mbox{supp\,} u$. Finally we can use a standard convolution argument to approximate in $D_p({\mathbb{R}}^N)$ functions having compact support in ${\mathbb{R}}^N\setminus\{0\}$ with $C_c^\infty\left({\mathbb{R}}^N\setminus\{0\}\right)$ functions.\ Let us consider, now, a bounded set $\Omega\subset{\mathbb{R}}^N$ and let $u\in D_p(\Omega)$.
We use the partition of unity defined in (\[Partiton unity eq\]) to write $$\begin{aligned} u=\eta_0^2u+\eta_\delta^2 u:=u_0+u_\delta.\end{aligned}$$ The function $u_0$ satisfies $\mbox{supp\,}u_0\subseteq\Omega_0\subset\subset\Omega$: the same proof as before shows that we can approximate $u_0$ in $D_p(\Omega)$ with $C_c^\infty(\Omega\setminus\{0\})$ functions.\ On the other hand the function $u_\delta$ satisfies $u_\delta\in D_p(\Omega_\delta)$ since $u=0$ on $\partial\Omega$ and $\mbox{supp}\, \eta_\delta\subseteq K_{\delta}$. Since no singularity appears in $\Omega_\delta$, the approximation problem is a classical one: Proposition \[Sobolev approximation 1,infty\] then proves that $u_\delta$ can be approximated in $D_p(\Omega)$ with functions in $ C_{c,0}^2\left(\Omega\right)$. The previous Lemma shows that $C_{c,0}^2\left(\Omega\right)$ is a core for $A_p$. When $\Omega={\mathbb{R}}^N$ or $\Omega= B$, Proposition \[core Palla R\^N\] states that $C_{c,0}^2\left(\Omega\right)$ is also a core for the operator $A_{p,J}$ of Definition \[Def A\_pJ\]. We have therefore proved the following result which provides a description of the operators introduced in the previous subsection. \[domainap\] Let $1\leq p\leq\infty$ and $\Omega={\mathbb{R}}^N$ or $\Omega= B$. Then the operator $A_p$ coincides with that of Definition \[Def A\_pJ\] for $J={\mathbb{N}}_0$. In the next lemma we state some interpolative and a-priori estimates. \[apriori\] Let $1\leq p\leq \infty$. Then there exist ${\varepsilon}_0, \ C>0$ depending only on $c,N,{\Omega}$ such that for every $0<{\varepsilon}<{\varepsilon}_0$ and $u\in D_p({\Omega})$ one has $$\begin{aligned} \label{apriori1} \||x|\nabla u\|_p&\leq {\varepsilon}\|A u\|_p+\frac{C}{{\varepsilon}}\|u\|_p.\end{aligned}$$ Moreover, if $1<p<\infty$, $$\begin{aligned} \label{apriori2} \||x|^2D^2 u\|_p&\leq C(\|A u\|_p+\|u\|_p).\end{aligned}$$ [Proof.]{} In view of Lemma \[coreOmega\], it is enough to prove these estimates for $u\in C_{c,0}^2\left(\Omega\right)$.
The proof of (\[apriori1\]) follows as in [@for-lor Lemma 2.4] with minor modifications (in particular, one intersects the balls $B(x_0, \rho)$ with $\Omega$). To prove (\[apriori2\]) for $1<p<\infty$, it is sufficient to apply the classical elliptic estimate $\|D^2u\|_p \le C\|\Delta u\|_p$ (which holds both in ${\mathbb{R}}^N$ as well as in a bounded $\Omega$ if $u$ vanishes at the boundary) to $|x|^2u$ and then to interpolate the terms containing $\nabla u$, by (\[apriori1\]). In the next propositions we prove dissipativity properties of $A_p$ through Hardy type inequalities. In the spirit of Section \[Section Rellich \], this is equivalent to the fact that the Rellich inequalities for the operator $L$, when $b$ is sufficiently large, can be proved using integration by parts and Hardy inequalities. We begin by recalling the following result. \[hardy2\] (see [@rellich Proposition 8.3]). Let $1<p<\infty$, $\beta\in {\mathbb{R}}$. Then, if $N-2+\beta\neq 0$, for every $u\in C_c^\infty({\mathbb{R}}^N\setminus\{0\})$, $$\label{WHardy2-RN} \int_{{\mathbb{R}}^N} |x|^{\beta}|\nabla u|^{2}|u|^{p-2} \,dx \geq \left(\frac{N-2+\beta}{p}\right)^2 \int_{{\mathbb{R}}^N} |x|^{\beta-2}|u|^{p} \,dx.$$ We prove now that $A_p$ is quasi-dissipative. \[dissipativity\] Let $1\leq p\leq \infty$ and set $\omega_p=\frac{N}{p^2}\left[p(N-2+c)-N\right]$. Then, for every $u\in D_p(\Omega)$, $\lambda>0$, $$\label{contractivity} \lambda \|u\|_p \le \|(\lambda-A-\omega_p)u\|_p.$$ [Proof.]{} We consider, preliminarily, $1<p<\infty$ and prove the inequality $$\label{ausiliaria} -\int_{\Omega}Au |u|^{p-2}u\; dx\geq \omega_p \int_{\Omega}|u|^p\; dx.$$ Let $2\leq p<\infty$. By Lemma \[coreOmega\], we may assume that $u\in C_{c,0}^2\left(\Omega\right)$. Setting $u^\star=u|u|^{p-2}$ we multiply $Au$ by $u^\star$ and integrate over $\Omega$.
Integrating by parts we get $$\begin{aligned} -\int_{\Omega}Au\, u^\star\; dx&=(p-1)\int_{\Omega}|x|^2|u|^{p-2}|\nabla u|^2\;dx- (c-2)\int_{\Omega}x\cdot\nabla u\, u|u|^{p-2}\; dx\\[1ex] &=(p-1)\int_{\Omega}|x|^2|u|^{p-2}|\nabla u|^2\;dx- \left(\frac{c-2}{p}\right)\int_{\Omega}x\cdot\nabla|u|^p\; dx\\[1ex] &=(p-1)\int_{\Omega}|x|^2|u|^{p-2}|\nabla u|^2\;dx+ N\left(\frac{c-2}{p}\right)\int_{\Omega}|u|^p\; dx.\end{aligned}$$ By Hardy inequality (\[WHardy2-RN\]) with $\beta=2$, $$\begin{aligned} -\int_{\Omega}&Au\, u^\star\; dx\geq \left[(p-1)\frac{N^2}{p^2}+ N\left(\frac{c-2}{p}\right)\right]\int_{\Omega}|u|^p\; dx=\omega_p \int_{\Omega}|u|^p\; dx\end{aligned}$$ and therefore $$-\int_{\Omega}Au |u|^{p-2}u dx\geq \omega_p \int_{\Omega}|u|^p\; dx.$$ For $1<p<2$ the integration by parts is not straightforward (but still allowed, see [@met-spi]) since $|u|^{p-2}$ becomes singular near the zeros of $u$. In this case it is sufficient to replace $u^\star$ by $u(u^2+\delta)^{\frac{p}{2}-1}$ where $\delta$ is a positive parameter and then let $\delta$ tend to $0$, obtaining the required estimates also in this case. It is clear that (\[ausiliaria\]) implies (\[contractivity\]) which is therefore proved for $1<p<\infty$. Letting $p \to 1, \infty$, we see that (\[contractivity\]) holds in all cases. We explicitly observe that: - $\omega_\infty=0$ and $\omega_1=(c-2)N$; - $\omega_p\geq 0$ iff $p\geq \frac{N}{N-2+c}$. Moreover $\omega_p$ attains its maximum value at ${\overline}{p}=\frac{2N}{N-2+c}$ and $\omega_{{\overline}{p}}=\left(\frac{N-2+c}{2}\right)^2$. Proposition \[dissipativity\], combined with Lemma \[apriori\], allows us to deduce the following result. \[aprioriLambda\] Let $1\leq p\leq\infty$.
There exist two constants $\Lambda>0$ and $C>0$ such that, for every $u\in D_p(\Omega)$ and every $\lambda$ with ${\rm Re}\,\lambda\geq \Lambda$, $$|\lambda|\|u\|_p+|\lambda|^\frac{1}{2}\||x|\nabla u\|_p\leq C\|\lambda u-Au\|_p.$$ If $1<p<\infty$, we have also $$\||x|^2D^2 u\|_p\leq C\|\lambda u-Au\|_p.$$ [Proof.]{} The estimate $$|\lambda|\|u\|_p\leq C\|\lambda u-Au\|_p$$ is nothing but sectoriality. The gradient estimate follows from it, using (\[apriori1\]) with ${\varepsilon}=|\lambda|^{-\frac{1}{2}}$. The Hessian estimate for $1<p<\infty$ follows from (\[apriori2\]).\ The next theorem shows that $A_p+\omega_p$ generates a contractive analytic semigroup in $L^p(\Omega)$. \[gen-prel\] For any $1\leq p\leq\infty$, the operator $(A_p+\omega_p, D_p(\Omega))$ generates a contractive analytic semigroup in $L^p(\Omega)$. To distinguish, we write $\tilde{A}_p$ for $A_p$ when $\Omega={\mathbb{R}}^N$. Observe that, by Proposition \[analyt\], $\tilde{A}_p$ generates an analytic semigroup, hence its resolvent set contains a sector $$\Sigma_{\theta,\rho}=\{\lambda \in {\mathbb{C}}: |\lambda| \ge \rho, |{\rm Arg }\lambda |<\theta \},$$ with $\theta >\pi/2$ where the following resolvent estimate holds $$\|(\lambda- \tilde{A}_p)^{-1}\|_p \le \frac{M}{|\lambda|}.$$ Let $\Omega\subset{\mathbb{R}}^N$ and define $\eta_\delta$ and $\eta_0$ as in (\[Partiton unity eq\]). For $\lambda \in \Sigma_{\theta, \rho}$, $f\in L^p(\Omega)$, set $R_0(\lambda)f=\eta_0(\lambda-\tilde{A}_p)^{-1}(\eta_0 f)\in D_p({\mathbb{R}}^N)$, $R_\delta(\lambda)f=\eta_\delta(\lambda-A_\delta)^{-1}(\eta_\delta f)\in W^{2,p}(K_\delta)\cap W_0^{1,p}(K_\delta)$ where $A_\delta$ is the operator $A$ in $K_\delta$ with Dirichlet boundary conditions.
We have $$\begin{aligned} (\lambda-A)R_0(\lambda)f&=(\lambda-A)\eta_0(\lambda-\tilde{A}_p)^{-1}(\eta_0 f)\\&= \eta_0(\lambda-A)(\lambda-\tilde{A}_p)^{-1}(\eta_0f)+[\eta_0, A](\lambda-\tilde{A}_p)^{-1}(\eta_0 f)\\&= \eta_0^2 f +[\eta_0, A](\lambda-\tilde{A}_p)^{-1}(\eta_0f):=\eta_0^2f+S_0(\lambda)f\end{aligned}$$ where $$[\eta_0, A]g=\eta_0(Ag)- A(\eta_0 g)$$ is a first order operator supported on $K_\delta$. Using Corollary \[aprioriLambda\] (and using that $|x|$ is bounded from above and away from zero in $K_\delta$) we see that $$\|S_0(\lambda)f\|_p \leq c_1\frac{\|f\|_p}{|\lambda|^\frac{1}{2}}$$ for $\lambda \in \Sigma_{\theta, \rho}$ and with $c_1$ depending only on $\delta$. In a similar way we get $$(\lambda-A)R_\delta(\lambda)f=\eta_\delta^2f+S_\delta(\lambda)f$$ with $$\|S_\delta(\lambda)f\|_p \leq c_1\frac{\|f\|_p}{|\lambda|^\frac{1}{2}}$$ for $\lambda \in \Sigma_{\theta, \rho}$ and with $c_1$ depending only on $\delta$, by classical results, since $A_\delta$ is uniformly elliptic in $K_\delta$. Then setting $$R(\lambda):=R_0(\lambda)+R_\delta(\lambda), \quad S(\lambda):=S_0(\lambda)+S_\delta(\lambda),$$ we have $$(\lambda-A)R(\lambda)f=f+S(\lambda)f.$$ Choosing $|\lambda| >\rho_1$ large enough, we find $\|S(\lambda)\|_p\leq\frac{1}{2}$ and then the operator $I+S(\lambda)$ is invertible in $L^p(\Omega)$. Setting $V(\lambda)=(I+S(\lambda))^{-1}$ we have $$(\lambda-A)R(\lambda)V(\lambda)f=f$$ and hence the operator $R(\lambda)V(\lambda)$, which maps $L^p(\Omega)$ into $D_p(\Omega)$, is a right inverse of $\lambda-A$. Since both $\|R_0(\lambda)\|_p,\| R_\delta (\lambda)\|_p \le M|\lambda|^{-1}$ and $\|V(\lambda)\|_p \le 2$, then $$\label{StimaRes} \|R(\lambda)V(\lambda)\|_p\leq \frac{C}{|\lambda|}$$ for $\lambda \in \Sigma_{\theta, \rho_1}$. Clearly, $R(\lambda)V(\lambda)$ coincides with $(\lambda-A_p)^{-1}$ whenever $\lambda-A_p$ is injective, in particular for $\lambda>-\omega_p$, by Proposition \[dissipativity\].
Then $(-\omega_p, \infty)\subset \rho (A_p)$, the a-priori estimate (\[StimaRes\]) shows that the norm of the resolvent cannot blow up in $\Sigma_{\theta, \rho_1}$, hence $\Sigma_{\theta, \rho_1} \subset \rho(A_p)$ and the proof is complete. In the next proposition we prove that the domain $D_p(\Omega)$ coincides with the maximal one. In what follows, $Au$ is understood in the sense of distributions in $\Omega \setminus \{0\}$. Since the coefficients of $A$ are $C^\infty$ away from the origin, by local elliptic regularity it follows that $u \in W^{2,p}_{loc}({\mathbb{R}}^N \setminus \{0\})$ when $\Omega={\mathbb{R}}^N$ and that $u \in W^{2,p}(\Omega \setminus B_{\varepsilon})$ for every ${\varepsilon}>0$, when $\Omega$ is bounded. This clearly holds for $1<p<\infty$; when $p=\infty$, the same is true with any $q <\infty$ in place of $p$. \[L1\] Let $1\leq p\leq \infty$. The domain $D_p(\Omega)$ defined in (\[dp\]), (\[d1\]) coincides with the maximal domain $$\begin{aligned} \label{maximal domain} D_{p,max}(\Omega)=\{u\in L^p(\Omega): \,u=0 \text{ on } \partial \Omega,\ Au\in L^p(\Omega)\}.\end{aligned}$$ [Proof.]{} The inclusion $D_p(\Omega) \subset D_{p,max}(\Omega)$ is obvious. Conversely, let $u\in D_{p,max}(\Omega)$ and $\lambda>0$ be in the resolvent set of $(A_p, D_p(\Omega))$. Set $f=\lambda u-A u$ and $v=u-R(\lambda, A_p)f$. Then $v$ belongs to $D_{p,max}(\Omega)$ and satisfies $\lambda v-A v=0$. We prove that $v\equiv 0$ if $\lambda$ is large enough. Let us consider for large $n$ $$\Omega_n=\left\{x\in\Omega:\ |x|\geq \frac{1}{n},\ \mbox{dist}(x,\partial{\Omega})\geq\frac{1}{n}\right\},\quad \xi_n=\chi_{\Omega_{\frac n 2}}\ast\phi_\frac{1}{n}$$ where $\phi$ is a classical mollifier supported in $B_1$, with $\int_{{\mathbb{R}}^N}\phi=1$ and $\phi_{\frac{1}{n}}(x)=n^N\phi\left(nx\right)$. It is easy to check that $\xi_n(x)=1$ for $x\in\Omega_{\frac{n}{3}}$, $\xi_n$ is supported in $\Omega_n$ and that $0 \le \xi_n \le 1$, $|\nabla \xi_n|\leq Cn$, $|D^2\xi_n|\leq Cn^2$.
Consider also a smooth function $\eta$ such that $\chi_{B_1}\leq\eta\leq \chi_{B_2}$ and set $\eta_n(x)=\eta\left(\frac{x}{n}\right)$, $\zeta_n=\xi_n \eta_n$. Since $|\nabla \xi_n| \le Cn\chi_{(\Omega_{n} \setminus \Omega_\frac{n}{3})}$ and $|\nabla \eta_n| \le Cn^{-1} \chi_{(B_{2n}\setminus B_n)}$, it follows that the function $\nabla \zeta_n$ has support in $F_n:=\left (\Omega_{n} \setminus \Omega_\frac{n}{3}\right) \cup \left (B_{2n} \setminus B_n\right)$ and satisfies $|x|^2|\nabla \zeta_n|^2 \le C$, with $C$ independent of $n$.\ Let us consider, first, the case where $p\geq 2$. Integrating by parts the identity $$\int_{\Omega}(\lambda v-Av)v|v|^{p-2}\zeta_n^2=0$$ we obtain $$\begin{aligned} 0 =&\lambda \int_{\Omega}|v|^p\zeta_n^2\, dx+(p-1)\int_{\Omega}|x|^2|\nabla v|^2|v|^{p-2}\zeta_n^2\, dx \\[1ex] &+2\int_{\Omega}|x|^2\zeta_n|v|^{p-2}v\nabla v\cdot\nabla\zeta_n\, dx+(2-c)\int_{\Omega}\zeta_n^2|v|^{p-2}v\,x\cdot\nabla v\, dx.\end{aligned}$$ Using Hölder’s inequality we obtain $$\begin{aligned} \left|\int_{\Omega}|x|^2\zeta_n|v|^{p-2}v\nabla v\cdot\nabla\zeta_n\, dx\right|&\leq \left(\int_{\Omega}|x|^2\zeta_n^2|\nabla v|^2|v|^{p-2}\, dx\right)^\frac{1}{2}\left(\int_{\Omega}|x|^2|v|^p|\nabla\zeta_n|^2\, dx\right)^\frac{1}{2}\\[1ex] &\leq C\left(\int_{\Omega}|x|^2\zeta_n^2|\nabla v|^2|v|^{p-2}\, dx\right)^\frac{1}{2}\left(\int_{\Omega\cap F_n}|v|^p\, dx\right)^\frac{1}{2}\\[1ex] &\leq {\varepsilon}\int_{\Omega}|x|^2\zeta_n^2|\nabla v|^2|v|^{p-2}\, dx+\frac{C}{{\varepsilon}}\int_{\Omega\cap F_n} |v|^p\, dx.\end{aligned}$$ Similarly $$\left|\int_{\Omega}\zeta_n^2|v|^{p-2}v x\cdot\nabla v\, dx\right|\leq {\varepsilon}\int_{\Omega}|x|^2\zeta_n^2|\nabla v|^2|v|^{p-2}\, dx+\frac{C}{{\varepsilon}}\int_{\Omega} |v|^p\zeta_n^2\, dx.$$ Combining the last inequalities we obtain, up to slightly changing the constants, $$\left(\lambda-\frac{C_1}{{\varepsilon}}\right)\int_{\Omega}|v|^p\zeta_n^2\, dx+(p-1-3{\varepsilon})\int_{\Omega}|x|^2|\nabla v|^2|v|^{p-2}\zeta_n^2\ 
dx-\frac{2C_1}{{\varepsilon}}\int_{\Omega\cap F_n} |v|^p\, dx\leq 0.$$ Finally, choosing $3{\varepsilon}<p-1$ and letting $n$ tend to infinity, we obtain $$\left(\lambda-\frac{C_2}{p-1}\right)\int_{\Omega}|v|^p\, dx\leq 0$$ which implies $v\equiv 0$, if $\lambda$ is large enough. For $1<p<2$ the integration by parts is not straightforward since $|v|^{p-2}$ becomes singular near the zeros of $v$, but still allowed (see [@met-spi]) and one concludes as before (or, more simply, notice that $v$ is a smooth function, by elliptic regularity, replace $v|v|^{p-2}$ by $v(v^2+\delta)^{\frac{p}{2}-1}$ and then let $\delta \to 0$). For $p=1$, we notice that $v$ is a smooth function away from the origin, by elliptic regularity, and consider a sequence of smooth functions $h_k:{\mathbb{R}}\rightarrow {\mathbb{R}}$ such that $|h_k|\leq 1,$ $h_k'(s)\ge 0$ and $h_k(s)\rightarrow {\mathop{\rm sign}}(s)$ for every $s\in{\mathbb{R}}$. Integrating by parts the identity $$\int_{\Omega}(\lambda v-Av)h_k(v)\zeta_n^2=0$$ the proof follows as before. For $p=\infty$ we note that $v$ vanishes at $0$ and at $\partial \Omega$ when $\Omega $ is bounded or at $\infty$ if $\Omega={\mathbb{R}}^N$. Moreover, by elliptic regularity, $v$ is a smooth function out of the origin. If $v$ is not identically zero, then it has a positive maximum point (or a negative minimum point) at some $x_0 \in \Omega$. The classical maximum principle yields $Av(x_0) \le 0$, hence $\lambda v(x_0) \le 0$, which is a contradiction for $\lambda >0$. Finally, we consider the domain of the operator $A_{p,J}$ of Subsection 3.3. \[domainpj\] If $\Omega={\mathbb{R}}^N$ or $\Omega=B$, then the domain $D_{p,J}(\Omega)$ of $A_{p,J}$ is given by $$D_{p,J}(\Omega)=D_p(\Omega)\cap L^p_J(\Omega)=D_{p,max}(\Omega) \cap L^p_J(\Omega).$$ [Proof. ]{} By Corollary \[restriction\], the domain of $A_{p,J}$ is the intersection of the domain of $A_p$ with $L^p_J$ and the claim follows from Propositions \[domainap\], \[L1\].
Rellich inequalities in ${\mathbb{R}}^N$ and in $B$ {#Section Rellich } ===================================================== In this section we prove weighted Rellich inequalities for the operator $$L=\Delta +c\frac{x}{|x|^2}\cdot\nabla -\frac{b}{|x|^2} ,\quad c,\ b\in{\mathbb{R}}$$ on $L^p(\Omega)$ when $\Omega={\mathbb{R}}^N$ and $\Omega=B$. For $1\leq p\leq\infty$, $\alpha\in{\mathbb{R}}$ and $J \subset {\mathbb{N}}_0$ we define $$\begin{aligned} D_{p,\alpha, J}(\Omega):&=\left\{u:\ |x|^{\alpha-2}u,\ |x|^{\alpha}Lu\in L^p_J\left(\Omega\right),\ u=0 \text{ on } \partial \Omega\right\} .\end{aligned}$$ When $J={\mathbb{N}}_0$ we write $D_{p,\alpha}(\Omega)$ in place of $D_{p,\alpha, {\mathbb{N}}_0}(\Omega)$. As in the previous section $Lu$ is understood as a distribution in $\Omega \setminus \{0\}$. Since the coefficients of $L$ are $C^\infty$ away from the origin, by local elliptic regularity it follows that, if $u \in D_{p, \alpha}(\Omega)$, then $u \in W^{2,p}_{loc}({\mathbb{R}}^N \setminus \{0\})$ when $\Omega={\mathbb{R}}^N$ and $u \in W^{2,p}(\Omega \setminus B_{\varepsilon})$ for every ${\varepsilon}>0$, when $\Omega$ is bounded. This clearly holds for $1<p<\infty$; when $p=\infty$, the same is true with any $q <\infty$ in place of $p$. Defining $$\begin{aligned} \Phi u=v,\qquad v(x)=|x|^{\alpha-2}u(x), \end{aligned}$$ we have seen in Section \[Preliminaries\] that $$\begin{aligned} |x|^\alpha L u=Av-\mu v,\qquad \mu=b-(2-\alpha)(N-\alpha+c)\end{aligned}$$ where $A$ is the operator of Section \[Section A\] with $c+4-2\alpha$ in place of $c$, $$A=|x|^2\Delta +(c+4-2\alpha)x\cdot\nabla.$$ By construction $\Phi \left (D_{p,\alpha,J}(\Omega)\right )$ coincides with the domain $D_{p,J}(\Omega)=D_{p,max}(\Omega)\cap L^p_J(\Omega)$, see Corollary \[domainpj\].
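Since the identity $|x|^\alpha Lu=Av-\mu v$ is only quoted here from Section \[Preliminaries\], we sketch the underlying computation for the reader's convenience; with $u=|x|^{2-\alpha}v$ and $\beta=2-\alpha$,

```latex
% Sketch: using \Delta(|x|^{\beta}v)=|x|^{\beta}\Delta v
%   +2\beta|x|^{\beta-2}x\cdot\nabla v+\beta(\beta+N-2)|x|^{\beta-2}v
% with \beta=2-\alpha, one obtains
\begin{aligned}
|x|^{\alpha}\Delta u &= |x|^{2}\Delta v + 2(2-\alpha)\,x\cdot\nabla v
                        + (2-\alpha)(N-\alpha)\,v,\\
c\,|x|^{\alpha}\frac{x}{|x|^{2}}\cdot\nabla u &= c\,x\cdot\nabla v + c(2-\alpha)\,v,\\
-|x|^{\alpha}\frac{b}{|x|^{2}}\,u &= -b\,v.
\end{aligned}
```

Summing the three lines gives $|x|^\alpha Lu=|x|^2\Delta v+(c+4-2\alpha)\,x\cdot\nabla v+\left[(2-\alpha)(N-\alpha+c)-b\right]v=Av-\mu v$, in accordance with the definitions of $A$ and $\mu$ above.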
In particular Rellich inequalities $$\||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha,J}(\Omega)$$ are equivalent to the spectral estimates $$\|\mu v-Av\|_p \geq C\|v\|_p, \quad v\in D_{p,J}(\Omega)$$ which, recalling Proposition \[Rellich-spectrum\], hold precisely when $\mu\notin A\sigma(A_{p,J})$. The results of this section are then immediate consequences of Theorem \[Spectrum main\] and Corollary \[a-priori\]. Let us define $$\gamma_p(\alpha,c):=\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha+c\Bigr)=\Bigl(\frac{N-2+c}{2}\Bigr)^2-\Bigl(N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\alpha\Bigr)^2$$ and $$D:= b+\left(\frac{N-2+c}{2}\right)^2.$$ In what follows we refer to $D$ as the discriminant of $L$; in [@met-soba-spi3; @met-negro-spina; @1] the authors show that $D$ plays a fundamental role in generation properties of $L$. We recall that ${\cal Q}_p$, ${\cal P}_p$, $\omega_p$ have been defined in the previous sections. For the sake of clarity, we rewrite them in the present situation where $c+4-2\alpha$ takes the place of $c$: $$\begin{aligned} {\cal Q}_p&=\left\{\lambda\in {\mathbb{C}}\ \textrm{such that}\ {\rm Re}\lambda\leq -\frac{({\rm Im} \lambda)^2}{\left (N\left(1-\frac{2}{p}\right)+2-2\alpha+c\right)^2}-\omega_p\right\},\\[1ex] {\cal P}_p&=\left\{\lambda=-\xi^2+i\xi\left (N\left(1-\frac{2}{p}\right)+2-2\alpha+c\right)-\omega_p,\, \xi\in{\mathbb{R}}\right\},\\[1ex] \omega_p&=\frac{N}{p^2}\left[p(N+2-2\alpha+c)-N\right].\end{aligned}$$ Note that, when $N\Bigl (\frac12-\frac1p\Bigr)+1-\alpha+\frac{c}{2}=0$, then $${\cal Q}_p=]-\infty,-\omega_p].$$ In the following lemma we denote by $\sqrt z$ a complex square root of $z$ having nonnegative real part. \[Parameters\] Let $1\leq p\leq \infty$, $j \in {\mathbb{N}}_0$ and $\mu:=b-(2-\alpha)(N-\alpha+c)$.
Then the following properties are equivalent: - $\mu\notin {\cal Q}_p-\lambda(P_{j})$; - $b+\gamma_p(\alpha,c)+\lambda(P_{j})>0$; - $\left|N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\alpha\right|< \sqrt{D+\lambda(P_{j})}$ and $D+\lambda(P_{j})>0$; - $\left|N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\alpha\right|< {\textrm{\emph{Re}\,}}\sqrt{D+\lambda(P_{j})}$. The proof follows from elementary calculations after noticing that $$\begin{aligned} \omega_p&=b+\gamma_p(\alpha,c)-\mu,\\[1ex] \gamma_p(\alpha,c)&=D-b-\Bigl(N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\alpha\Bigr)^2.\end{aligned}$$ Since $\mu\in{\mathbb{R}}$, the conditions $\mu\notin{\cal{P}}_p-\lambda(P_{j})$, $\mu\notin{\cal{Q}}_p-\lambda(P_{j})$ become $b+\gamma_p(\alpha,c)+\lambda(P_{j})\neq 0$, $b+\gamma_p(\alpha,c)+\lambda(P_{j})> 0$, respectively.\ The following is the main result of this section. Part 1 has already been proved in [@rellich]. \[RellichFJ\] Let $1\leq p\leq \infty$, $\alpha,\ b,\ c\in{\mathbb{R}}$ and $J\subseteq {\mathbb{N}}_0$ with $j_0:=\min\{j\in J\}$. - If $\Omega={\mathbb{R}}^N$, Rellich inequalities $$\begin{aligned} \||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha,J}({\mathbb{R}}^N) \end{aligned}$$ hold if and only if $$\begin{aligned} \alpha\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm{\textrm{\emph{Re}\,}}\sqrt {D+\lambda(P_j)}, \quad \forall\, j\in J,\end{aligned}$$ or equivalently when $b +\gamma_p(\alpha,c)+\lambda(P_j)\neq 0$ for every $j\in J$.
- If $\Omega=B$, Rellich inequalities $$\begin{aligned} \||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha,J}(B) \end{aligned}$$ hold if and only if $$\begin{aligned} \alpha&<N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ {\textrm{\emph{Re}\,}}\sqrt{D+\lambda(P_{j_0})}, \quad\text{and}\;\\[1ex] \alpha&\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-{\textrm{\emph{Re}\,}}\sqrt {D+\lambda(P_j)}, \quad\forall\, j\in J.\end{aligned}$$ In particular the latter conditions are verified - when $\alpha\geq N\left(\frac 1 2-\frac{1}{p}\right)+1+\frac c 2$, if and only if $b +\gamma_p(\alpha,c)+\lambda(P_{j_0})>0$, - when $\alpha< N\left(\frac 1 2-\frac{1}{p}\right)+1+\frac c 2$, if and only if $b +\gamma_p(\alpha,c)+\lambda(P_j)\neq 0$ for every $j\in J$. If $J={\mathbb{N}}_0$ and $b +\gamma_p(\alpha,c)>0$, that is $$\begin{aligned} \left|N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\alpha\right|< {\textrm{\emph{Re}\,}}\sqrt{D}, \end{aligned}$$ then the optimal constant is given by $C= b +\gamma_p(\alpha,c)$. [Proof.]{} Consider ${\cal Q}_p$, ${\cal P}_p$ and $\omega_p$ defined before Lemma \[Parameters\] and let $\mu=b-(2-\alpha)(N-\alpha+c)$. Then Rellich inequalities hold if and only if $\mu\notin A\sigma(A_{p,J})$. The proof of the required claims then follows easily by combining Lemma \[Parameters\], Theorem \[Spectrum main\] and Corollary \[a-priori\].\ For a fixed $\alpha$, Rellich inequalities are always true in $L^p_{\geq n}(\Omega)$, for a sufficiently large $n\in{\mathbb{N}}_0$, even when they fail in the whole $L^p(\Omega)$. This phenomenon appears also in the extreme cases $p=1,\infty$. The failure of Rellich inequalities for some values of $\alpha$ is, therefore, always determined by subspaces defined by spherical harmonics of low order. When $b=c=0$, the operator reduces to the Laplace operator $L =\Delta$.
In this case $$\begin{aligned} D=\left(\frac{N-2}{2}\right)^2, \quad D+\lambda_n=\left(\frac{N-2}{2}+n\right)^2.\end{aligned}$$ Rellich inequalities in bounded domains for the Laplace operator have already been investigated in [@musina] where their validity is proved for $N\geq 3$, $1<p<\infty$ and $$\begin{aligned} \label{range easy} -\frac N p+2<\alpha<N\left(1-\frac 1 p\right). \end{aligned}$$ This range coincides with the values of $\alpha$ for which Rellich inequalities can be proved using integration by parts and the Hardy inequalities (see Proposition \[dissipativity\]). The following corollary characterizes their validity in the ball. \[Rellich-Delta\] Let $1\leq p\leq \infty$, $\alpha\in{\mathbb{R}}$. If $\Omega=B$, Rellich inequalities $$\begin{aligned} \||x|^\alpha \Delta u\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha}(B) \end{aligned}$$ hold if and only if $$\begin{aligned} \alpha<N\left(1-\frac 1 p\right),\quad \alpha\neq-\frac N p+2-n, \quad\forall\, n\in{\mathbb{N}}_0.\end{aligned}$$ Rellich inequalities in general domains {#Rellich general} ======================================= \[Rellich Bounded domain\] Let $\Omega$ be an open bounded and connected subset of ${\mathbb{R}}^N$ whose boundary $\partial \Omega$ is $C^{2,\beta}$ and such that $0 \in\Omega$. In this section we show that Rellich inequalities for the operator $L$ hold in $\Omega$ if and only if they hold in the ball $B$. In terms of the auxiliary operator $A$, this means that its approximate point spectrum is independent of the bounded set $\Omega$. We have no direct proof of this fact, which does not seem to be evident.
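As in Section \[Section Rellich \], the link between the two formulations can be sketched as follows (with $\Phi u=|x|^{\alpha-2}u$ and $A=|x|^2\Delta+(c+4-2\alpha)x\cdot\nabla$ as before; we only state it for the real values $\mu=b-(2-\alpha)(N-\alpha+c)$ that actually occur):

```latex
% Sketch of the translation between Rellich inequalities on \Omega
% and approximate point spectrum statements for A (real \mu only):
\begin{aligned}
\||x|^{\alpha}Lu\|_{p}\geq C\||x|^{\alpha-2}u\|_{p}\ \text{ on } \Omega
&\iff \|\mu v-Av\|_{p}\geq C\|v\|_{p}\ \text{ on } \Omega\\
&\iff \mu\notin A\sigma(A_{p}).
\end{aligned}
```

Since $\mu$ ranges over all of ${\mathbb{R}}$ as $b$ varies, the validity of the same Rellich inequalities in $\Omega$ and in $B$, for all admissible parameters, corresponds to the equality of the real parts of the approximate point spectra of $A_p$ on the two domains.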
We write $L$ in the symmetric form $$\label{symmetrize} L=\Delta +c\frac{x\,}{|x|^2}\cdot\nabla -\frac{b\,}{|x|^{2}}= |x|^{-c}{\rm div}(|x|^{c}\nabla ) - \frac{b}{|x|^2}$$ and we always assume $1<p<\infty$ and that $$\label{D} D=b+\left (\frac{N-2+c}{2}\right )^2 \ge 0.$$ This condition is crucial for the solvability of some elliptic problems related to $L$ which will be studied in the following subsection in an auxiliary weighted $L^2$ space. The operator $L$ in $L^2(\Omega, d\mu)$ {#Section estimate} --------------------------------------- We need some preliminary facts concerning the operator $L$ in a weighted space and here we suppose $\Omega$ as above or $\Omega={\mathbb{R}}^N$. We consider the weighted space $L^2(\Omega,d\mu)$, $d\mu=|x|^{c}dx$, and the symmetric form $$\begin{aligned} \mathfrak{a}(u,v) := \int_{\Omega}\left( \nabla u\cdot \nabla \overline{v}+ \frac{b}{|x|^2}u\overline{v}\right)\,d\mu, \qquad u,v \in C_{c,0}^2(\Omega).\end{aligned}$$ Using (\[symmetrize\]), we see that for $u,v \in C_{c,0}^2(\Omega)$ $$\int_{\Omega} (-Lu)\, \overline{v} \, d\mu=\mathfrak{a}(u,v).$$ To prove that $\mathfrak{a}$ is non-negative, we perform different changes of variables according to whether $D>0$ or $D=0$. When $D>0$ we write $u=u_1|x|^{-\frac{c}{2}}$ and $v=v_1|x|^{-\frac{c}{2}}$ to obtain, after integration by parts $$\begin{aligned} \label{N-dimensional form change variables} \mathfrak{a}(u,v)= \int_{\Omega}\left( \nabla u_1\cdot \nabla \overline{v}_1+\left(D-\frac{(N-2)^2}{4}\right) \frac{u_1\overline{v}_1}{|x|^2}\right)\,dx.\end{aligned}$$ Then we use the classical Hardy inequality. When $D=0$ we are in the critical case of Hardy inequality and it is convenient to use the transformation (which is the basis of the proof of Hardy inequality) $u=u_1|x|^{-\frac{N-2+c}{2}}$ and $v=v_1|x|^{-\frac{N-2+c}{2}}$.
Integrating by parts we get $$\begin{aligned} \label{N-dimensional form change variables-crit} \mathfrak{a}(u,v)= \int_{\Omega}\left( \nabla u_1\cdot \nabla \overline{v}_1\right)|x|^{2-N}\,dx.\end{aligned}$$ To identify the domain of the closure of $\mathfrak{a}$ we use the classical Sobolev space $H^1_0(\Omega)$ and also $H_0^1\left(\Omega,|x|^{2-N}dx\right)$ defined as the closure of $C_{c}^2(\Omega)$ with respect to the norm $$\left\|v\right\|^2_{H_0^1\left(\Omega,|x|^{2-N}dx\right)}=\int_{\Omega}\left[|\nabla v|^2+ \left|v\right|^2\right]\,|x|^{2-N}dx.$$ Note that we use $C^2_c(\Omega)$ and not $C_{c,0}^2(\Omega)$, that is we do not assume that the functions vanish in a neighbourhood of $0$. However, the above definition would not change using the smaller space. Let us recall, in fact, that, since $N \ge 2$, $C_{c,0}^2(\Omega)$ is dense in $H^1_0(\Omega)$ and the same is true for $H_0^1\left(\Omega,|x|^{2-N}dx\right)$, as we show below. $C_{c,0}^2(\Omega)$ is dense in $H_0^1\left(\Omega,|x|^{2-N}dx\right)$. [Proof. ]{} Let us assume, for example, that $\Omega={\mathbb{R}}^N$ and let $f\in C_c^2({\mathbb{R}}^N)$. We approximate $f$ in the norm of $H_0^1\left(\Omega,|x|^{2-N}dx\right)$ with functions belonging to $C_{c,0}^2 ({\mathbb{R}}^N)$.\ Let $\varphi\in C^\infty({\mathbb{R}}^+)$ be such that $ \varphi(r)=0$ if $0\leq r\leq \frac 1 4$ and $ \varphi(r)=1$ if $ r\geq \frac 1 2$ and set $\varphi_\epsilon(x):=\varphi(|x|^\epsilon)$. By construction $f\varphi_\epsilon\in C_{c,0}^2({\mathbb{R}}^N)$ and, as $\epsilon\to 0^+$, $f\varphi_\epsilon$, $(\partial_if)\varphi_\epsilon$ converge in $L^2\left(\Omega,|x|^{2-N}dx\right)$ to $f$, $\partial_i f$, respectively, by dominated convergence.\ It remains to show that $f\partial_i\varphi_\epsilon$ converges to $0$ in $L^2\left(\Omega,|x|^{2-N}dx\right)$.
This is true since $$\begin{aligned} &\int_{{\mathbb{R}}^N} |f|^2|\partial_i\varphi_\epsilon|^2\,|x|^{2-N}dx \leq \|f\|^2_\infty \int_{(\frac 1 4)^{\frac{1}{\epsilon}}\leq |x|\leq (\frac 1 2)^{\frac{1}{\epsilon}}}|x|^{2\epsilon-2} \epsilon^2\left|\varphi'(|x|^\epsilon)\right|^2\,|x|^{2-N}dx\\[1ex] &=\epsilon^2\|f\|^2_\infty|S^{N-1}|\int_{(\frac 1 4)^{\frac{1}{\epsilon}}}^{(\frac 1 2)^{\frac{1}{\epsilon}}}|\varphi'(r^\epsilon)|^2r^{2\epsilon-1}\,dr =\epsilon\|f\|^2_\infty|S^{N-1}|\int_{\frac 1 4}^{\frac 1 2}|\varphi'(s)|^2s\,ds.\end{aligned}$$ To prove the main properties of $\mathfrak{a}$ we may therefore use $C_{c,0}^2(\Omega)$. \[Close form Nd\] Let $D\geq 0$. The form $\mathfrak{a}$ is non-negative and symmetric in $L^2(\Omega,d\mu)$. For $u\in C_{c,0}^{2}\left(\Omega\right)$, let $||u||_{\mathfrak{a}}:=\sqrt{\mathfrak{a}(u,u)+||u||^2_{L_\mu^2}}$. Then $||u||_{\mathfrak{a}}$ is equivalent to $\||x|^{\frac{c}{2}}\,u\|_{H_0^1\left(\Omega \right)}$, if $D>0$, and to $\||x|^{\frac{N-2+c}{2}}\,u\|_{H_0^1\left(\Omega,|x|^{2-N}dx\right)}$, if $D=0$. [Proof.]{} If $D>0$ we set $u=v|x|^{-\frac{c}{2}}$. We choose ${\varepsilon}$ small enough such that $D-{\varepsilon}\frac{(N-2)^2}{4}>0$. Using (\[N-dimensional form change variables\]) and the Hardy inequality $$\int_\Omega |\nabla v|^2\, dx \ge \frac{(N-2)^2}{4} \int_\Omega \frac{|v|^2}{|x|^2}\, dx$$ we obtain $$\begin{aligned} \label{injective} \mathfrak{a}(u,u) \geq {\varepsilon}\int_{\Omega} |\nabla v|^2\,dx+ \left(D-{\varepsilon}\frac{(N-2)^2}{4}\right)\int_{\Omega}\frac{|v|^2}{|x|^2}\,dx\geq {\varepsilon}\int_{\Omega} |\nabla v|^2\,dx.\end{aligned}$$ On the other hand, by the Hardy inequality again, $$\begin{aligned} \mathfrak{a}(u,u) \leq C\left(\int_{\Omega} |\nabla v|^2\,dx+ \int_{\Omega}\frac{|v|^2}{|x|^2}\,dx\right) \leq \tilde{C}\int_{\Omega} |\nabla v|^2\,dx.\end{aligned}$$ This proves that $||u||_{\mathfrak{a}}$ and $||v||_{H_0^1\left(\Omega \right)}$ are equivalent norms.
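For the reader's convenience, we sketch the elementary computation behind (\[N-dimensional form change variables\]), for real-valued functions. If $u=v|x|^{-\frac{c}{2}}$, then $\nabla u=|x|^{-\frac{c}{2}}\left(\nabla v-\frac{c}{2}\,\frac{x}{|x|^2}\,v\right)$, so that $$|\nabla u|^2\,|x|^{c}=|\nabla v|^2-\frac{c}{2}\,\frac{x}{|x|^2}\cdot\nabla \left(v^2\right)+\frac{c^2}{4}\,\frac{v^2}{|x|^2}.$$ Integrating the middle term by parts and using ${\rm div}\left(\frac{x}{|x|^2}\right)=\frac{N-2}{|x|^2}$ gives $$\mathfrak{a}(u,u)=\int_{\Omega}|\nabla v|^2\,dx+\left(b+\frac{c(N-2)}{2}+\frac{c^2}{4}\right)\int_{\Omega}\frac{v^2}{|x|^2}\,dx,$$ and $b+\frac{c(N-2)}{2}+\frac{c^2}{4}=D-\frac{(N-2)^2}{4}$ by (\[D\]).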
If $D=0$, setting $u=v|x|^{-\frac{N-2+c}{2}}$, we obtain from (\[N-dimensional form change variables-crit\]) $$\mathfrak{a}(u,u) =\int_{\Omega} |\nabla v|^2|x|^{2-N} \, dx.$$ Since also the norms of $u$ in $L^2(\Omega, d\mu)$ and $v$ in $L^2(\Omega, |x|^{2-N}\, dx)$ coincide, we see that the norms $||u||_{\mathfrak{a}}$ and $||v||_{H_0^1\left(\Omega,|x|^{2-N}dx\right)}$ are equivalent.\ Using the density of $C_{c,0}^2(\Omega)$ in $H^1_0(\Omega)$ and in $H_0^1\left(\Omega,|x|^{2-N}dx\right)$, we extend the form ${\mathfrak{a}}$ to the domain $$\begin{aligned} D({\mathfrak{a}})&=\left\{u\in L^2(\Omega,d\mu): u|x|^{\frac{c}{2}}\in H_0^1\left(\Omega \right)\right\}, &\text{for}\quad D>0,\\[1.5ex] D({\mathfrak{a}})&=\left\{u\in L^2(\Omega,d\mu): u|x|^{\frac{N-2+c}{2}}\in H_0^1\left(\Omega, |x|^{2-N}dx \right)\right\}, &\text{for}\quad D=0,\end{aligned}$$ thus obtaining a closed form. Note that both the norms of $u|x|^{\frac{c}{2}}$ and $u|x|^{\frac{N-2+c}{2}}$ in the corresponding spaces equal the norm of $u$ in $L^2(\Omega, d\mu)$. The transformation $u= u_1|x|^{-\frac{N-2+c}{2}}$ can be performed also in the case $D>0$. However, it leads to the extra term $D(u_1\overline{v}_1)/|x|^2$ in the integral, which cannot be dominated by the norm of $H_0^1\left(\Omega, |x|^{2-N}dx \right)$. Let $-L$ be the operator associated to ${\mathfrak{a}}$, that is $$\begin{aligned} \label{Definition operator in L^N} D(L):= \left\{u\in D({\mathfrak{a}}) \;;\; \exists v\in L^2_{\mu}\ \text{s.t.}\ {\mathfrak{a}}(u,w) = \int_{\Omega}v\overline{w}\,d\mu \quad\forall w\in D(\mathfrak{a}) \right\}, \quad -Lu:=v.\end{aligned}$$ Clearly, $L$ is given by (\[symmetrize\]) when $u \in C^2_{c,0}(\Omega)$. In the next lemma we prove a simple inequality which is useful to prove compactness when $D=0$. Note that the Hardy inequality fails with respect to the weight $|x|^{2-N}$. \[Compactness D(a)\] Let $\Omega$ be bounded and let $R(\Omega):=\max_{x\in\Omega}|x|$.
Then, for every $u\in C_{c,0}^2(\Omega)$, $$\begin{aligned} \int_{\Omega} \frac{|u|^2}{|x|}\, |x|^{2-N}\,dx\leq 4R(\Omega)\int_\Omega |\nabla u|^2\,|x|^{2-N}\,dx.\end{aligned}$$ In particular the embedding $H_0^1\left(\Omega, |x|^{2-N}dx \right)\hookrightarrow L^2\left(\Omega, |x|^{2-N}dx \right)$ is compact. [Proof.]{} Let us fix $u\in C_{c,0}^2(\Omega)$. Integrating by parts we have $$\begin{aligned} \int_{\Omega} \frac{|u|^2}{|x|}\, |x|^{2-N}\,dx=\int_\Omega |u|^2\mbox{div}(|x|^{1-N}x)\,dx=- 2\int_\Omega u\nabla u\cdot(|x|^{1-N}x)\,dx.\end{aligned}$$ This implies, using the Cauchy-Schwarz inequality, $$\begin{aligned} \int_{\Omega} \frac{|u|^2}{|x|}\, |x|^{2-N}\,dx&\leq 2\int_\Omega |u|\,|\nabla u|\,|x|^{2-N}\,dx\leq 2\sqrt{R(\Omega)}\int_\Omega \frac{|u|}{\sqrt {|x|}}|\nabla u|\,|x|^{2-N}\,dx\\[1ex] &\leq 2\sqrt{R(\Omega)}\left(\int_\Omega \frac{|u|^2}{|x|}\,|x|^{2-N}\,dx\right)^{\frac 1 2}\left(\int_\Omega|\nabla u|^2\,|x|^{2-N}\,dx\right)^{\frac 1 2}\end{aligned}$$ and the inequality follows. To prove the compactness of the embedding, we take $u$ in the unit ball $\mathcal{B}$ of $ H_0^1\left(\Omega, |x|^{2-N}dx \right)$ and fix $\epsilon>0$. Then $$\begin{aligned} \int_{\Omega\cap B_\epsilon}|u|^2\, |x|^{2-N}\,dx\leq \int_{\Omega\cap B_\epsilon}\frac{\epsilon}{|x|}|u|^2\, |x|^{2-N}\,dx \leq 4\epsilon R(\Omega).\end{aligned}$$ Since $L^2\left(\Omega\cap B^c_\epsilon,\,|x|^{2-N}dx\right)=L^2\left(\Omega\cap B^c_\epsilon,\,dx\right)$, the compactness of $\mathcal{B}_{\vert \Omega\cap B^c_\epsilon}$ in $L^2\left(\Omega\cap B^c_\epsilon,\,|x|^{2-N}dx\right)$ is classical. This fact and the above estimate show that $\mathcal{B}$ is totally bounded.\ In the next Proposition we collect the main properties of $L$ in $L^2(\Omega, d\mu)$. \[LForm\] The operator $-L$ defined in (\[Definition operator in L\^N\]) is non-negative and self-adjoint in $L^2(\Omega, d\mu)$. The generated semigroup $T_{\Omega}(t)$ is positivity preserving in $L^2(\Omega, d\mu)$.
Moreover, $C_{c,0}^2(\Omega)\hookrightarrow D(L)$ and for every $u\in C_{c,0}^2(\Omega)$ $$Lu=\Delta u +c\frac{x}{|x|^2}\cdot\nabla u - \frac{b}{|x|^2}u.$$ If $\Omega$ is bounded then $L$ has compact resolvent and is invertible in $L^2(\Omega, d\mu)$. [Proof.]{} Non-negativity and self-adjointness of $-L$ follow by construction. The positivity of $T_{\Omega}(t)$ follows from that of the resolvent, which is a consequence of the Beurling-Deny conditions. Let us suppose now that $\Omega$ is bounded and let us prove that $D({\mathfrak{a}})$ is compactly embedded in $L^2(\Omega, d\mu)$. To this aim let $\mathcal{U}$ be a bounded subset of $D({\mathfrak{a}})$. Assume $D>0$; then the set $\mathcal{U}'=\{u|x|^\frac{c}{2}:\, u\in \mathcal{U}\}$ is a bounded subset of $H_0^1(\Omega)$, hence totally bounded in $L^2(\Omega)$, by the compactness of the embedding of $H^1_0(\Omega)$ into $L^2(\Omega)$. It is then immediate to check that $\mathcal{U}$ is totally bounded in $L^2(\Omega, d\mu)$, which proves the claim. The case $D=0$ follows similarly from Lemma \[Compactness D(a)\]. In both cases $L$ has compact resolvent; its spectrum consists, therefore, of eigenvalues and, being injective by (\[injective\]), $L$ is invertible. Next we need a maximum principle for the solution of a homogeneous problem related to $L$. Note that no singularity appears, since $0 \not \in V$ below. However, comparison is not obvious since the coefficient $b$ can be negative even though $D\geq 0$. \[Maximum principle for L\] Let $V$ be an open bounded and connected subset of ${\mathbb{R}}^N$ whose boundary $\partial V$ is $C^{2,\beta}$ and such that $0\notin V$. For every $\varphi\in C^2(\partial V)$ the problem $$\begin{aligned} \begin{cases} -Lv=0,\quad & \text{in}\quad V,\\ v=\varphi,\quad & \text{in}\quad \partial V, \end{cases}\end{aligned}$$ admits a unique solution $v\in C^2\left(V\right)\cap C\left(\bar V\right)$.
Moreover $v$ satisfies $\inf_{\partial V}\varphi\leq v(x)\leq\sup_{\partial V} \varphi$ for every $x\in V$. [Proof.]{} The transformation $Sv(x)=|x|^{\frac{N-2+c}{2}}v(x)$ turns $L$ into $$SLS^{-1}= \Delta -(N-2)\frac{x}{|x|^2}\nabla-\frac{D}{|x|^2},$$ which is uniformly elliptic with smooth coefficients and non-positive potential. Then the proof follows immediately by classical results.\ In order to prove Rellich inequalities in domains, we need estimates for the Green function of $-L$ in $\Omega$, that is for the integral kernel expressing $(-L)^{-1}$ with respect to the Lebesgue measure. We start with the case $D>0$, where we can use the results of [@met-negro-spina; @1] and compare the Green function in $\Omega$ with that in ${\mathbb{R}}^N$. \[comparison Green\] Let $D>0$ and let $G(x,y)$, $(x,y)\in\Omega\times \Omega$, be the Green function of the operator $L$, written with respect to the Lebesgue measure. Then $$\label{G} 0 \leq G(x,y) \leq C\, G_0(x,y),$$ where if $N>2$ $$\begin{aligned} \label{eGreen} |x|^{\frac{c}{2}}|y|^{-\frac{c}{2}}G_0(x,y)=|x-y|^{2-N}\left(1\wedge\frac{|x||y|}{|x-y|^2}\right)^{\sqrt{D}-\frac{N-2}{2}}\end{aligned}$$ and if $N=2$ $$\begin{aligned} \label{estimates Green N=2} |x|^{\frac{c}{2}}|y|^{-\frac{c}{2}}G_0(x,y)= \begin{cases} \dfrac {\left(|x||y|\right)^{\sqrt{D}}}{|x-y|^{2\sqrt{D}}},\quad&\text{if}\quad \frac{|x-y|^2}{|x||y|}\geq 1;\\[4ex] 1-\log \left(\dfrac{|x-y|^2}{|x||y|}\right),\quad&\text{if}\quad \frac{|x-y|^2}{|x||y|}\leq 1. \end{cases}\end{aligned}$$ [Proof.]{} Let $T_{\Omega}(t)$, $T(t)$ be the semigroups generated by $L$ in $L^2(\Omega, d\mu)$ and $L^2({\mathbb{R}}^N, d\mu)$, respectively. From [@ou Sections 2.3, 2.6, Proposition 4.23] it follows that $0 \le T_{\Omega}(t)f \le T(t)f$ whenever $0 \le f \in L^2(\Omega, d\mu)$.
Furthermore from [@cal-met-negro-spina Corollary 4.6] $T(t)$ is an integral operator whose kernel $p(t,x,y)$, expressed with respect to the Lebesgue measure, satisfies, for every $\epsilon>0$ and some constant $C_\epsilon>0$, $$\begin{aligned} 0\leq p(t,x,y)\leq C_\epsilon t^{-\frac{N}{2}} |x|^{-\frac{c}{2}}|y|^{\frac{c}{2}}&\left [\left (\frac{|x|}{\sqrt t}\wedge 1 \right) \left (\frac{|y|}{\sqrt t}\wedge 1 \right)\right ]^{-\frac{N}{2}+1+\sqrt{D}}\exp\left(- \dfrac{|x-y|^2}{(4+\epsilon)t}\right).\end{aligned}$$ Using [@Arendt; @Bukhalov Theorem 1.5], it follows that also $T_{\Omega}(t)$ is an integral operator whose kernel $p_\Omega$ satisfies the same estimate as above. By [@met-negro-spina; @1 Theorem 7.1], since $D>0$, we have $$\label{Greenspace} \int_0^\infty p(t,x,y)\,dt\leq C G_0(x,y)$$ hence $$\begin{aligned} G(x,y)=\int_0^\infty p_{\Omega}(t,x,y)\,dt\le \int_0^\infty p(t,x,y)\,dt\leq C G_0(x,y).\end{aligned}$$ The inequality between the semigroups above easily follows from the corresponding one for the resolvents. Let $\lambda>0$, $0\leq f\in L^2(\Omega, d\mu)$ and set $u=R(\lambda, L_{\Omega})f$, $w=R(\lambda, L_{{\mathbb{R}}^N})f$. Then $0\leq u\in D({\mathfrak a}_{\Omega})$ and $0\leq w\in D({\mathfrak a}_{{\mathbb{R}}^N})$; furthermore $\lambda u-Lu=\lambda w -L w$ and, for every $v\in D({\mathfrak a}_{\Omega})$ one has $$\begin{aligned} \lambda\int_{\Omega} (u-w)v\, d\mu= \int_{\Omega}\left( \nabla (w-u)\cdot \nabla v+ \frac{b}{|x|^2}(w-u) v\right)\,d\mu.\end{aligned}$$ Choosing $v=(u-w)^+\in D( {\mathfrak a}_{\Omega})$ we get $$\begin{aligned} & \lambda\int_{\Omega} \left|(u-w)^+\right|^2\, d\mu=-{\mathfrak a}_{\Omega}\Big((u-w)^+,(u-w)^+\Big)\leq 0\end{aligned}$$ which implies $(u-w)^+= 0$, that is $u\leq w$. The case $D=0$ is more involved since, in this case, the integral in (\[Greenspace\]) is divergent near $\infty$. To overcome this problem, we use the boundedness of $\Omega$ to improve the decay of $p_\Omega$ as $t\to \infty$.
We estimate directly $p_\Omega$ without comparing with the kernel in the whole space, by adapting to our case the arguments of [@cal-met-negro-spina]. We use the change of variable leading to (\[N-dimensional form change variables-crit\]) to get rid of the potential term $b|x|^{-2}$ and introduce the Hilbert space $L^2(\Omega,|x|^{-2s_1}\,d\mu)=L^2(\Omega,|x|^{2-N}\,dx)$, where $s_1=\frac{N-2+c}{2}$. Then (\[N-dimensional form change variables-crit\]) reads $$\begin{aligned} \mathfrak{b}(u,v) &:=(\nabla u,\nabla v)_{L^2(\Omega,|x|^{2-N}\,dx)}=\mathfrak{a}\left(|x|^{-s_1}u, |x|^{-s_1}v\right),\\[1ex] D(\mathfrak{b}) &:= H_0^1\left(\Omega,|x|^{2-N}\,dx\right). \end{aligned}$$ By construction $\mathfrak{b}$ is the inner product in $H_0^1\left(\Omega,|x|^{2-N}\,dx\right)$, and $u\in L^2(\Omega,|x|^{2-N}\,dx)\mapsto |x|^{-s_1} u\in L^2(\Omega,\,d\mu)$ is an isometry which maps $D(\mathfrak{b})$ onto $D(\mathfrak{a})$. The operator $-\tilde L$ associated to $\mathfrak{b}$ then satisfies $$\begin{aligned} D(\tilde{L})=|\cdot|^{s_1}\ D(L),\quad \tilde{L}u=|\cdot|^{s_1}L(|\cdot|^{-s_1} u)\end{aligned}$$ hence $$\begin{aligned} \label{relation kernels} e^{z\tilde{L}}f=|\cdot|^{s_1}e^{zL}(|\cdot|^{-s_1} f) , \quad f \in L^2(\Omega,|x|^{2-N}\,dx).\end{aligned}$$ Clearly $-\tilde L$ is non-negative and self-adjoint in $L^2\left(\Omega,|x|^{2-N}\,dx\right)$. The semigroup $\left(e^{z\tilde L}\right )_{z \in {\mathbb{C}}_+}$ is analytic, submarkovian and satisfies $$\begin{aligned} \label{exp decay} \|e^{t\tilde L}\|_{L^2\left(\Omega,|x|^{2-N}\,dx\right)}\leq e^{-\lambda_1 t},\end{aligned}$$ where $\lambda_1>0$ is the first eigenvalue of $- \tilde L$, which is positive since $-\tilde L$ is non-negative and invertible, by the similarity with $-L$. The following lemma is a special case of the Caffarelli-Kohn-Nirenberg inequalities and we refer to [@met-soba-spi4 Lemma 3.2] for a short proof. It is used to prove the $L^1$-$L^\infty$ bound of the semigroup. \[CKN\] Let $\sigma\in{\mathbb{R}}\setminus\{-N\}$.
Then for every $q\in (2,\infty)$ satisfying $\frac{1}{q}\geq \frac{1}{2}-\frac{1}{N}$, there exists $C_q>0$ such that for every $u\in C^2_{c,0}(\Omega)$, $$\begin{aligned} \left( \int_{\Omega} |u(x)|^q |x|^{\sigma}\,dx \right)^{\frac{1}{q}} &\leq C_q \left( \int_{\Omega} |\nabla u(x)|^2 |x|^{(1-\frac{2}{N})\sigma}\,dx \right)^{\frac{N}{2}\left (\frac{1}{2}-\frac{1}{q} \right )} \left( \int_{\Omega} |u(x)|^2 |x|^{\sigma}\,dx \right)^{\frac{1}{2}-\frac{N}{2}\left (\frac{1}{2}-\frac{1}{q} \right )}. \end{aligned}$$ In particular, when $\Omega$ is bounded and $\sigma \le 0$, then $$\begin{aligned} \left( \int_{\Omega} |u(x)|^q |x|^{\sigma}\,dx \right)^{\frac{1}{q}} &\leq C_{q, \Omega} \left( \int_{\Omega} |\nabla u(x)|^2 |x|^{\sigma}\,dx \right)^{\frac{N}{2}\left (\frac{1}{2}-\frac{1}{q} \right )} \left( \int_{\Omega} |u(x)|^2 |x|^{\sigma}\,dx \right)^{\frac{1}{2}-\frac{N}{2}\left (\frac{1}{2}-\frac{1}{q} \right )}. \end{aligned}$$ \[comparison Green D=0\] Let $D=0$ and $\Omega$ be bounded. Then the semigroup $T_{\Omega}(t)$ generated by $L$ in $L^2(\Omega, d\mu)$ has a heat kernel $p(t,x,y)$, with respect to the Lebesgue measure, which satisfies, for every $\epsilon>0$ and some constant $C_\epsilon>0$, $$\begin{aligned} \label{HK est D=0} p(t,x,y)\leq C_\epsilon t^{-\frac{N}{2}}e^{-\frac{\lambda_1}{3} t}|x|^{-s_1}|y|^{c-s_1}\exp\left(-\dfrac{|x-y|^2}{(4+\epsilon)t}\right).\end{aligned}$$ The Green function $G(x,y)$ of $L$, again written with respect to the Lebesgue measure, satisfies for some constants $C, k>0$, $$\label{G D=0} 0 \leq G(x,y) \leq C\, G_0(x,y),$$ where if $N>2$ $$\begin{aligned} G_0(x,y)=|x|^{-s_1}|y|^{c-s_1}e^{-k|x-y|}\left(1\wedge |x-y|\right)^{2-N}\end{aligned}$$ and if $N=2$ $$\begin{aligned} G_0(x,y)=|x|^{-s_1}|y|^{c-s_1} \begin{cases} e^{-k|x-y|},\quad&\text{if}\quad |x-y|\geq 1;\\[2ex] 1-\log \left(|x-y|\right),\quad&\text{if}\quad |x-y|\leq 1.
\end{cases}\end{aligned}$$ [Proof.]{} We make use of the results and methods of [@cal-met-negro-spina Sections 3,4], pointing out the appropriate changes due to the boundedness of $\Omega$. The $L^p$ norms used here refer to the measure $|x|^{2-N}\, dx$. The ultracontractivity estimate for $t \ge 0$ $$\begin{aligned} \|e^{t\tilde L}\|_{1\to\infty}\leq C t^{-\frac N 2} \end{aligned}$$ follows from Lemma \[CKN\] with $\sigma=2-N \le 0$ and any fixed $q$ as in its statement, using [@ou Theorem 6.2]. Since $\tilde L$ is self-adjoint we have also $ \|e^{it\tilde L}\|_{2\to2}\leq 1$ for $t\in{\mathbb{R}}$. Using $\|T^*T\|_{1\to \infty}=\|T\|_{1\to 2}^2$ and recalling (\[exp decay\]), we obtain for $t>0, \,s\in{\mathbb{R}}$, $$\begin{aligned} \|e^{(t+is)\tilde L}\|_{1\to \infty}&\le \|e^{\frac t 3 \tilde L}\|_{1 \to 2}\|e^{\frac t 3 \tilde L}\|_{2 \to 2} \|e^{is \tilde L}\|_{2 \to 2}\|e^{\frac t 3 \tilde L}\|_{2 \to \infty}\\[1ex] &\leq\|e^{\frac t 3 \tilde L}\|^2_{1 \to 2}e^{-\frac {\lambda_1}{ 3} t }=\|e^{\frac{2t}{3} \tilde L}\|_{1 \to \infty}e^{-\frac {\lambda_1}{ 3} t }\le C {\ t}^{-N/2}e^{-\frac {\lambda_1}{ 3} t }. \end{aligned}$$ This proves $$\begin{aligned} \|e^{z\tilde L}\|_{1\to\infty}\leq C \left({\textrm{\emph{Re}\,}}z\right)^{-\frac N 2}e^{-\frac {\lambda_1}{ 3} {\textrm{\emph{Re}\,}}z },\quad \forall z\in{\mathbb{C}}^+. \end{aligned}$$ The Dunford-Pettis criterion yields the existence of a kernel $\tilde p$ such that, for $z\in{\mathbb{C}}_+$, $$e^{z\tilde{L}}f(x)= \int_{\Omega}\tilde p(z,x,y)f(y)\,|y|^{2-N}dy,\quad f\in L^1\left(\Omega,|x|^{2-N}\,dx\right)\cap L^\infty(\Omega)$$ and $$\underset{x,y\in \Omega\setminus\{0\}}{\mbox{ sup}} |\tilde p(z,x,y)| \le C \left({\textrm{\emph{Re}\,}}z\right)^{-\frac N 2}e^{-\frac {\lambda_1}{ 3} {\textrm{\emph{Re}\,}}z }.$$ By classic results, see e.g.
[@Grigoryan Theorem 7.20, page 208], $\tilde p$ is a continuous function of $(z,x,y)\in {\mathbb{C}}_+\times\Omega\setminus\{0\}\times\Omega\setminus\{0\}$, it is symmetric in $x,y$ and it is holomorphic in $z$.\ Furthermore, the same argument as in [@cal-met-negro-spina Theorem 4.4] proves that the family $\{e^{t\tilde L}:\ t \ge 0\}$ satisfies the Davies-Gaffney estimate in $L^2\left(\Omega,|x|^{2-N}\,dx\right)$, that is $$\begin{aligned} \left|\left(e^{t\tilde L}f_1, f_2\right)_{L^2\left(\Omega,|x|^{2-N}\,dx\right)}\right|\leq\exp\left(-\frac{r^2}{4t}-\frac {\lambda_1}{ 3} t \right)\|f_1\|_{L^2\left(\Omega,|x|^{2-N}\,dx\right)}\|f_2\|_{L^2\left(\Omega,|x|^{2-N}\,dx\right)}\end{aligned}$$ for all $t>0$, $U_1$, $U_2$ open subsets of $\Omega\setminus\{0\}$, $f_i$ in $L^2\left(U_i,|x|^{2-N}\,dx\right)$ and $r:=d(U_1, U_2)$. Applying [@CS Theorem 4.1] to the operator $-\frac{\lambda_1}{3}-\tilde L$ we get, for every $z\in{\mathbb{C}}_+$, $x, y\in \Omega\setminus\{0\}$ (here the joint continuity of $\tilde{p}(t, \cdot, \cdot)$ is used) $$\begin{aligned} |\tilde p(z,x,y)|\leq C_1({\rm Re}\,z)^{-\frac{N}{2}}\left(1+{\rm Re}\,\frac{|x-y|^2}{4z}\right)^\frac{N}{2}\exp\left(-\frac{\lambda_1}{3} {\rm Re}\,z-{\textrm{\emph{Re}\,}}\dfrac{|x-y|^2}{4z}\right).\end{aligned}$$ Recalling (\[relation kernels\]), the heat kernel $p$ of $L$, taken with respect to the Lebesgue measure, satisfies $$p(z,x,y)=|x|^{-s_1}|y|^{c-s_1}\tilde p(z,x,y)$$ and (\[HK est D=0\]) follows. To prove the second statement we observe that $$\begin{aligned} \label{GF D=0 eq 1} G(x,y)=\int_0^\infty p(t,x,y)\,dt\leq C |x|^{-s_1}|y|^{c-s_1}\int_0^\infty h(t)\,dt,\end{aligned}$$ where we put $h(t)=t^{-\frac{N}{2}}e^{-\frac{\lambda_1}{3} t}\exp\left(-\dfrac{|x-y|^2}{(4+\epsilon)t}\right)$.
Using [@Erdelyi Formula (29), page 146], we have $$\begin{aligned} \int_0^\infty h(t)\,dt&=2\left(\frac{3|x-y|^2}{\lambda_1(4+\epsilon)}\right)^{-\frac{N-2}{4}}K_{\frac{N-2}{2}}\left(2\frac{|x-y|}{\sqrt{4+\epsilon}}\sqrt{\frac{\lambda_1}{3}}\right)\\[1ex] &=C|x-y|^{-\frac{N-2}{2}}K_{\frac{N-2}{2}}\left(k|x-y|\right),\qquad k:=\frac{2}{\sqrt{4+\epsilon}}\sqrt{\frac{\lambda_1}{3}},\end{aligned}$$ where $K_\nu$ is the modified Bessel function, which satisfies the following asymptotics; see e.g. [@AS 9.6 and 9.7]. $$\begin{aligned} \text{If\;} \nu>0,\quad K_\nu(r)\approx \begin{cases} \sqrt{\frac{\pi}{2}}\,r^{-\frac{1}{2}}e^{-r},\quad&\text{if}\quad r\rightarrow\infty;\\[1ex] \frac{\Gamma(\nu)}{2}\left(\frac{r}{2}\right)^{-\nu},\quad&\text{if}\quad r\rightarrow 0; \end{cases}\\[2ex] K_0(r)\approx \begin{cases} \sqrt{\frac{\pi}{2}}\,r^{-\frac{1}{2}}e^{-r},\quad&\text{if}\quad r\rightarrow\infty;\\[1ex] -\log{r},\quad&\text{if}\quad r\rightarrow 0. \end{cases}\end{aligned}$$ Inserting these relations into (\[GF D=0 eq 1\]) we get if $N>2$ $$\begin{aligned} G(x,y)\leq C |x|^{-s_1}|y|^{c-s_1}e^{-k|x-y|}\left(1\wedge |x-y|\right)^{2-N}\end{aligned}$$ and if $N=2$ $$\begin{aligned} G(x,y)\leq C |x|^{-s_1}|y|^{c-s_1} \begin{cases} e^{-k|x-y|},\quad&\text{if}\quad |x-y|\geq 1;\\[2ex] 1-\log \left(|x-y|\right),\quad&\text{if}\quad |x-y|\leq 1. \end{cases}\end{aligned}$$ Main result ----------- As in the cases $\Omega=B$ or $\Omega={\mathbb{R}}^N$, we define $$\begin{aligned} D_{p,\alpha}(\Omega):&=\left\{u:\ |x|^{\alpha-2}u,\ |x|^{\alpha}Lu\in L^p\left(\Omega\right),\ u=0 \text{ on } \partial \Omega\right\}.\end{aligned}$$ Our main result is the following. \[RellichOmega\] Let $N\geq 2$, $1< p <\infty$ and assume that (\[D\]) holds.
Rellich inequalities $$\begin{aligned} \||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha}(\Omega) \end{aligned}$$ hold if and only if $$\begin{aligned} \alpha&<N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ \sqrt{D} \quad\text{and}\;\\[1ex] \alpha&\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D+\lambda_n}, \quad\forall\, n\in{\mathbb{N}}_0.\end{aligned}$$ [Proof.]{} We first prove that, if $\alpha$ is as in the assumptions, Rellich inequalities are true. Let $B_R\subseteq{\mathbb{R}}^N$ be such that $\Omega\subset B_R$, $R>0$. Without loss of generality we may assume that $R=1$ and write $B:=B_1$. For a sufficiently small $\delta>0$, set $$\begin{aligned} \Omega_{\delta}:=\left\{x\in\Omega:\ \mbox{dist}(x,\partial\Omega)<\delta\right\}.\end{aligned}$$ We take a linear extension operator $E: W^{2,p}(\Omega)\to W^{2,p}_0(B)$ such that $$\|Eu\|_{W^{2,p}(B\setminus\Omega)}\leq C\|u\|_{W^{2,p}(\Omega_\delta)}$$ and let $u\in C_{c,0}^2(\Omega)$. By Theorem \[RellichFJ\] and since all coefficients are bounded in $B\setminus \Omega$, we have $$\begin{aligned} \int_ \Omega |x|^{(\alpha-2)p}|u|^p\, dx &\leq \int_B |x|^{(\alpha-2)p}|Eu|^p\, dx\leq C \int_B |x|^{\alpha p}|L(Eu)|^p\, dx\\&\leq C\left( \int_\Omega |x|^{\alpha p}|Lu|^p\, dx+\|Eu\|^p_{W^{2,p}(B\setminus\Omega)}\right)\\&\leq C\left( \int_\Omega |x|^{\alpha p}|Lu|^p\, dx+\|u\|^p_{W^{2,p}(\Omega_\delta)}\right).\end{aligned}$$ By the interior estimates for elliptic equations (see [@Krylov Theorem 1, Sec. 4, Ch.9]) $$\begin{aligned} \|u\|_{W^{2,p}(\Omega_\delta)}&\leq C\left( \|Lu\|_{p,\Omega_{2\delta}}+\|u\|_{p,\Omega_{2\delta}}\right)\\[1ex] &\leq C\left( \||x|^\alpha Lu\|_{p,\Omega_{2\delta}}+\|u\|_{p,\Omega_{2\delta}}\right)\\[1ex] &\leq C\left( \||x|^\alpha Lu\|_{p,\Omega}+\|u\|_{p,\Omega_{2\delta}}\right).\end{aligned}$$ To conclude the proof we show that $\|u\|_{p,\Omega_{2\delta}}\leq C\||x|^\alpha Lu\|_{p,\Omega}$. Set $f=-|x|^\alpha Lu$.
Since $u\in C_{c,0}^2(\Omega) \subset D(L)$ and $L$ is invertible by Proposition \[LForm\], we have $u=(-L)^{-1}\frac{f}{|x|^\alpha}$. Using the estimates proved in Section \[Section estimate\], the Green function $G$ of $L$ in $\Omega$ satisfies $$0 \leq G(x,y) \leq C\, G_0(x,y),$$ where $G_0$ is defined in Proposition \[comparison Green\] when $D>0$ and in Proposition \[comparison Green D=0\] when $D=0$. Let us suppose preliminarily that $D>0$. Then, for $x\in\Omega_{2\delta}$, $$\begin{aligned} |u(x)|&\leq C\int_\Omega G_0(x,y)|y|^{-\alpha}|f(y)|\, dy\\[1ex]&= C\int_{\{|x||y|\geq |x-y|^2\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy\\[1ex] &+ C\int_{\{|x||y|\leq |x-y|^2\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy=: u_1(x)+u_2(x).\end{aligned}$$ Since $x\in\Omega_{2\delta}$, there exists $a>0$ such that $|x|\geq a>0$. Consider first $u_1(x)$. $$\begin{aligned} |u_1(x)|&\leq C\int_{\left\{|x||y|\geq |x-y|^2,\ |y|\geq \frac{a}{2}\right\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy\\&+ C\int_{\left\{|x||y|\geq |x-y|^2,\ |y|\leq \frac{a}{2}\right\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy=: I_1(x)+I_2(x).\end{aligned}$$ For the first term $I_1(x)$ we get $$\begin{aligned} I_1(x)\leq C\int_\Omega |x-y|^{2-N}|f(y)|\, dy,\quad \text{if\, }N>2,\\[1ex] I_1(x)\leq C\int_\Omega \left|\log|x-y|\right||f(y)|\, dy,\quad \text{if\, }N=2,\end{aligned}$$ which therefore implies $\|I_1\|_{p,\Omega_{2\delta}}\leq C\|f\|_{p,\Omega}$. For $I_2(x)$, observe that $|x-y|\geq\frac{a}{2}$, therefore $|x||y|\geq |x-y|^2\geq\frac{a^2}{4}$ and, recalling that $\Omega\subset B$, $|y|\geq \frac{a^2}{4|x|}\geq \frac{a^2}{4}$. It follows that $$I_2(x)\leq C\int_\Omega |f(y)|\, dy,$$ and $\|I_2\|_{p,\Omega_{2\delta}}\leq C\|f\|_{p,\Omega}$. Then $\|u_1\|_{p,\Omega_{2\delta}}\leq C\|f\|_{p,\Omega}$.
Consider now $u_2(x)$; since $|x|\geq a$, $$\begin{aligned} |u_2(x)|&\leq C\int_{\{|x||y|\leq |x-y|^2\}} \frac{(|x||y|)^{\sqrt{D} -\frac{N-2}{2}}}{|x-y|^{2\sqrt{D}}}|y|^{\frac{c}{2}-\alpha}|f(y)|\, dy.\end{aligned}$$ As before, consider separately $$J_1(x):=\int_{\{|x||y|\leq |x-y|^2,\ |y|\geq\frac{a}{2}\}} \frac{(|x||y|)^{\sqrt{D} -\frac{N-2}{2}}}{|x-y|^{2\sqrt{D}}}|y|^{\frac{c}{2}-\alpha}|f(y)|\, dy$$ and $$J_2(x):=\int_{\{|x||y|\leq |x-y|^2,\ |y|\leq\frac{a}{2}\}} \frac{(|x||y|)^{\sqrt{D} -\frac{N-2}{2}}}{|x-y|^{2\sqrt{D}}}|y|^{\frac{c}{2}-\alpha}|f(y)|\, dy.$$ Concerning $J_1$, we have $\frac{(|x||y|)^{\sqrt{D}}}{|x-y|^{2\sqrt{D}}}\leq 1$ and $(|x||y|)^{-\frac{N-2}{2}}|y|^{\frac{c}{2}-\alpha}\leq C$, therefore $$J_1(x)\leq C\int_\Omega|f(y)|\, dy$$ and $\|J_1\|_{p,\Omega_{2\delta}}\leq C\|f\|_{p,\Omega}$. Finally, for $J_2$ we have $|x-y|\geq \frac a 2$ and $$J_2(x)\leq C\int_{\{|x||y|\leq |x-y|^2,\ |y|\leq\frac{a}{2}\}} |y|^{\sqrt{D} -\frac{N-2}{2}+\frac{c}{2}-\alpha}|f(y)|\, dy\leq C\|f\|_{p,\Omega}\,\left\||y|^{\sqrt{D} -\frac{N-2}{2}+\frac{c}{2}-\alpha}\right\|_{p',\Omega}.$$ The last norm is finite if and only if $$\left(\sqrt{D} -\frac{N-2}{2}+\frac{c}{2}-\alpha\right)p'>-N$$ which is equivalent to $\alpha<N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ \sqrt{D}$, our assumption. Let us suppose now that $D=0$. Then, similarly, we write for $x\in\Omega_{2\delta}$, $$\begin{aligned} |u(x)|&\leq C\int_\Omega G_0(x,y)|y|^{-\alpha}|f(y)|\, dy\\[1ex]&= C\int_{\{|x-y|\leq 1\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy\\[1ex] &+ C\int_{\{|x-y|\geq 1\}}G_0(x,y)|y|^{-\alpha}|f(y)|\, dy=: u_1(x)+u_2(x).\end{aligned}$$ Concerning $u_1$ we get $$\begin{aligned} u_1(x)\leq C\int_\Omega |x-y|^{2-N}|f(y)|\, dy,\quad \text{if\, }N>2,\\[1ex] u_1(x)\leq C\int_\Omega \left|\log|x-y|\right||f(y)|\, dy,\quad \text{if\, }N=2,\end{aligned}$$ which implies $\|u_1\|_{p,\Omega_{2\delta}}\leq C\|f\|_{p,\Omega}$ as before.
Finally, for $u_2$ we have $$u_2(x)\leq C\int_{\Omega} |y|^{ -\frac{N-2}{2}+\frac{c}{2}-\alpha}|f(y)|\, dy\leq C\|f\|_{p,\Omega}\,\left\||y|^{-\frac{N-2}{2}+\frac{c}{2}-\alpha}\right\|_{p',\Omega}$$ which is finite if and only if $\alpha<N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}$, our assumption when $D=0$ (note that in this case $c-s_1= -\frac{N-2}{2}+\frac{c}{2}$). Let us now show that the conditions on $\alpha$ are also necessary; here we do not need to distinguish between $D>0$ and $D=0$.\ When $\alpha= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D+\lambda_n}, \ n\in{\mathbb{N}}_0$, Rellich inequalities fail, by Example 2.3. Let $\alpha> N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+ \sqrt{D}$ and assume, as above, that $\Omega\subseteq B$. Let $s_2$ be defined as before and set $$\begin{aligned} u(x):=|x|^{-s_2} , \quad x\in \Omega.\end{aligned}$$ Then (see Proposition \[Counterexample in B\]) $Lu=0$ and $|x|^{\alpha-2} u\in L^p\left(B\right)$, since $\alpha-2-s_2>-\frac{N}{p}$. On the other hand $u\notin D_{p,\alpha}(\Omega)$, since $u$ does not vanish on $\partial\Omega$. We use Lemma \[Maximum principle for L\] and, for $0<\epsilon<1$, let $v_\epsilon\in C^2\left(\Omega\setminus\bar{B_\epsilon}\right)\cap C\left(\bar \Omega\setminus B_\epsilon\right)$ satisfy $$\begin{aligned} \begin{cases} -Lv_\epsilon=0,\quad & \text{in}\quad \Omega\setminus\bar{B_\epsilon},\\ v_\epsilon=u,\quad & \text{in}\quad \partial\Omega,\\ v_\epsilon=\epsilon^{-s_1},\quad & \text{in}\quad \partial B_\epsilon. \end{cases}\end{aligned}$$ Since $s_2>s_1$, one has, by construction, $v_\epsilon(x)\leq |x|^{-s_1}$ for every $x\in \partial\Omega\cup\partial B_\epsilon$. It follows from Lemma \[Maximum principle for L\] that $0\leq v_\epsilon(x)\leq |x|^{-s_1}$ in $\Omega\setminus B_\epsilon$. Using local elliptic regularity and a standard diagonal argument, we prove that $v_\epsilon$ converges, up to subsequences, to a function $v$ in $W^{2,p}_{loc}\left(\Omega\setminus\{0\}\right)$.
By construction $v$ satisfies $v=u$ in $\partial\Omega$ and $Lv=0$, $0\leq v\leq |x|^{-s_1}$ in $\Omega\setminus\{0\}$; in particular $|x|^{\alpha-2}v\in L^p(\Omega)$, since $\alpha-2-s_1>-\frac{N}{p}$. Then the function $w:=u-v$ satisfies $w=0$ in $\partial\Omega$ and $Lw=0$ in $\Omega\setminus\{0\}$. In particular $w\in D_{p,\alpha}(\Omega)$ but Rellich inequalities fail for $w$.\ Rellich inequalities in exterior domains ---------------------------------------- Let $V\subseteq {\mathbb{R}}^N$ be an exterior domain (that is, the complement of a bounded set) which is open, connected and does not contain the origin. We also assume that $\partial V$ is $C^{2,\beta}$. As before, we define $$\begin{aligned} D_{p,\alpha}(V):&=\left\{u:\ |x|^{\alpha-2}u,\ |x|^{\alpha}Lu\in L^p\left(V\right),\ u=0 \text{ on } \partial V\right\}.\end{aligned}$$ \[RellichExterior\] Let $N\geq 2$, $1< p <\infty$ and assume that (\[D\]) holds. Rellich inequalities $$\begin{aligned} \||x|^\alpha Lu\|_p \geq C\||x|^{\alpha-2} u\|_p,\quad u\in D_{p,\alpha}(V) \end{aligned}$$ hold if and only if $$\begin{aligned} \alpha&>N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}- \sqrt{D} \quad\text{and}\;\\[1ex] \alpha&\neq N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+\sqrt {D+\lambda_n}, \quad\forall\, n\in{\mathbb{N}}_0.\end{aligned}$$ When $V=B_r^c$ the same result holds when $D<0$ (replacing the square roots with their real parts) and for $p=1,\infty$. [Proof.]{} For $u\in D_{p,\alpha}(V)$, we use the Kelvin transform $u(x)=|x|^{2-N}v\left (\frac{x}{|x|^2}\right )$, where $v$ is defined in the bounded domain $\Omega=\left\{x\in{\mathbb{R}}^N: x/|x|^2\in V\right\}$, which contains the origin.
Then by elementary computation $$L u(x)=|x|^{-N-2}\tilde{L} v\left (\frac{x}{|x|^2} \right )$$ where $$\begin{aligned} \tilde{L}=\Delta +\tilde c\frac{x}{|x|^2}\cdot\nabla -\frac{\tilde b}{|x|^2},\quad \tilde c:=-c,\quad \tilde b:=b+(N-2)c.\end{aligned}$$ In particular its discriminant satisfies $$\tilde D=\tilde b+\left(\frac{N-2+\tilde c}{2}\right)^2=b+(N-2)c+\left(\frac{N-2-c}{2}\right)^2=b+\left(\frac{N-2+c}{2}\right)^2=D.$$ Setting $y=x/|x|^2$, $dx=|y|^{-2N} dy$, we see that the inequality $$\||x|^\alpha L u\|_{L^p\left(V\right)} \geq C\||x|^{\alpha-2} u\|_{L^p\left(V\right)}$$ is equivalent to $$\||x|^{\tilde \alpha} \tilde L v\|_{L^p\left(\Omega\right)} \geq C\||x|^{{\tilde \alpha}-2} v\|_{L^p\left(\Omega\right)}$$ with the same constant $C$ and $\tilde \alpha:=-\alpha+N+2-2N/p$. The statements then follow from Theorems \[RellichFJ\] and \[RellichOmega\]. Critical cases in $L^p({\mathbb{R}}^N)$ ======================================= In this section we assume that $\Omega$ coincides with ${\mathbb{R}}^N$ and prove that, when Rellich inequalities fail, modified inequalities which include logarithmic terms are still valid. The situation is similar to that of the Hardy inequality, when the classical one fails. By Theorem \[RellichFJ\] Rellich inequalities fail in ${\mathbb{R}}^N$ if and only if $$\begin{aligned} \label{fail} \alpha= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm {\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n},\end{aligned}$$ or equivalently when $$\label{failmod} b +\gamma_p(\alpha,c)+\lambda_n = 0 \quad {\rm for\ some}\ n \in {\mathbb{N}}_0.$$ To study these cases we need an unweighted one-dimensional result for a general second order operator on the half line. \[OneD\] Consider the operator with real constant coefficients $$\Gamma = D^2+\beta D$$ in $(0,\infty)$ and fix $a>0$.
If $\beta \neq 0$, then for every $v \in C_c^2 ({\mathbb{R}}_+)$, $$\label{one-dim-Rellich} \left\|\frac{v}{s}\right\|_{L^p(a,\infty)}\leq C \|\Gamma v\|_{L^p(0,\infty)}$$ for $1<p \le \infty$, and $$\label{one-dim-Rellich1} \left\|\frac{v}{s^{1+{\varepsilon}}}\right\|_{L^1(a,\infty)}\leq C_{\varepsilon}\|\Gamma v\|_{L^1(0,\infty)}$$ for ${\varepsilon}>0$. The weaker inequalities $$\label{one-dim-Rellichw} \left\|\frac{v}{s^2}\right\|_{L^p(a,\infty)}\leq C \|\Gamma v\|_{L^p(0,\infty)}$$ and $$\label{one-dim-Rellichw1} \left\|\frac{v}{s^{2+{\varepsilon}}}\right\|_{L^1(a,\infty)}\leq C_{\varepsilon}\|\Gamma v\|_{L^1(0,\infty)}$$ hold when $\beta=0$. In the proof we need the following lemma. \[GreenFunc\] Let $v\in C_c^2({\mathbb{R}}_+)$ and $f=\Gamma v$, with $\beta \neq 0$. Then $$\begin{aligned} \label{rappresentazione} v(s)=-\frac{1}{\beta}\left (\int_0^s e^{-\beta(s-\sigma)}f(\sigma)\, d\sigma+\int_s^\infty f(\sigma)\,d\sigma \right ).\end{aligned}$$ Moreover, one has $$\label{orthogonal} \int_0^\infty f(\sigma)\,d\sigma= \int_0^\infty e^{\beta \sigma}f(\sigma)\,d\sigma=0.$$ [Proof.]{} Identity (\[orthogonal\]) holds since $1$ and $e^{\beta s}$ are solutions of $\Gamma^* w=0$, where $\Gamma^*=D^2-\beta D$ is the adjoint of $\Gamma$. If $w$ is the right hand side of (\[rappresentazione\]), by the variation of constants formula, $\Gamma w=f$ and, by (\[orthogonal\]) and since $f$ has compact support, $w$ has compact support, too. On the other hand, $\Gamma (v-w)=0$, hence $v-w=c_1 +c_2 e^{-\beta s}$. Since both have compact support in $(0,\infty)$, $c_1=c_2=0$. [Proof.]{} (Proposition \[OneD\]) Let $f:=\Gamma v$ and assume first that $\beta=0$. Then $$\frac{|v(s)|}{s^2} \le s^{-2}\int_0^s (s-\sigma)|f(\sigma)|\, d\sigma \le s^{-1} \int_0^s |f(\sigma)|\, d\sigma$$ and (\[one-dim-Rellichw\]) follows from the Hardy inequality. When $p=1$ we write $$|v(s)|=\left |-\int_0^s d\sigma \int_\sigma ^\infty v''(\xi)\, d\xi\right | \le s\|v''\|_1$$ and (\[one-dim-Rellichw1\]) is immediate.
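Before treating the case $\beta\neq 0$, we record, for completeness, the variation of constants computation used in Lemma \[GreenFunc\]. Writing $w_1(s)=\int_0^s e^{-\beta(s-\sigma)}f(\sigma)\,d\sigma$ and $w_2(s)=\int_s^\infty f(\sigma)\,d\sigma$, one has $w_1'=f-\beta w_1$ and $w_2'=-f$, so that, for $w=-\frac{1}{\beta}(w_1+w_2)$, $$\Gamma w=-\frac{1}{\beta}\left(w_1''+\beta w_1'+w_2''+\beta w_2'\right) =-\frac{1}{\beta}\left(f'-\beta f+\beta^2 w_1+\beta f-\beta^2 w_1-f'-\beta f\right)=f.$$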
We assume now that $\beta \neq 0$ and use (\[rappresentazione\]) $$\begin{aligned} v(s)=-\frac{1}{\beta}\left (\int_0^s e^{-\beta(s-\sigma)}f(\sigma)\, d\sigma+\int_s^\infty f(\sigma)\,d\sigma \right )=:v_1+v_2.\end{aligned}$$ Since (\[orthogonal\]) holds, then $$\frac{|v_2(s)|}{s} \le C\frac{1}{s}\int_0^s |f(\sigma)|\, d\sigma, \quad \left \|\frac{v_2}{s}\right \|_{L^p(a, \infty)} \le C\|f\|_p,$$ if $1<p \le \infty$, by Hardy inequality. When $p=1$, then $|v_2(s)| \le C\|f\|_1$. This shows that (\[one-dim-Rellich\]), (\[one-dim-Rellich1\]) hold for $v_2$. Since by (\[orthogonal\]) $$-\beta v_1(s)=\int_0^s e^{-\beta (s-\sigma)}f(\sigma)\, d\sigma=-\int_s^\infty e^{-\beta(s-\sigma)}f(\sigma)\, d\sigma=e^{-\beta \cdot} \chi_{(0, \infty)}*f (s)=-e^{-\beta \cdot} \chi_{(- \infty,0)}*f (s),$$ the estimate $$\|v_1\|_{L^p(0, \infty)} \le C\|f\|_{L^p(0,\infty)}$$ follows from Young’s inequality for every $1 \le p \le \infty$ and concludes the proof. In the following theorem we concentrate on the singularity at 0, hence we consider only $C^2$-functions vanishing on a neighbourhood of the origin and with a fixed common support which can be assumed to be $ B_{R/2}$. We set $${\cal D}_R=\{u \in C^2({\mathbb{R}}^N): u=0 \ {\rm in\ a\ neighborhood\ of }\ 0, \ \ {\rm spt }\ u \subset B_{R/2} \}.$$ \[critical\] Assume that $$\alpha= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm {\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n}$$ for some $n \in {\mathbb{N}}_0$ or, equivalently, that (\[failmod\]) holds. 
Then for $1<p\le \infty$ there exists a positive constant $C$, independent of $R$, such that for every $u\in {\cal D}_R$ $$\label{Rellichlog} \||x|^\alpha Lu\|_p \geq C\Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-2} u\Big\|_p \quad {\rm when }\ D+\lambda_n \le 0$$ $$\label{Rellichlog1} \||x|^\alpha Lu\|_p \geq C\Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-1} u\Big\|_p \quad {\rm when }\ D+\lambda_n >0.$$\ When $p=1$, inequalities (\[Rellichlog\]) and (\[Rellichlog1\]) hold with $|\log |R^{-1}x||^{-2}$ and $|\log |R^{-1}x||^{-1}$ replaced by $|\log |R^{-1}x||^{-2-{\varepsilon}}$ and $|\log |R^{-1}x||^{-1-{\varepsilon}}$, respectively. [Proof.]{} By scaling we may assume that $R=1$. By Theorem \[RellichFJ\], Rellich inequalities hold in $D_{p,\alpha}({\mathbb{R}}^N) \cap L^p_{ \neq n}$. Then (\[Rellichlog\]) and (\[Rellichlog1\]) hold in ${\cal D}_1 \cap L^p_{ \neq n}$, since the singularity at $0$ is weaker and $u$ has support in $ B_{1/2}$. Since, by Lemma \[projection\], $$L^p({\mathbb{R}}^N)= L^p_{n}({\mathbb{R}}^N) \oplus L^p_{\neq n}({\mathbb{R}}^N)$$ and $L$ preserves both $L^p_{n}({\mathbb{R}}^N)$ and $ L^p_{\neq n}({\mathbb{R}}^N)$, we have to show that (\[Rellichlog\]) or (\[Rellichlog1\]) or their variants for $p=1$ hold in ${\cal D}_1 \cap L^p_{n}({\mathbb{R}}^N)$. Let $u(\rho, \omega)=c(\rho)P(\omega)$, where $P$ is a fixed spherical harmonic of order $n$.
Using the transformation $c(\rho)=\rho^{-\alpha+2-\frac{N}{p}} v(-\log \rho)$ we have $$\begin{aligned} &\||x|^\alpha Lu\|^p_p =\int_{{\mathbb{R}}^N}|x|^{\alpha p}\left|\Delta u+c\frac{x}{|x|^2}\nabla u-\frac{b}{|x|^2}u\right|^p\ dx\\[1ex] &=\int_{S^{N-1}}|P(\omega)|^p\int_0^{\frac12} \rho^{\alpha p+N-1}\left|\frac{\partial^2 c(\rho)}{\partial\rho^2}+\frac{(N-1+c)}{\rho} \frac{\partial c(\rho)}{\partial \rho}-\frac{\lambda_n+b}{\rho^2}c(\rho)\right|^pd\rho\ d\omega\\[1ex] &=\int_{S^{N-1}}|P(\omega)|^p\int_{\log 2}^\infty \left|\frac{\partial^2 v(s)}{\partial s^2}+\left(2\alpha-2-N +\frac{2N}{p}-c\right)\frac{\partial v(s)}{\partial s}-(\gamma_p(\alpha,c)+b+\lambda_n)v(s)\right|^p ds\ d\omega \\[1ex] &=\int_{S^{N-1}}|P(\omega)|^p\int_{\log 2}^\infty \left|\frac{\partial^2 v(s)}{\partial s^2}+\left(2\alpha-2-N +\frac{2N}{p}-c\right)\frac{\partial v(s)}{\partial s} \right|^p ds\ d\omega,\end{aligned}$$ since $\gamma_p(\alpha,c)+b+\lambda_n=0$. At this point we apply Proposition \[OneD\] with $\beta=2\alpha-2-N+\frac{2N}{p}-c$ after noticing that $$\begin{aligned} \int_{{\mathbb{R}}^N}|x|^{\alpha p}\left|\frac{u(x)}{|x|^2|\log |x||^\gamma}\right|^p\ dx= \int_{S^{N-1}}|P(\omega)|^p\int_{\log 2}^\infty \left|\frac{v(s)}{s^\gamma}\right|^p ds\ d\omega.\end{aligned}$$ Observe that, since $\alpha= N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}\pm {\textrm{\emph{Re}\,}}\sqrt {D+\lambda_n}$, we have $\beta\neq 0$ if and only if $D+\lambda_n>0$.
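The one-dimensional machinery above can be sanity-checked numerically. The following sketch (Python; the bump $v(s)=((s-1)(3-s))^3$ supported in $[1,3]$ and the value $\beta=3/2$ are arbitrary illustrative choices, not taken from the text) verifies the orthogonality relations (\[orthogonal\]) and the representation formula (\[rappresentazione\]) of Lemma \[GreenFunc\] by plain quadrature:

```python
import math

BETA = 1.5  # sample value beta != 0; any nonzero beta illustrates the lemma

def v(s):
    # C^2 function with compact support [1, 3]: triple zeros at both endpoints
    return ((s - 1.0) * (3.0 - s)) ** 3 if 1.0 < s < 3.0 else 0.0

def f(s):
    # f = Gamma v = v'' + BETA * v', from the explicit polynomial derivatives
    if not 1.0 < s < 3.0:
        return 0.0
    a, b = s - 1.0, 3.0 - s
    dv = 3.0 * (a * b) ** 2 * (b - a)
    d2v = 6.0 * (a * b) * (b - a) ** 2 - 6.0 * (a * b) ** 2
    return d2v + BETA * dv

def simpson(g, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    acc = g(lo) + g(hi)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * g(lo + k * h)
    return acc * h / 3.0

# orthogonality relations: f integrates to zero against 1 and e^{BETA * s}
ortho_1 = simpson(f, 1.0, 3.0)
ortho_exp = simpson(lambda s: math.exp(BETA * s) * f(s), 1.0, 3.0)

def v_repr(s):
    # right-hand side of the representation formula;
    # f is supported in [1, 3], so all integrals can be truncated there
    t1 = simpson(lambda u: math.exp(-BETA * (s - u)) * f(u), 1.0, min(s, 3.0)) if s > 1.0 else 0.0
    t2 = simpson(f, s, 3.0) if s < 3.0 else 0.0
    return -(t1 + t2) / BETA

max_err = max(abs(v_repr(s) - v(s)) for s in (1.5, 2.0, 2.5, 5.0))
```

In exact arithmetic both orthogonality integrals vanish and the right-hand side reproduces $v$; the quadrature confirms this up to discretization error.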
Best constants and remainder terms ================================== When $D:=b+\left(\frac{N-2+c}{2}\right)^2>0$ and $$N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D}<\alpha < N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+\sqrt {D},$$ we have seen in Proposition \[easy\] that Rellich inequalities (\[Intr 1\]) hold in $D_{p, \alpha}(\Omega)$ with the best constant $$\label{best} C:=b+\Bigl(\frac{N}{p}-2+\alpha\Bigr)\Bigl(\frac{N}{p'}-\alpha+c\Bigr).$$ As usual, $\Omega$ is an open bounded and connected set containing $0$ and with a smooth boundary, or $\Omega ={\mathbb{R}}^N$. Best constants are not known in other cases, except for $p=2$ or in special subspaces, see [@rellich]. A direct proof that, in the above range, the constant $C$ is optimal can be achieved by truncating the function $u(x)=|x|^{2-\alpha-N/p}$ as in Example (\[esem1\]). \[c1\] Assume $1<p<\infty$. Under the above assumption on $\alpha$, Rellich inequalities hold in $D_{p,\alpha}({\mathbb{R}}^N)\cap L^p_{\ge 1}({\mathbb{R}}^N)$ with a constant $C_1>C$. Arguing as in Section 2, we have to show that the inequality $$\|\mu v-Av\|_p \ge C_1 \|v\|_p, \quad v \in D_{p,max}({\mathbb{R}}^N)$$ holds for a suitable $C_1>C$. We revisit Theorem \[dissipativity\] where, we recall, $\mu=\lambda-\omega_p$ and $\lambda=\mu+\omega_p=b+\gamma_p(\alpha, c)=C$ (see also Lemma \[Parameters\]). Estimate (\[contractivity\]) holds in $D_{p,max}({\mathbb{R}}^N)$ with a suitable $\omega_p^1 >\omega_p$, by the results in Section 2 of [@met-soba-spi3], see in particular Proposition 2.8 and Remark 2.9 with $\beta=0$ therein. It follows that $\mu=\lambda-\omega_p=\lambda_1-\omega_p^1$ with $\lambda_1>\lambda$. Then estimate (\[contractivity\]) holds with $\lambda_1 >\lambda=C$. The remainder term can therefore only arise from radial functions. To deal with them, we need the following auxiliary result.
\[aux\] Let $1<p<\infty$ and $\Gamma=D^2+\beta D-\lambda$ be an operator with real constant coefficients on $(0, \infty)$. Then for every $v \in C_c^2(0,\infty)$ and $\lambda >0$ $$\|\Gamma v\|_p^p -\lambda^p \|v\|_p^p \ge \lambda^{p-1}\frac{p-1}{p^2} \int_0^\infty \frac{|v(s)|^p}{s^2}\, ds.$$ [Proof. ]{} We have $$\int_0^\infty (\lambda v-v''-\beta v')v|v|^{p-2}=\int_0^\infty \left (\lambda |v|^p +(p-1) |v'|^2 |v|^{p-2}-\beta v'v|v|^{p-2} \right ).$$ Since $v'v|v|^{p-2}$ is the derivative of $p^{-1}|v|^p$, the integral of the last term vanishes. By the Hardy inequality of Proposition \[hardy2\], with $N=1, \beta =0$ we have $$\int_0^\infty |v'|^2|v|^{p-2} \ge \frac{1}{p^2} \int_0^\infty \frac{|v(s)|^p}{s^2}\, ds$$ and therefore $$\lambda \|v\|_p^p +\frac{p-1}{p^2} \int_0^\infty \frac{|v(s)|^p}{s^2}\, ds \le \|\Gamma v\|_p \|v\|_p^{p-1}.$$ Let $$A^p=\|v\|_p^p, \quad B^p=\frac{p-1}{p^2} \int_0^\infty \frac{|v(s)|^p}{s^2}\, ds, \quad C=\|\Gamma v\|_p.$$ Then from $\lambda A^p+B^p \le CA^{p-1}$ we get $\lambda A \le C$ and $$C^p-\lambda^p A^p \ge C^p-C\lambda^{p-1}A^{p-1}+\lambda^{p-1}B^p=C(C^{p-1}-\lambda^{p-1}A^{p-1})+\lambda^{p-1}B^p \ge \lambda^{p-1}B^p.$$ The main result of this section is stated below. As in the previous section we formulate it for functions belonging to $${\cal D}_R=\{u \in C^2({\mathbb{R}}^N): u=0 \ {\rm in\ a\ neighborhood\ of }\ 0, \ \ {\rm spt }\ u \subset B_{R/2} \}.$$ \[Rellichremainder\] Let $1<p<\infty$, $D:=b+\left(\frac{N-2+c}{2}\right)^2>0$ and $$N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}-\sqrt {D}<\alpha < N\Bigl (\frac12-\frac1p\Bigr)+1+\frac{c}{2}+\sqrt {D}.$$ If $C$ is the best constant defined in (\[best\]), then there exists $c>0$, independent of $R$, such that for every $u \in {\cal D}_R$ $$\label{RR} \Big\||x|^\alpha Lu\Big\|_p^p -C^p \Big\||x|^{\alpha-2} u\Big\|_p^p \ge c \Big\||x|^{\alpha-2}\left|\log |R^{-1}x|\right|^{-\frac{2}{p}} u\Big\|_p^p.$$ [Proof.]{} By scaling we may assume that $R=1$.
If $u \in {\cal D}_1$, we split $u=u_0+u_1$, where $u_0$ is radial and $u_1 \in L^p_{\ge 1}({\mathbb{R}}^N)$. By Lemma \[c1\], inequality (\[RR\]) holds for $u_1$. For $u_0$ we proceed as in Theorem \[critical\] writing $u_0(\rho)=\rho^{-\alpha+2-\frac{N}{p}} v(-\log \rho)$. Then $$\begin{aligned} \||x|^\alpha Lu_0\|^p_p =N\omega_N\int_{\log 2}^\infty \left|\frac{\partial^2 v(s)}{\partial s^2}+\left(2\alpha-2-N +\frac{2N}{p}-c\right)\frac{\partial v(s)}{\partial s}-(\gamma_p(\alpha,c)+b)v(s)\right|^p ds.\end{aligned}$$ Next we use Lemma \[aux\] with $\lambda=\gamma_p(\alpha,c)+b=C$ to obtain $$\begin{aligned} \||x|^\alpha Lu_0\|^p_p-C^p\||x|^{\alpha-2}u_0\|_p^p&= N\omega_N (\|\Gamma v\|_p^p-C^p \|v\|_p^p) \ge N\omega_N C^{p-1}\frac{p-1}{p^2}\int_0^\infty \frac{|v(s)|^p}{s^2}\, ds \\[1ex] &=C^{p-1}\frac{p-1}{p^2}\Big\||x|^{\alpha-2}\big|\log |x|\big|^{-\frac{2}{p}} u_0\Big\|_p^p.\end{aligned}$$ The general case now follows, since $L^p_0({\mathbb{R}}^N), L^p_{\ge 1}({\mathbb{R}}^N)$ are invariant under $L$ and under multiplication by radial weights and since $|u|^p_p:=\|u_0\|_p^p +\|u_1\|_p^p$ is an equivalent norm on $L^p({\mathbb{R}}^N)$. Appendix ======== Approximation on Sobolev spaces on domains ------------------------------------------ Let $\Omega$ be a $C^{2,\beta}$ bounded connected open subset of ${\mathbb{R}}^N$ and let $A$ be a uniformly elliptic operator $A=\mbox{tr}(A(x)D^2)+c(x)\cdot\nabla-b(x)$, with $C^\beta$ coefficients, endowed with Dirichlet boundary conditions. We recall that for $1<p<\infty$ $$\begin{aligned} D_p(\Omega)=W^{2,p}(\Omega)\cap W^{1,p}_0(\Omega),\end{aligned}$$ whereas for $p=1$ $$\begin{aligned} D_1(\Omega)=\left\{u\in W^{1,1}_0(\Omega): \mbox{tr}(A(x)D^2u)\in L^1(\Omega)\right\},\end{aligned}$$ and for $p=\infty$ $$\begin{aligned} D_\infty(\Omega)=\left\{u\in C^1(\Omega)\cap C_0(\overline{\Omega}): \mbox{tr}(A(x)D^2u)\in C(\overline{\Omega})\right\},\end{aligned}$$ all endowed with the graph norm.
\[Sobolev approximation 1,infty\] Under the above assumptions the set $$C^2_0(\Omega)=\{u \in C^2(\overline{\Omega}): u=0 \ {\rm on}\ \partial \Omega \}$$ is dense in $D_p(\Omega)$ for every $1 \le p\le \infty$. [Proof. ]{} Let $\lambda>0$ be such that $\lambda-A$ is invertible from $D_p(\Omega)$ to $L^p(\Omega)$. If $u \in D_p(\Omega)$, $f=\lambda u-Au$ and $(f_n) \subset C^\beta (\Omega)$ tends to $f$ in $L^p(\Omega)$, then $u_n=(\lambda-A)^{-1}f_n$ belongs to $C^{2,\beta}(\Omega)$, by Schauder theory, vanishes at $\partial \Omega$ and approximates $u$ in the graph norm. The following partition of unity of $\Omega$ has been used several times. \[Partition unity\] Let $0\leq\beta\leq 1$ and let $\Omega$ be a bounded connected open subset of ${\mathbb{R}}^N$ whose boundary $\partial \Omega$ is of class $C^{2,\beta}$. Then there exists $\delta>0$ such that the distance function $x\mapsto \mbox{dist}(x,\partial\Omega)$ is $C^{2,\beta}$ over the set $$\begin{aligned} K_{\delta}:=\left\{x\in{\mathbb{R}}^N:\ \mbox{dist}(x,\partial\Omega)<\delta\right\}.\end{aligned}$$ In particular $K_{\delta}$ and the subset $$\begin{aligned} \Omega_{\delta}:=K_{\delta}\cap\Omega\end{aligned}$$ have $C^{2,\beta}$ boundary. Furthermore there exists an open subset $\Omega_0\subset\subset\Omega$ for which $\overline\Omega=\overline\Omega_\delta\cup\Omega_0$ and there exists a partition of unity $\{\eta_\delta^2,\eta_0^2\}$ such that - $\eta_\delta\in C_c^\infty(K_{\delta})$, $0\leq\eta_\delta\leq 1$, $\eta_\delta=1$ in $\overline\Omega_{\frac \delta 2}$; - $\eta_0\in C_c^\infty(\Omega_0)$, $0\leq\eta_0\leq 1$; - $\eta_\delta^2+\eta_0^2=1$ in $\overline\Omega$. [@gil-tru Lemma 14.16] proves the case $\beta=0$ and that, for sufficiently small $\delta$, for every point $x\in K_{\delta}$ there exists a unique $y\in\partial\Omega$ such that $|x-y|=d(x,\partial\Omega)$. The result for $\beta>0$ then follows by [@Li-Nirenberg]. The existence of such a partition of unity is a standard result.
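The normalization producing a pair whose squares sum to $1$ is elementary: if $\chi_\delta,\chi_0\ge 0$ are smooth cutoffs with $\chi_\delta^2+\chi_0^2>0$ everywhere, one may take $\eta_i=\chi_i(\chi_\delta^2+\chi_0^2)^{-1/2}$. A one-dimensional sketch of this construction (Python; the domain $\Omega=(0,1)$ and the collar width $\delta=0.2$ are sample choices for illustration only, not from the text):

```python
import math

DELTA = 0.2  # sample collar width delta for Omega = (0, 1)

def _phi(t):
    # exp(-1/t) for t > 0, extended by 0: the standard smooth-cutoff building block
    return math.exp(-1.0 / t) if t > 0 else 0.0

def _step(t):
    # smooth transition: equals 0 for t <= 0 and 1 for t >= 1
    return _phi(t) / (_phi(t) + _phi(1.0 - t))

def eta_pair(x):
    """Return (eta_delta(x), eta_0(x)) with eta_delta**2 + eta_0**2 == 1 on (0, 1)."""
    d = min(x, 1.0 - x)  # dist(x, boundary of Omega)
    # chi_delta is 1 on the collar d <= delta/2 and vanishes for d >= delta
    chi_delta = 1.0 - _step((d - DELTA / 2.0) / (DELTA / 2.0))
    chi_0 = 1.0 - chi_delta          # supported away from the boundary
    r = math.sqrt(chi_delta ** 2 + chi_0 ** 2)   # never zero: chi_delta + chi_0 = 1
    return chi_delta / r, chi_0 / r
```

Near the boundary the pair is $(1,0)$, deep inside it is $(0,1)$, and the squares sum to $1$ everywhere by construction.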
Some results on spectral theory {#spectral} -------------------------------- We collect some definitions and results from spectral theory which are used throughout the paper. Let $X$ be a Banach space and let $A$ be a closed operator $A:D(A)\subseteq X\to X$. The spectrum of $A$ is denoted by $\sigma (A)$ and the resolvent set ${\mathbb{C}}\setminus \sigma (A)$ by $\rho (A)$. The set $$P\sigma(A):=\{\lambda\in{\mathbb{C}}: \lambda-A\ \textrm{is not injective}\}$$ is called the point spectrum of $A$. Moreover each $\lambda\in P\sigma(A)$ is called an eigenvalue and each $0\neq x\in D(A)$ satisfying $(\lambda-A)x=0$ is an eigenvector of $A$ (corresponding to $\lambda$). \[defi A spectrum\] The set $$A\sigma(A):=\{\lambda\in{\mathbb{C}}: \lambda-A\ \textrm{is not injective or}\ \rm rg(\lambda-A)\ \textrm{is not closed in X}\}$$ is called the approximate point spectrum of $A$. Obviously $P\sigma(A)\subseteq A\sigma(A)$. \[defi R spectrum\] The set $$R\sigma(A):=\{\lambda\in{\mathbb{C}}: \rm rg(\lambda-A)\ \textrm{is not dense in X}\}$$ is called the residual spectrum of $A$. Note that $P\sigma (A) \subset A\sigma (A)$, that $P\sigma (A)$ and $R\sigma (A)$, as well as $A\sigma (A)$ and $R\sigma (A)$ may overlap and that $\sigma (A)=A\sigma (A) \cup R\sigma (A)$. ([@engel-nagel Lemma 1.9, Chapter IV])\[Char Aspectrum\] A number $\lambda\in{\mathbb{C}}$ belongs to $A\sigma(A)$ if and only if there exists a sequence $(x_n)_{n\in{\mathbb{N}}}\subset D(A)$, called an approximate eigenvector, such that $\|x_n\|=1$ and $\lim_{n\to\infty}\|Ax_n-\lambda x_n\|=0$. The following result is an elementary consequence of the previous Lemma. \[Rellich-spectrum\] The following properties are equivalent - There exists $C>0$ such that $$\begin{aligned} \|x\|\leq C\|\lambda x-Ax\|,\quad \forall x\in D(A);\end{aligned}$$ - $\lambda$ does not belong to the approximate point spectrum of $A$. The next Proposition implies that $A\sigma(A)$ is never empty. 
\[boundary-spectrum\] [@engel-nagel Proposition 1.10, Chapter IV] The topological boundary of the spectrum is contained in the approximate point spectrum. Spectrum of a second order ordinary differential operator --------------------------------------------------------- We present the following elementary result on the spectrum of the second order ordinary differential operator $B=D^2+\beta D$ in $L^p([0, \infty[)$, endowed with Dirichlet boundary condition at 0, that is $$D(B)=\{u \in W^{2,p}([0, \infty[): u(0)=0\}.$$ As usual, $L^\infty([0,+\infty))$ stands for $C_0^0([0,+\infty))$. Here $\beta\in{\mathbb{R}}$ and we recall that $$\begin{aligned} {\cal Q}&=\left\{\lambda\in {\mathbb{C}}: \ ({\rm Im}\lambda)^2 \leq -\beta^2 {\rm Re }\lambda \right\},\quad {\cal P}=\left\{\lambda\in {\mathbb{C}}: \ ({\rm Im}\lambda)^2 = -\beta^2 {\rm Re }\lambda \right\}.\end{aligned}$$ Note that $${\cal P}= \{-\xi^2+i \beta \xi\;;\;\xi\in {\mathbb{R}}\}$$ and that, for $\lambda \in {\mathbb{R}}$, $$\label{distance parabola} {\rm dist}(\lambda,{\cal P})^2 = \begin{cases} \lambda^2 & {\rm if}\ \lambda\geq-\frac{\beta^2}{2}, \\[1ex] \beta^2(-\lambda-\frac{\beta^2}{4}) & {\rm if}\ \lambda<-\frac{\beta^2}{2}. \end{cases}$$ Observe that the spectrum of $B$ in $L^p({\mathbb{R}})$ is given by $\cal P$ and consists of approximate eigenvalues. This can be seen by noticing that the spectrum is independent of $p$ and using the Fourier transform in $L^2({\mathbb{R}})$.\ For $\lambda \in {\mathbb{C}}$, we consider the solutions of the homogeneous equation $\lambda u -Bu=0$ given by $e^{\mu_it}$, $i=1,2$ where $$\mu_1=\frac{-\beta-\sqrt{\beta^2+4\lambda}}{2},\quad \mu_2=\frac{-\beta+\sqrt{\beta^2+4\lambda}}{2}.$$ When $\lambda=-\beta^2/4$ then $\mu_1=\mu_2=-\beta/2$ and we replace $e^{\mu_2 t}$ by $te^{-\frac{\beta}{2}t}$. \[sqrt\] The inequality ${\rm Re }\sqrt{\beta^2+4\lambda}<|\beta|$ holds if and only if $\lambda \in \overset{\mathrm{o}}{\cal Q}$.
Similarly, ${\rm Re }\sqrt{\beta^2+4\lambda}>|\beta|$ if and only if $\lambda \not \in {\cal Q}$ and ${\rm Re }\sqrt{\beta^2+4\lambda}=|\beta|$ if and only if $\lambda \in {\cal P}$. Here $\sqrt z$ denotes any square root of $z$ with non negative real part. [Proof.]{} If $\sqrt{\beta^2+4\lambda}=x+iy$, with $x \ge 0$, then $4\lambda=(x^2-y^2-\beta^2)+2i xy$ and $x=|\beta|$ if and only if $({\rm Im}\lambda)^2 =-\beta^2 {\rm Re }\lambda$. The other cases are similar. \[ODE\] The spectrum of $B=D^2+\beta D$ in $L^p([0,+\infty))$, with Dirichlet boundary condition at 0, is given by $\sigma (B)={\cal Q}$. More specifically we have - if $\beta>0$, then $\sigma(B)=A\sigma(B)={\cal Q}$, $P\sigma(B) \supset \overset{\mathrm{o}}{\cal Q}$; - if $\beta=0$, then $\sigma(B)=A\sigma(B)=(-\infty, 0]$; - if $\beta<0$, then $ A\sigma(B)={\cal P}$, $R\sigma(B)\setminus A\sigma(B)=\overset{\mathrm{o}}{\cal Q}$. [Proof.]{} Let us prove preliminarily that $\cal{Q}^{\mbox{c}}\subseteq\rho(B)$ in all cases. If $\lambda \notin \cal{Q}$ by the lemma above ${\rm Re }\sqrt{\beta^2+4\lambda}>|\beta|$, hence ${\rm Re}\,\mu_1<0<{\rm Re}\,\mu_2$. It is then easy to see that $\lambda-B$ is invertible and that its inverse is given by the Green function $$G(t,s)=\left\{ \begin{array}{ll} \displaystyle\frac{u_1(t)u_2(s)}{W(s)} & \quad t\leq s,\\[3ex] \displaystyle\frac{u_1(s)u_2(t)}{W(s)} & \quad t\geq s; \end{array}\right.\\$$ where $u_1(t)=e^{\mu_2t}-e^{\mu_1t}$, $u_2(t)=e^{\mu_1t}$ and $W(t)=(\mu_1-\mu_2)e^{(\mu_1+\mu_2)t}=(\mu_1-\mu_2)e^{-\beta t}$ is their Wronskian. Let us suppose now that $\lambda\in\overset{\mathrm{o}}{\cal Q}$ and assume first $\beta>0$. Then ${\rm Re }\sqrt{\beta^2+4\lambda}<\beta$ and ${\rm Re } \mu_1 \leq {\rm Re } \mu_2<0$. It follows that $\lambda $ is an eigenvalue with eigenfunction $u(t)=e^{\mu_1t}-e^{\mu_2t}$ (or $te^{-\frac{\beta}{2}t}$ when $\lambda=-\beta^2/4$). 
This proves that $\overset{\mathrm{o}}{\cal Q}\subseteq P\sigma(B)$ and case (i) is done, since the boundary of the spectrum is always contained in the approximate point spectrum, see Proposition \[boundary-spectrum\].\ Assume now $\beta<0$ and still that $\lambda\in\overset{\mathrm{o}}{\cal Q}$. Then ${\rm Re }\sqrt{\beta^2+4\lambda}<-\beta$ and $0 <{\rm Re }\, \mu_1 \le {\rm Re }\, \mu_2$, hence $\lambda-B$ is injective. Moreover, $\lambda-B$ is invertible with a continuous inverse from its domain onto the closed subspace $$X=\left\{f\in L^p\left([0,+\infty)\right):\ \int_0^\infty f(e^{-\mu_1s}-e^{-\mu_2s})ds=0\right\}$$ (with the usual change here and in what follows if $\lambda=-\beta^2/4$). Indeed if $u\in D(B)$ set $f= (\lambda-B)u$ and let $B^*=D^2-\beta D$ be the formal adjoint of $B$. Since $(e^{-\mu_1s}-e^{-\mu_2s})(0)=0$ and $(\lambda -B^*)(e^{-\mu_1s}-e^{-\mu_2s})=0$, one has $$\begin{aligned} \int_0^\infty f(e^{-\mu_1s}-e^{-\mu_2s})ds&=\int_0^\infty (\lambda-B)u\,(e^{-\mu_1s}-e^{-\mu_2s})ds\\ &=\int_0^\infty u(\lambda -B^*)(e^{-\mu_1s}-e^{-\mu_2s})ds=0.\end{aligned}$$ On the other hand, if $f\in L^p([0,+\infty[)$ satisfies $\int_0^\infty f(e^{-\mu_1s}-e^{-\mu_2s})ds=0$, by the variation of constants method, one finds that $$u(t)=\frac{1}{\mu_1-\mu_2}e^{\mu_2 t}\int_t^\infty e^{-\mu_2 s}f(s)ds+\frac{1}{\mu_2-\mu_1}e^{\mu_1 t}\int_t^\infty e^{-\mu_1 s}f(s)ds$$ satisfies $u(0)=0$, $u \in D(B)$ and $(\lambda-B)u=f$. This proves that $\lambda-B$ is injective and that $rg(\lambda-B)=X\subset L^p([0,\infty[)$ which, recalling Definitions \[defi A spectrum\], \[defi R spectrum\], gives $\overset{\mathrm{o}}{\cal Q}\subseteq R\sigma(B)\setminus A\sigma(B)$. Using again Proposition \[boundary-spectrum\], (iii) is proved.
When $\beta=0$ one sees that $A\sigma(B) \supset(-\infty,0]$ by truncating the functions $\sin (\sqrt {-\lambda}\, t)$.\ An analogous result can be obviously proved in $L^p(]-\infty,0])$ using the isometry $$S:L^p([0,\infty[)\to L^p(]-\infty,0]),\quad Su(t)=u(-t).$$ \[ODE2\] The spectrum of $B=D^2+\beta D$ in $L^p(]-\infty,0])$, with Dirichlet boundary condition at 0, is given by $\sigma (B)={\cal Q}$. More specifically we have - if $\beta<0$, then $\sigma(B)=A\sigma(B)={\cal Q}$, $P\sigma(B) \supset \overset{\mathrm{o}}{\cal Q}$; - if $\beta=0$, then $\sigma(B)=A\sigma(B)=(-\infty, 0]$; - if $\beta>0$, then $ A\sigma(B)={\cal P}$, $R\sigma(B)\setminus A\sigma(B)=\overset{\mathrm{o}}{\cal Q}$. [99]{} [[M. Abramowitz, I.A. Stegun:]{}[* Handbook of mathematical functions with formulas, graphs, and mathematical tables,*]{}[ National Bureau of Standards Applied Mathematics Series [**55**]{}, Washington, D.C. 1964.]{}]{} [[K. Adimurthi, S. Santra:]{}[ Generalized Hardy-Rellich inequalities in critical dimension and its applications, ]{}[* Commun. Contemp. Math.,*]{}[** 11**]{}[ (2009), 367-394]{}]{}. [[H. Ando, A. Detalla, T. Horiuchi:]{}[ Sharp remainder terms of the Rellich inequality and its application,]{}[*  Bull. Malays. Math. Sci. Soc.,*]{}[** 35**]{}[  (2012), 519-528]{}]{}. [[W. Arendt, K. Räbiger, A. Sourour: ]{}[ Spectral properties of the operator equation $AX+XB=Y$, ]{}[* Quart. J. Math. Oxford,* ]{}[** 45**]{}[  (1994), 133-149]{}]{}. [[W. Arendt, A. Bukhvalov:]{}[ Integral representations of resolvents and semigroups,]{}[* Forum Math.,*]{}[**  6**]{}[  (1994), 111-135.]{}]{} [[G. Barbatis, A. Tertikas: ]{}[ On a class of Rellich inequalities, ]{}[*  J. Comp. Appl. Math.,* ]{}[** 194**]{}[  (2006), 156-172]{}]{}. [[P. Caldiroli, R. Musina:]{}[ Rellich inequalities with weights,]{}[* Calc. Var. Partial Differential Equations,*]{}[** 45**]{}[ (2012), 147-164.]{}]{} [[G. Calvaruso, G. Metafune, L. Negro, C. 
Spina:]{}[ Optimal kernel estimates for elliptic operators with second order discontinuous coefficients,]{}[* Arxiv?,*]{}[** **]{}[ to appear.]{}]{} [[T. Coulhon, A. Sikora:]{}[ Gaussian heat kernel upper bounds via Phragmén-Lindelöf theorem,]{}[*  Proc. London Math. Soc.,*]{}[** 96**]{}[  (2008), 507–544]{}]{}. [[E. B. Davies: ]{}[* Heat Kernels and Spectral Theory,* ]{}[  Cambridge University Press, 1989]{}]{}. [[E. B. Davies, A. M. Hinz:]{}[ Explicit constants for Rellich inequalities in $L^p(\Omega)$,]{}[* Math. Z.,*]{}[** 227**]{}[ (1998), 511-523.]{}]{} [[K.J. Engel, R. Nagel:]{}[* One parameter semigroups for linear evolution equations,*]{}[ Springer-Verlag, Berlin, (2000)]{}]{}. [[A. Erdelyi:]{}[* Table of Integral Transforms, vol. I,*]{}[  McGraw-Hill, 1954]{}]{}. [[S. Fornaro, L. Lorenzi:]{}[ Generation results for elliptic operators with unbounded diffusion coefficients in $L^p$ and $C_b$-spaces,]{}[* Discrete and Continuous Dynamical Systems,*]{}[** 18**]{}[ (2007), 747-772]{}]{}. [[F. Gazzola, H. C. Grunau, E. Mitidieri:]{}[ Hardy inequalities with optimal constants and remainder terms,]{}[* Trans. Am. Math. Soc.* ]{}[** 356(6)**]{}[ (2004), 2149-2168.]{}]{} [[D. Gilbarg, N.S. Trudinger:]{}[* Elliptic Partial Differential Equations of Second Order,* ]{}[ Second edition, Springer-Verlag, Berlin, (2001)]{}]{}. [[N. Ghoussoub, A. Moradifam:]{}[ On the best possible remaining term in the Hardy inequality, ]{}[*  Proc. Natl. Acad. Sci. USA,*]{}[** 105**]{}[ (2008), 13746-13751]{}]{}. [[A. Grigor’yan:]{}[* Heat Kernel and Analysis on Manifolds,*]{}[ AMS/IP Studies in Advanced Mathematics, Vol. 47, American Mathematical Society, 2009]{}]{}. [[N. V. Krylov:]{}[* Lectures on Elliptic and Parabolic Equations in Sobolev Spaces,* ]{}[ Graduate Studies in Mathematics, 96, American Mathematical Society, (2008)]{}]{}. [[G. Metafune, M. Sobajima, C.
Spina:]{}[ Elliptic and parabolic problems for a class of operators with discontinuous coefficients,]{}[*  Annali Scuola Normale Superiore di Pisa Cl. Sc.,*]{}[** **]{}[ (to appear).]{}]{} [[G. Metafune, M. Sobajima, C. Spina:]{}[  Kernel estimates for elliptic operators with second order discontinuous coefficients,]{}[* J. Evol. Eq.*]{}[** 17**]{}[ (2017), 485-522.]{}]{} [[G. Metafune, C. Spina: ]{}[ An integration by parts formula in Sobolev spaces, ]{}[* Mediterranean Journal of Mathematics,* ]{}[** 5** ]{}[ (2008), 359-371]{}]{}. [[G. Metafune, L. Negro, C. Spina:]{}[ Sharp kernel estimates for elliptic operators with second-order discontinuous coefficients,]{}[* Journal of Evolution Equations,*]{}[** 18**]{}[ (2018), 467-514.]{}]{} [[G. Metafune, M. Sobajima, C. Spina:]{}[ Weighted Calderón-Zygmund and Rellich inequalities in $L^p$,]{}[* Mathematische Annalen,*]{}[** 361**]{}[ (2015), 313-366.]{}]{} [[G. Metafune, M. Sobajima, C. Spina:]{}[ Rellich and Calderón-Zygmund inequalities for an operator with discontinuous coefficients,]{}[*  Annali di Matematica Pura Appl.,*]{}[** 195**]{}[ (2016), 1305-1331.]{}]{} [[E. Mitidieri: ]{}[ A simple approach to Hardy inequalities, ]{}[* Mathematical Notes,* ]{}[** 67** ]{}[  (2000), 479-486]{}]{}. [[M. Morimoto: ]{}[ Analytic Functionals on the Sphere, ]{}[*  American Mathematical Soc., Translations of Mathematical Monographs,*]{}[**  178**]{}[ (1998)]{}]{}. [[R. Musina:]{}[ Optimal Rellich-Sobolev constants and their extremals,]{}[* Differential and Integral Equations,* ]{}[** 27**]{}[  (2014), 579-600]{}]{}. [[R. Nagel ([ed]{}):]{}[ One-parameter Semigroups of Linear Operators,]{}[* Lecture Notes in Mathematics,*]{}[** 1184**]{}[ (1980), 1-24]{}]{}. [[Y. Li, L. Nirenberg:]{}[  Regularity of the distance function to the boundary,]{}[* Rend. Accad. Naz. Sci. XL Mem. Mat. Appl.,*]{}[** 29**]{}[ (2005), 257-264]{}]{}. [[N. Okazawa:]{}[ $L^p$-theory of Schrödinger operators with strongly singular potentials,]{}[* Japan.
J. Math.,*]{}[** 22**]{}[ (1996), 199-239]{}]{}. [[E. M. Ouhabaz:]{}[* Analysis of Heat Equations on Domains,*]{}[ Princeton University Press]{}]{}. [[F. Rellich: ]{}[ Halbbeschränkte Differentialoperatoren höherer Ordnung,]{}[* Proceedings of the International Congress of Mathematicians,*]{}[** III**]{}[ (1954), 243-250]{}]{}. [[M. Sano, F. Takahashi: ]{}[ Improved Rellich type inequalities in ${\mathbb{R}}^N$,]{}[* Springer Proceedings in Mathematics and Statistics,*]{}[** 176**]{}[  (2016) Geometric Properties for Parabolic and Elliptic PDE’s, 241-255]{}]{}. [[E.M. Stein, G.L. Weiss:]{}[* Introduction to Fourier Analysis on Euclidean Spaces,*]{}[ Princeton University Press, 1971]{}]{}. [[A. Tertikas, N.B. Zographopoulos:]{}[ Best constants in the Hardy-Rellich inequalities and related improvements,]{}[* Adv. Math.,*]{}[** 209**]{}[ (2007), 407-459]{}]{}. [^1]: Dipartimento di Matematica “Ennio De Giorgi”, Università del Salento, C.P.193, 73100, Lecce, Italy. email: giorgio.metafune@unisalento.it [^2]: Dipartimento di Matematica “Ennio De Giorgi”, Università del Salento, C.P.193, 73100, Lecce, Italy. email: luigi.negro@unisalento.it [^3]: Department of Mathematics, Tokyo University of Science, Japan. email: msobajima1984@gmail.com [^4]: Dipartimento di Matematica “Ennio De Giorgi”, Università del Salento, C.P.193, 73100, Lecce, Italy. email: chiara.spina@unisalento.it
--- abstract: 'We investigate the (small) quantum cohomology ring of the moduli spaces ${\overline{\mathcal{M}}_{0,n}}$ of stable $n$-pointed curves of genus $0$. In particular, we determine an explicit presentation in the case $n=5$ and we outline a computational approach to the case $n=6$.' author: - Claudio Fontanari title: | Quantum cohomology of moduli spaces\ of genus zero stable curves --- Introduction ============ The (small) quantum cohomology ring of a smooth algebraic variety with $\star$-product defined in terms of ($3$-point) Gromov-Witten invariants is a formal deformation of the classical Chow ring in the sense that the $\star$-product specializes to the cup-product when the formal parameters are set to $0$. The notion of quantum Chow ring has been recently extended also to smooth orbifolds and its degree zero part is usually called the stringy Chow ring. In [@AGV:02] the stringy Chow ring of $\overline{\mathcal{M}}_{1,1}$ has been computed, while in [@Spencer:06] the case of $\mathcal{M}_2$ is handled and that of $\overline{\mathcal{M}}_{2}$ is announced. Here instead we address the (small) quantum cohomology of the moduli spaces ${\overline{\mathcal{M}}_{0,n}}$ of stable $n$-pointed curves of genus $0$. Even though these spaces are smooth projective varieties with a quite explicit description, nonetheless their geometry turns out to be rather involved: indeed, just to quote a couple of astonishing facts, we mention that, despite the serious efforts by many valuable mathematicians, their ample (resp., effective) cone has been determined so far only for $n \le 7$ and $n \le 6$ resp. (see [@KMK:96] and [@HT:02] resp.). 
In the present paper we provide an explicit presentation of the small quantum cohomology ring in the case $n=5$ (see Corollary \[n=5\]) relying on previous work by Göttsche and Pandharipande ([@GP:98]) and we suggest a computational approach to the case $n=6$ (see Remark \[n=6\]) inspired by Gathmann [@G:01] (see also [@BM:04], where the small quantum cohomology of all del Pezzo surfaces is calculated, and [@B:04], where a general theorem about semisimplicity conservation under blowing-up of points is proved). The author is grateful to Gianfranco Casnati, Gianni Ciolli, and Barbara Fantechi for inspiring conversations and enlightening suggestions. Thanks are also due to Yuri I. Manin and Dan Abramovich for kindly pointing out references [@B:04], [@BM:04], and [@BK:05], respectively. Preliminaries ============= (Small) quantum cohomology {#quantum} -------------------------- Let $X$ be a smooth complex projective variety, let $T_0 = 1 \in A^0(X), T_1, \ldots,$ $T_m$ be a homogeneous basis of the graded vector space $V := H^*(X, {\mathbb{Q}})$. Let $T^0 = \mathrm{point}, T^1, \ldots, T^m \in V$ be the (Poincaré) dual basis and let $E \subset H_2(X, {\mathbb{Z}})$ denote the subset of effective curves. The (small) quantum cohomology ring of $X$ is a $\star$-product structure on $V \otimes R$, where $R$ is a formal power series ring and the quantum product $\star$ reduces to the usual cup product $\cup$ when all formal variables are set to zero. Namely, given classes $\alpha_1$ and $\alpha_2 \in H^*(X, {\mathbb{Q}})$, their quantum product is defined as follows: $$\alpha_1 \star \alpha_2 = \alpha_1 \cup \alpha_2 + \sum_{\beta \in E} \sum_{i=0}^m I_\beta(\alpha_1, \alpha_2, T_i) T^i q^{\beta}$$ where $I_\beta(\cdot,\cdot,\cdot)$ denotes the ($3$-point) Gromov-Witten invariant relative to class $\beta$ (see for instance [@FP:97], § 7). 
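For $X=\mathbb{P}^2$ the definition above can be made completely explicit: with the basis $T_0=1$, $T_1=H$, $T_2=\mathrm{point}$, a dimension count shows that for three single insertions only the classical term and the degree $d=1$ invariants can contribute, and the only nonzero enumerative input is $N_1=1$ (one line through two points). The following sketch (Python; an illustration of the definition, not taken from the paper) recovers the standard presentation $QH^*(\mathbb{P}^2)\cong\mathbb{Q}[H,q]/(H^3-q)$:

```python
# Classes are dicts {(k, m): coeff} meaning coeff * T_k * q^m,
# with T_0 = 1, T_1 = H, T_2 = point; the Poincare dual of T_k is T_{2-k}.

def gw3(a, b, c, d):
    """3-point genus-0 invariant I_d(T_a, T_b, T_c) on P^2 for d >= 1.
    Nonzero only when the codimensions sum to 3d + 2; since a + b + c <= 6,
    only d = 1 with insertions {H, pt, pt} survives, with value N_1 = 1."""
    if d == 1 and sorted((a, b, c)) == [1, 2, 2]:
        return 1
    return 0

def star(x, y):
    """Small quantum product computed directly from the definition."""
    out = {}
    def add(key, val):
        out[key] = out.get(key, 0) + val
    for (a, qa), ca in x.items():
        for (b, qb), cb in y.items():
            if a + b <= 2:                 # classical cup-product term
                add((a + b, qa + qb), ca * cb)
            for i in range(3):             # quantum correction, d = 1 only
                coef = gw3(a, b, i, 1)
                if coef:
                    add((2 - i, qa + qb + 1), ca * cb * coef)
    return {k: v for k, v in out.items() if v}

H = {(1, 0): 1}
PT = {(2, 0): 1}
```

One checks $H\star H=\mathrm{point}$, $H\star\mathrm{point}=q\cdot 1$ and $\mathrm{point}\star\mathrm{point}=qH$, so that $H^{\star 3}=q\cdot 1$.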
Moduli spaces of genus zero (stable) curves {#moduli} ------------------------------------------- The moduli space ${\overline{\mathcal{M}}_{0,n}}$ of stable $n$-pointed curves of genus $0$ is a smooth projective variety of dimension $n-3$ which can be explicitly obtained from $\mathbb{P}^{n-3}$ via the following construction due to Kapranov. For every $n \ge 4$, let $X \subset \mathbb{P}^{n-3}$ be a set of $n-1$ points in linear general position, let $B^0 := \mathbb{P}^{n-3}$ and for $i \ge 0$ let $B^{i+1} \to B^i$ be the blow-up of $B^i$ along the proper transforms of the $\binom{n-1}{i+1}$ $i$-planes through $i+1$ points. With the above notation, we have ${\overline{\mathcal{M}}_{0,n}}\cong B^{n-4}$ (see for instance [@V:02]). Let $P:= \{1,2, \ldots, n \}$ and for every $S \subset P$ with $2 \le \vert S \vert \le n-2$ let $\Delta_{\{0,S\}}$ be the boundary component of ${\overline{\mathcal{M}}_{0,n}}$ whose general element is the union of two copies of $\mathbb{P}^1$, labelled respectively by $S$ and $P \setminus S$, meeting at one point. We denote by $\delta_S$ the corresponding class in ${\mathrm{Pic}}({\overline{\mathcal{M}}_{0,n}})$ and we define inductively: $$\begin{aligned} {\mathscr B}_4 &:=& \{ \delta_{\{2,3\}} \}\\ {\mathscr B}_i &:=& {\mathscr B}_{i-1} \cup \{ \delta_B: B \subseteq \{1, \ldots, i \}, i \notin B \supseteq \{i-1, i-2 \} \}\\ & &\cup \{\delta_{B^c \setminus \{ i \}}: \delta_B \in {\mathscr B}_{i-1} \setminus {\mathscr B}_{i-2} \}.\end{aligned}$$ Then according to [@F:05], Proposition 1, ${\mathscr B}_n$ is a basis of ${\mathrm{Pic}}({\overline{\mathcal{M}}_{0,n}})$. The results =========== The case $n=5$ -------------- In order to determine the quantum cohomology ring we first need to compute the quantum product between two divisor classes.  \[main\] Let $X$ be $\mathbb{P}^2$ blown up at $4$ points in linear general position.
Let $H, E_1, \ldots, E_4$ denote the strict transform of the hyperplane class and the exceptional divisor classes respectively. If $\Delta_1, \Delta_2 \in H^2(X, {\mathbb{Q}})$ then their quantum product can be expressed as follows: $$\begin{aligned} \Delta_1 \star \Delta_2 &=& \Delta_1 . \Delta_2 - \sum_{i=1}^4 \Delta_1.E_i \Delta_2.E_i E_i q^{0, e_i} \\ & & + \sum_{1 \le i < j \le 4} \Delta_1.(H-E_i-E_j) \Delta_2.(H-E_i-E_j) (H+E_i+E_j)q^{1, e_i+e_j}\\ & &+ \sum \Delta_1.(H-\sum_{i=1}^4 \varepsilon_i E_i) \Delta_2.(H-\sum_{i=1}^4 \varepsilon_iE_i) q^{1, \sum_{i=1}^4 \varepsilon_ie_i}\\ & & + \sum \Delta_1. (2H-\sum_{i=1}^4 \varepsilon_iE_i) \Delta_2. (2H-\sum_{i=1}^4 \varepsilon_iE_i) q^{2,\sum_{i=1}^4 \varepsilon_ie_i}\end{aligned}$$ where the sums run over $\varepsilon_i \in \{0,1 \}$ and if $\beta = a H - \sum_{i=1}^4 b_i E_i$ we denote $q^{\beta}$ by $q^{a,(b_1,b_2,b_3,b_4)}$, writing $e_i$ for the $i$-th vector of the canonical basis of ${\mathbb{R}}^4$. By [@BP:04], Corollary 3.3, the effective cone of $X$ is generated by the following divisors: $$\begin{aligned} D_1 &=& H - E_1 - E_2 \\ D_2 &=& H - E_1 - E_3 \\ D_3 &=& H - E_1 - E_4 \\ D_4 &=& H - E_2 - E_3 \\ D_5 &=& H - E_2 - E_4 \\ D_6 &=& H - E_3 - E_4 \\ D_7 &=& E_1 \\ D_8 &=& E_2 \\ D_9 &=& E_3 \\ D_{10} &=& E_4\end{aligned}$$ Hence if $$\beta = d H - \sum_{i=1}^4 a_i E_i$$ is an effective curve, then we can write $$\beta = \sum_{i=1}^{10} c_i D_i$$ with $c_i \ge 0$ for every $i$, in particular we have $$d = \sum_{i=1}^6 c_i$$ By the divisor axiom, $$I_\beta(\Delta_1,\Delta_2,T_i) = (\Delta_1.\beta)(\Delta_2.\beta) I_\beta(T_i)$$ therefore we only need to compute $1$-point Gromov-Witten invariants. Since $$-K_X = \mathcal{O}_X(1) = 3H - E_1 - E_2 - E_3 - E_4$$ we deduce that the expected dimension of the corresponding moduli space of stable maps is $$\label{dimension} {\mathrm{vdim}}\overline{\mathcal{M}}_{0,1}(X, \beta) = - K_X.
\beta = \sum_{i=1}^{10} c_i$$ We have to consider separately the three cases $T_i = 1$, $T_i = D$ a divisor, and $T_i = \mathrm{point}$. It is a general fact that $I_\beta(1)=0$ (see for instance [@FP:97], § 7.(II)). Next, if $D$ is a divisor then $I_\beta(D) \ne 0$ only if ${\mathrm{vdim}}\overline{\mathcal{M}}_{0,1}(X, \beta)=1$; in particular we have $d \le 1$, since $d = \sum_{i=1}^6 c_i \le \sum_{i=1}^{10} c_i = {\mathrm{vdim}}\overline{\mathcal{M}}_{0,1}(X, \beta)$. If $d=0$, then $\beta$ is purely exceptional and from [@G:01], Lemma 2.3 (i), it follows that $I_\beta(D) \ne 0$ if and only if $\beta = E_i$ and $D = E_i$, with $I_{E_i}(E_i)=-1$. If instead $d=1$, then $\beta = H - E_i - E_j$ and $I_\beta(D) \ne 0$ if and only if $D$ is either $H$, or $E_i$, or $E_j$, and in each case $I_\beta(D)=1$. Finally, if $T_i = \mathrm{point}$ then $I_\beta(\mathrm{point}) \ne 0$ only if ${\mathrm{vdim}}\overline{\mathcal{M}}_{0,1}(X, \beta)=2$; in particular, arguing as above, we have $d \le 2$. If $d=0$, then $\beta$ is purely exceptional and $I_\beta(\mathrm{point}) = 0$ by [@G:01], Lemma 2.3 (i). If $d=1$, we have $I_\beta(\mathrm{point}) = 1$ for $\beta = H - \sum_{i=1}^4 \varepsilon_i E_i$ with $\varepsilon_i \in \{0,1 \}$ and $I_\beta(\mathrm{point}) = 0$ otherwise by [@GP:98], § 5.2 and § 3.(P4)–(P5). If $d=2$, we have $I_\beta(\mathrm{point}) = 1$ for $\beta = 2H - \sum_{i=1}^4 \varepsilon_i E_i$ with $\varepsilon_i \in \{0,1 \}$ and $I_\beta(\mathrm{point}) = 0$ otherwise by [@GP:98], § 5.2 and § 3.(P4)–(P5). Hence our claim follows. As a consequence, we can perform the computation we are interested in.
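The numerical facts underlying the dimension count above are easy to check mechanically. The following sketch (our illustration, not part of the original argument) encodes divisor classes on $X$ as coefficient vectors in the basis $(H, E_1, \ldots, E_4)$, where the intersection form is $\mathrm{diag}(1,-1,-1,-1,-1)$, and verifies that every generator $D_i$ of the effective cone is a $(-1)$-curve with $-K_X.D_i = 1$, which is exactly what makes ${\mathrm{vdim}} = -K_X.\beta = \sum_i c_i$:

```python
# Classes on X = Bl_4(P^2), written as coefficient vectors in the
# basis (H, E_1, E_2, E_3, E_4); the intersection pairing on this
# basis is diag(1, -1, -1, -1, -1).

def dot(u, v):
    """Intersection number of two divisor classes."""
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

minus_K = (3, -1, -1, -1, -1)  # -K_X = 3H - E_1 - E_2 - E_3 - E_4

# The ten generators of the effective cone ([BP:04], Corollary 3.3):
lines = [tuple([1] + [-1 if k in (i, j) else 0 for k in range(4)])
         for i in range(4) for j in range(i + 1, 4)]   # H - E_i - E_j
points = [tuple([0] + [1 if k == i else 0 for k in range(4)])
          for i in range(4)]                            # E_i
generators = lines + points

assert all(dot(minus_K, D) == 1 for D in generators)  # -K_X . D_i = 1
assert all(dot(D, D) == -1 for D in generators)       # each D_i is a (-1)-curve
# Hence, for beta = sum_i c_i D_i, the expected dimension
# vdim M_{0,1}(X, beta) = -K_X . beta equals sum_i c_i, as in the text.
```

Since each generator meets $-K_X$ positively, only finitely many effective classes satisfy any fixed bound on the expected dimension, a fact used again in the proof of the corollary below.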
\[n=5\] The small quantum cohomology ring of $\overline{\mathcal{M}}_{0,5}$ admits the following explicit presentation: $$QH^*_s(\overline{\mathcal{M}}_{0,5}) = \frac{{\mathbb{Q}}[q^{0, \sum_{i=1}^4 \varepsilon_i e_i}, q^{1,\sum_{i=1}^4 \varepsilon_i e_i}, q^{2, \sum_{i=1}^4 \varepsilon_i e_i}, \delta_{2,3}, \delta_{3,4},\delta_{1,5}, \delta_{2,5},\delta_{1,4}]}{(f_i^*)_{i=1, \ldots, 5}}$$ where $\varepsilon_i \in \{0,1\}$, $e_i$ denotes the $i$-th vector of the canonical basis of ${\mathbb{R}}^4$ and $$\begin{aligned} f_1^* &=& \delta_{2,3} \star \delta_{3,4}+ E_1 q^{0,(1,0,0,0)}-q^{1,(0,0,0,0)} -q^{1,(0,0,1,0)}-q^{1,(1,1,0,1)}\\ & &-q^{1,(1,1,1,1)}-4q^{2,(0,0,0,0)}-q^{2,(1,0,0,0)}-2q^{2,(0,1,0,0)}-4q^{2,(0,0,1,0)}\\ & &-2q^{2,(0,0,0,1)}-q^{2,(1,0,1,0)}-2q^{2,(0,1,1,0)}-q^{2,(0,1,0,1)}\\ & &-2q^{2,(0,0,1,1)}-q^{2,(0,1,1,1)}\\ f_2^* &=& \delta_{2,3} \star \delta_{2,5}-(H+E_2+E_3)q^{1,(0,1,1,0)}-q^{1,(0,1,0,0)} -q^{1,(0,1,1,0)}\\ & &+q^{1,(1,1,0,1)}+q^{1,(1,1,1,1)}-2q^{2,(0,1,0,0)}-q^{2,(1,1,0,0)} -2q^{2,(0,1,1,0)}\\ & &-q^{2,(0,1,0,1)}-q^{2,(0,1,1,1)} -q^{2,(1,1,1,0)}\\ f_3^* &=& \delta_{3,4} \star \delta_{1,4}+ E_2 q^{0,(0,1,0,0)}-q^{1,(0,0,0,0)} -q^{1,(0,0,0,1)}-q^{1,(1,1,1,0)}\\ & &-q^{1,(1,1,1,1)}-4q^{2,(0,0,0,0)}-2q^{2,(1,0,0,0)}-q^{2,(0,1,0,0)}\\ & &-2q^{2,(0,0,1,0)}-4q^{2,(0,0,0,1)}-q^{2,(1,0,1,0)}-2q^{2,(1,0,0,1)}-q^{2,(0,1,0,1)}\\ & &-2q^{2,(0,0,1,1)}-q^{2,(1,0,1,1)}\\ f_4^* &=& \delta_{1,5} \star \delta_{2,5}-(H+E_1+E_2)q^{1,(1,1,0,0)} -q^{1,(1,1,0,0)}-q^{1,(1,1,1,0)}\\ & &-q^{1,(1,1,0,1)}-q^{1,(1,1,1,1)}-q^{2,(1,1,0,0)}-q^{2,(1,1,1,0)} -q^{2,(1,1,0,1)}\\ & &-q^{2,(1,1,1,1)}\\ f_5^* &=& \delta_{1,5} \star \delta_{1,4}-(H+E_1+E_4)q^{1,(1,0,0,1)}-q^{1,(1,0,0,0)} -q^{1,(1,0,0,1)}\\ & &+q^{1,(1,1,1,0)}+q^{1,(1,1,1,1)}-2q^{2,(1,0,0,0)}-2q^{2,(1,0,0,1)} -q^{2,(1,1,0,0)}\\ & &-q^{2,(1,0,1,0)}-q^{2,(1,1,0,1)}-q^{2,(1,0,1,1)}\end{aligned}$$ From Keel’s results in [@K:92] we deduce the following presentation of the classical Chow ring of 
$\overline{\mathcal{M}}_{0,5}$ in terms of the basis ${\mathscr B}_5$ recalled in § \[moduli\]: $$A^*(\overline{\mathcal{M}}_{0,5}) = \frac{{\mathbb{Z}}[\delta_{2,3}, \delta_{3,4},\delta_{1,5}, \delta_{2,5},\delta_{1,4}]}{(f_i)_{i=1, \ldots, 5}}$$ where $$\begin{aligned} f_1 &=& \delta_{2,3}.\delta_{3,4} \\ f_2 &=& \delta_{2,3}.\delta_{2,5} \\ f_3 &=& \delta_{3,4}.\delta_{1,4} \\ f_4 &=& \delta_{1,5}.\delta_{2,5} \\ f_5 &=& \delta_{1,5}.\delta_{1,4} \end{aligned}$$ According to the Kapranov construction recalled in § \[moduli\], we can regard $\overline{\mathcal{M}}_{0,5}$ as $\mathbb{P}^2$ blown up at $4$ points in linear general position and obtain exactly as in [@V:02] the following identifications (here we take $5$ to be the special point): $$\begin{aligned} \delta_{2,3} &=& H - E_1 - E_4 \\ \delta_{3,4} &=& H - E_1 - E_2 \\ \delta_{1,5} &=& E_1 \\ \delta_{2,5} &=& E_2 \\ \delta_{1,4} &=& H - E_2 - E_3\end{aligned}$$ Notice moreover that by (\[dimension\]) we have $I_\beta(\alpha_1,\alpha_2,T_i) \ne 0$ only when $\sum_i c_i (-K_X.D_i)$ equals a fixed number; since $c_i \ge 0$ and $-K_X.D_i > 0$ for every $i$, only finitely many effective classes $\beta$ satisfy this constraint, hence there are only finitely many possible values for the exponents of the formal variables $q$ and the quantum cohomology ring turns out to be a polynomial ring. Hence our claim can be deduced from Theorem \[main\] by applying [@FP:97], § 10, Proposition 11 (see [@P:03], Chapter 3, for analogous computations). The case $n=6$ -------------- Here we have obtained only partial results. \[associativity\] Let $Y$ be the blow-up of a smooth projective threefold $X$ along a curve $C$ such that either $g(C) \ge 1$, or $g(C)=0$ and $-K_X.C \ge 0$. Then the associativity equations of the quantum product suffice to determine all (genus $0$) Gromov-Witten invariants of $Y$ in terms of those of $X$. In order to address Conjecture \[associativity\], one might wish to argue as in [@G:01], proof of Theorem 2.1.
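As an aside, the identifications used in the proof of Corollary \[n=5\] above admit a quick mechanical consistency check (our addition, purely illustrative): in the basis $(H, E_1, \ldots, E_4)$, with intersection form $\mathrm{diag}(1,-1,-1,-1,-1)$, the five products appearing in Keel's relations all vanish, as they must:

```python
# Intersection pairing on Bl_4(P^2) in the basis (H, E_1, ..., E_4):
def dot(u, v):
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

# Kapranov identifications from the proof of Corollary [n=5]:
delta = {
    (2, 3): (1, -1, 0, 0, -1),   # H - E_1 - E_4
    (3, 4): (1, -1, -1, 0, 0),   # H - E_1 - E_2
    (1, 5): (0, 1, 0, 0, 0),     # E_1
    (2, 5): (0, 0, 1, 0, 0),     # E_2
    (1, 4): (1, 0, -1, -1, 0),   # H - E_2 - E_3
}

# The five pairs occurring in Keel's relations f_1, ..., f_5:
pairs = [((2, 3), (3, 4)), ((2, 3), (2, 5)), ((3, 4), (1, 4)),
         ((1, 5), (2, 5)), ((1, 5), (1, 4))]
assert all(dot(delta[a], delta[b]) == 0 for a, b in pairs)
```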
Indeed, if both $\beta$ and $T$ are non-exceptional classes, then [@H:00], Theorem 1.5, would even imply $I_\beta^Y(T) = I_\beta^X(T)$ (but see [@BK:05], Remark 8, for a pertinent counterexample to the statement in [@H:00]). On the other hand, if $\beta$ is exceptional then $\beta = F$ and the only possibly nonzero invariants to be computed are $I_{dF}(d \varphi)$, where $F$ and $\varphi$ correspond to exceptional fibres. These invariants enumerate $d$-fold coverings of a fibre over a point in $C$, hence they are zero for $d \ge 2$ (otherwise a curve would lie in two different fibres). If instead $d=1$ then $I_F(\varphi)=-1$ (see for instance [@C:05], Lemma 2). Unfortunately, as far as we know, the analogue of [@G:01], Algorithm 2.4, is still missing. \[n=6\] From Conjecture \[associativity\] it would follow that all Gromov-Witten invariants of $\overline{\mathcal{M}}_{0,6}$ can be recursively computed. Indeed, as recalled in § \[moduli\], $\overline{\mathcal{M}}_{0,6}$ can be identified with $\mathbb{P}^3$ blown up at $5$ points in linear general position and along the lines through pairs of those points. In order to check that $-K.C \ge 0$, let $C$ be the strict transform of the line $l_{ab}$ through the points $a$ and $b$ and choose planes $\pi$, $\rho$ in $\mathbb{P}^3$ such that $l_{ab} = \pi.\rho$. If $p: \tilde{X} \to X$ denotes the blow-up of $l_{ab}$ we have $$\begin{aligned} p^* \pi &=& \tilde{\pi} + E_a + E_b \\ p^* \rho &=& \tilde{\rho} + E_a + E_b \\ K_{\tilde{X}} &=& K_{\mathbb{P}^3} + 2 \sum_i E_i + 2 \sum_{i,j} E_{ij} \end{aligned}$$ (see [@H:77], II., ex. 8.5) where $\tilde{\pi}$ and $\tilde{\rho}$ resp. are the strict transforms of $\pi$ and $\rho$ resp., while $E_i$ and $E_{ij}$ are the exceptional divisors corresponding to the points and the lines which have been previously blown up. Hence $$\begin{aligned} -K_{\tilde{X}}.C &=& (- K_{\mathbb{P}^3} - 2 \sum_i E_i - 2 \sum_{i,j} E_{ij}). \\ & & (p^* \pi - E_a - E_b) .
(p^* \rho - E_a - E_b) \\ &=& - K_{\mathbb{P}^3}.\pi.\rho - 2 E_a^3 - 2 E_b^3 \\ &=& \mathcal{O}_{\mathbb{P}^3}(4).l_{ab}-2-2 = 0\end{aligned}$$ (recall that if $p: \tilde{X} \to X$ is the blow up of a smooth threefold along a smooth curve with exceptional divisor $E$ then $E.p^*C=0$ for every curve $C \subset X$, see for instance [@C:05], Lemma 1). [99]{} D. Abramovich, T. Graber, and A. Vistoli: Algebraic orbifold quantum products. Orbifolds in mathematics and physics (Madison, WI, 2001), 1–24, Contemp. Math., 310, Amer. Math. Soc., Providence, RI, 2002. V. Batyrev and O. N. Popov: The Cox ring of a del Pezzo surface. Arithmetic of higher-dimensional algebraic varieties (Palo Alto, CA, 2002), 85–103, Progr. Math., 226, Birkhäuser Boston, Boston, MA, 2004. A. Bayer: Semisimple quantum cohomology and blowups. Int. Math. Res. Not. 2004, no. 40, 2069–2083. A. Bayer and Y. I. Manin: (Semi)simple exercises in quantum cohomology. The Fano Conference, 143–173, Univ. Torino, Turin, 2004. J. Bryan and D. Karp: The closed topological vertex via the Cremona transform. J. Algebraic Geom. 14 (2005), 529–542. G. Ciolli: On the quantum cohomology of some Fano threefolds and a conjecture of Dubrovin. Internat. J. Math. 16 (2005), 823–839. C. Fontanari: A remark on the ample cone of ${\overline{\mathcal{M}}_{g,n}}$. Rend. Sem. Mat. Univ. Politec. Torino 63 (2005), 9–14. W. Fulton and R. Pandharipande: Notes on stable maps and quantum cohomology. Algebraic geometry—Santa Cruz 1995, 45–96, Proc. Sympos. Pure Math., 62, Part 2, Amer. Math. Soc., Providence, RI, 1997. A. Gathmann: Gromov-Witten invariants of blow-ups. J. Algebraic Geom. 10 (2001), 399–432. L. Göttsche and R. Pandharipande: The quantum cohomology of blow-ups of $\mathbb{P}^2$ and enumerative geometry. J. Differential Geom. 48 (1998), 61–90. R. Hartshorne: Algebraic geometry. Graduate Texts in Mathematics, No. 52. Springer-Verlag, New York-Heidelberg, 1977. B. Hassett and Y. 
Tschinkel: On the effective cone of the moduli space of pointed rational curves. Topology and geometry: commemorating SISTAG, 83–96, Contemp. Math., 314, Amer. Math. Soc., Providence, RI, 2002. J. Hu: Gromov-Witten invariants of blow-ups along points and curves. Math. Z. 233 (2000), 709–739. S. Keel: Intersection theory of moduli space of stable $n$-pointed curves of genus zero. Trans. Amer. Math. Soc. 330 (1992), 545–574. S. Keel and J. McKernan: Contractible extremal rays on ${\overline{\mathcal{M}}_{0,n}}$. Pre-print alg-geom/9607009 (1996). D. Pontoni: Quantum cohomology of ${\mathrm{Hilb}}^2(\mathbb{P}^1 \times \mathbb{P}^1)$ and enumerative applications. Ph.D. Thesis, Padova, 2003. J. Spencer: The orbifold cohomology of the moduli of genus-two curves. Gromov-Witten theory of spin curves and orbifolds, 167–184, Contemp. Math., 403, Amer. Math. Soc., Providence, RI, 2006. P. Vermeire: A Counterexample to Fulton’s Conjecture on ${\overline{\mathcal{M}}_{0,n}}$. J. Algebra 248, 780–784 (2002).